Dataset schema:
Topic: stringclasses (9 values)
News_Title: stringlengths (10 to 120)
Citation: stringlengths (18 to 4.58k)
Paper_URL: stringlengths (27 to 213)
News_URL: stringlengths (36 to 119)
Paper_Body: stringlengths (11.8k to 2.03M)
News_Body: stringlengths (574 to 29.7k)
DOI: stringlengths (3 to 169)
Medicine
Cancer most frequently spreads to the liver; here's why
Hepatocytes direct the formation of a pro-metastatic niche in the liver, Nature (2019). DOI: 10.1038/s41586-019-1004-y, www.nature.com/articles/s41586-019-1004-y
http://dx.doi.org/10.1038/s41586-019-1004-y
https://medicalxpress.com/news/2019-03-cancer-frequently-liver.html
Abstract The liver is the most common site of metastatic disease 1 . Although this metastatic tropism may reflect the mechanical trapping of circulating tumour cells, liver metastasis is also dependent, at least in part, on the formation of a ‘pro-metastatic’ niche that supports the spread of tumour cells to the liver 2 , 3 . The mechanisms that direct the formation of this niche are poorly understood. Here we show that hepatocytes coordinate myeloid cell accumulation and fibrosis within the liver and, in doing so, increase the susceptibility of the liver to metastatic seeding and outgrowth. During early pancreatic tumorigenesis in mice, hepatocytes show activation of signal transducer and activator of transcription 3 (STAT3) signalling and increased production of serum amyloid A1 and A2 (referred to collectively as SAA). Overexpression of SAA by hepatocytes also occurs in patients with pancreatic and colorectal cancers that have metastasized to the liver, and many patients with locally advanced and metastatic disease show increases in circulating SAA. Activation of STAT3 in hepatocytes and the subsequent production of SAA depend on the release of interleukin 6 (IL-6) into the circulation by non-malignant cells. Genetic ablation or blockade of components of IL-6–STAT3–SAA signalling prevents the establishment of a pro-metastatic niche and inhibits liver metastasis. Our data identify an intercellular network underpinned by hepatocytes that forms the basis of a pro-metastatic niche in the liver, and identify new therapeutic targets. Main To understand the mechanisms that underlie the formation of a pro-metastatic niche in the liver, we used the LSL-Kras G12D /+ ;LSL-Trp53 R172H /+ ;Pdx1-cre (KPC) mouse model of pancreatic ductal adenocarcinoma (PDAC) 4 , 5 . We looked for features of a pro-metastatic niche in the livers of tumour-bearing KPC mice older than 16 weeks and of 8- to 10-week-old non-tumour-bearing (NTB) KPC control mice, which lack PDAC but harbour pancreatic intraepithelial neoplasia (PanIN) 6 . Compared to control mice, the livers of KPC mice contained increased numbers of myeloid cells, accompanied by an increase in the deposition and expression of fibronectin and type I collagen (COL1) (Fig. 1a , Extended Data Fig. 1a–d ). Orthotopic implantation of KPC-derived PDAC cells into wild-type mice recapitulated these changes (Extended Data Fig. 1e–i ). As shown previously 7 , 8 , matrix deposition did not require myeloid cells (Extended Data Fig. 1j–l ). These results are consistent with evidence that myeloid cell accumulation and extracellular matrix deposition are key components of a pro-metastatic niche 7 , 8 , 9 , 10 . Fig. 1: Primary PDAC development induces a pro-metastatic niche in the liver. a , Images and quantification of myeloid cells, fibronectin (FN), and COL1 in the liver. Arrows indicate Ly6G + cells. Numbers in parentheses on plots indicate the number ( n ) of mice. Data pooled from two experiments. TB, tumour-bearing; NTB, non-tumour-bearing. b , Images of the liver and quantification of PDAC–YFP cells. Control mice ( n = 14) and NTB KPC mice ( n = 10) were intrasplenically injected with PDAC–YFP cells, and the liver was analysed after 10 days. Data representative of two independent experiments. c , Scatter plot of transcriptome data. FPKM, fragments per kilobase of exon per million mapped fragments ( n = 5 for both groups). Scale bars, 50 μm ( a ) and 1 cm ( b ).
Statistical significance calculated using one-way analysis of variance (ANOVA) with Dunnett’s test ( a ) and two-tailed Mann–Whitney test ( b ). Data represented as mean ± s.d. Source data Full size image We next evaluated the susceptibility of the liver to metastatic colonization. Yellow fluorescent protein (YFP)-labelled KPC-derived PDAC cells (PDAC–YFP) 6 were injected into control mice and KPC mice. The metastatic burden was threefold higher in KPC mice, and metastatic lesions were detected in the livers of KPC mice at increased frequency and size with enhanced proliferation (shown using Ki-67) (Fig. 1b , Extended Data Fig. 2a, b ). Similar findings were observed using a YFP-negative KPC-derived cell line (Extended Data Fig. 2c, d ). Orthotopic implantation of PDAC cells also increased the susceptibility of the liver to metastatic colonization, and this finding was independent of the presence of CD4 + and CD8 + T cells (Extended Data Fig. 2e–s ). We next performed mRNA sequencing on RNA isolated from the livers of control and KPC mice. We identified 275 differentially expressed genes (Extended Data Fig. 3a, b , Supplementary Data 1 ) and found that genes upregulated in KPC mice were associated with immune-related processes (Extended Data Fig. 3c ). Notably, genes encoding myeloid chemoattractants, including SAA and members of the S100 family, were upregulated in KPC mice 11 , 12 , 13 (Fig. 1c , Extended Data Fig. 3d, e ). We also found enrichment of immune-related pathways, particularly the IL-6–JAK–STAT3 signalling pathway (Extended Data Fig. 3f , Supplementary Table 1 ). We validated our results by examining the livers of KPC mice for the presence of phosphorylated STAT3 (pSTAT3). STAT3 was activated in 80–90% of hepatocytes from KPC mice, compared to less than 2% of hepatocytes in control mice (Extended Data Fig. 3g, h ). By contrast, we did not detect activation of STAT1 signalling (Extended Data Fig. 3i ). Orthotopic implantation of PDAC cells also induced phosphorylation of STAT3 in hepatocytes (Extended Data Fig. 3j, k ). As IL-6 is fundamental to STAT3 signalling in hepatocytes 14 , we examined the livers of control mice ( Il6 +/+ ) and IL-6 knockout mice ( Il6 −/− ) orthotopically injected with PBS or PDAC cells. Tumour-implanted Il6 −/− mice displayed a decrease in STAT3 activation, particularly in hepatocytes (Fig. 2a , Extended Data Fig. 4a ). This loss in STAT3 activation was accompanied by reductions in myeloid cell accumulation and extracellular matrix deposition without alterations in the morphology and density of liver sinusoids (Fig. 2a and Extended Data Fig. 4a-d ). We also observed reduced expression of SAA, other chemoattractants, and extracellular matrix proteins (Fig. 2b , Extended Data Fig. 4e ). Genetic ablation of Il6 , however, did not alter proliferation, vascular density, or primary tumour growth (Extended Data Fig. 4f, g ). Il6 −/− mice were also less susceptible than control mice to metastatic colonization, and blockade of the IL-6 receptor (IL-6R) similarly inhibited the formation of a pro-metastatic niche in the liver (Fig. 2c–e , Extended Data Fig. 4h–s ). Notably, genetic ablation of Il6 or blockade of IL-6R did not completely inhibit STAT3 signalling, suggesting that IL-6-independent mechanisms may contribute to STAT3 activation. Fig. 2: IL-6 is necessary for the establishment of a pro-metastatic niche in the liver. a , b , n = 5 and 6 for Il6 +/+ mice and n = 4 and 5 for Il6 −/− mice orthotopically injected with PBS or PDAC cells, respectively. 
a , Quantification of pSTAT3 + cells, myeloid cells, and fibronectin. b , mRNA levels of Saa1 and Saa2 in the liver. c – e , n = 4 and 5 for Il6 +/+ mice and n = 4 for Il6 −/− mice orthotopically injected with PBS or PDAC cells, respectively. All groups were intraportally injected with PDAC–YFP cells on day 10. c , d , Images of liver and flow cytometric analysis. e , Quantification of PDAC–YFP cells. Data representative of two independent experiments ( a – e ). Scale bars, 1 cm. Statistical significance calculated using one-way ANOVA with Dunnett’s test. Data represented as mean ± s.d. Source data Full size image IL-6 promotes the development and progression of PDAC 15 , 16 , 17 , 18 . To identify the source of IL-6, we orthotopically injected PBS or PDAC cells into Il6 +/+ and Il6 −/− mice and measured the concentration of IL-6 at distinct anatomic sites (Extended Data Fig. 5a ). We detected IL-6 only in tumour-implanted Il6 +/+ mice, with the highest concentration of IL-6 found in the primary tumour (Extended Data Fig. 5b, c ). Although Il6 mRNA was undetectable in the liver, lung, and malignant cells, we observed Il6 mRNA in host cells adjacent to CK19-expressing PDAC cells (Extended Data Fig. 5d–g ). Human primary tumours displayed a similar expression pattern (Extended Data Fig. 5h ). Moreover, Il6 mRNA was detected in α-SMA + stromal cells located adjacent to PanIN and PDAC cells in KPC mice (Extended Data Fig. 5i–k ). We also found that primary pancreatic tumour supernatant activated STAT3 signalling in hepatocytes, and this was reduced in the presence of anti-IL-6R antibodies (Extended Data Fig. 6a, b ). These results show that IL-6 released by non-malignant cells within the primary tumour is a key mediator of STAT3 signalling in hepatocytes. To study a role for hepatocytes in directing liver metastasis, we generated mice that lacked Stat3 in hepatocytes ( Stat3 flox/flox Alb-cre ). Compared to control mice ( Stat3 flox/flox ), tumour-implanted Stat3 flox/flox Alb-cre mice lacked features of a pro-metastatic niche (Fig. 3a–c , Extended Data Fig. 6c ) and failed to produce SAA (Fig. 3d–f ). However, deletion of Stat3 in hepatocytes did not affect liver sinusoid density or morphology and did not alter the size, proliferation, or vascular density of the primary tumour (Extended Data Fig. 6d–f ). The livers of tumour-implanted Stat3 flox/flox Alb-cre mice were also less susceptible to metastatic colonization (Extended Data Fig. 6g–l ). In addition to its expression in hepatocytes (Extended Data Fig. 6m ), mRNA for SAA was detected in colonic cells 19 and in cells present in the periphery of the primary tumour (Extended Data Fig. 6n ). However, both cell types maintained comparable levels of SAA mRNA despite deletion of Stat3 in hepatocytes. Fig. 3: STAT3 signalling in hepatocytes orchestrates the formation of a pro-metastatic niche in the liver. a , Study design for b – f ( n = 4 for Stat3 flox/flox mice injected with PBS or PDAC cells; n = 8 and 7 for Stat3 flox/flox Alb-cre mice injected with PBS and PDAC cells, respectively). b , c , Quantification of pSTAT3 + cells, myeloid cells, and fibronectin. d , mRNA levels of Saa1 and Saa2 in the liver. e , Images of Saa1 and Saa2 mRNA in liver cells. Dashed lines and asterisks indicate sinusoids and hepatocytes, respectively. f , Concentration of circulating SAA. Data representative of two independent experiments ( a – f ). Scale bars, 50 μm. Statistical significance calculated using one-way ANOVA with Dunnett’s test. 
Data represented as mean ± s.d. Source data Full size image SAA proteins are acute phase reactants 20 . Consistent with elevated levels of circulating SAA in tumour-implanted mice (Fig. 3f ), patients with PDAC displayed elevated levels of circulating SAA (Extended Data Fig. 7a ). Overexpression of SAA and pSTAT3 by hepatocytes was also observed in five of seven patients with liver metastases (Fig. 4a , Extended Data Fig. 7b ). Notably, high levels of circulating SAA correlated with worse outcomes (Extended Data Fig. 7c ). Elevated levels of circulating SAA were also observed in patients with non-small-cell lung carcinoma (NSCLC) with liver metastases, and overexpression of SAA by hepatocytes was detected in the livers of patients with colorectal carcinoma (CRC) (Extended Data Fig. 7d, e ). In addition, compared to tumour-implanted control mice ( Saa +/+ ), double-knockout Saa1 −/− Saa2 −/− mice (hereafter referred to as Saa −/− mice) implanted with PDAC or MC-38 CRC cells failed to show features of a pro-metastatic niche in the liver, though genetic ablation of Saa1 and Saa2 had no effect on primary tumour growth (Fig. 4b–e , Extended Data Fig. 7f–s ). SAA was also necessary for IL-6-mediated formation of a pro-metastatic niche and for fibrosis and myeloid cell recruitment in the setting of liver injury (Extended Data Fig. 8 ). Fig. 4: SAA is a critical determinant of liver metastasis. a , Images of SAA in the livers of healthy donors (top) and patients with PDAC with liver metastases (bottom). Dashed lines and asterisks indicate sinusoids and hepatocytes, respectively. Data representative of one experiment. b , Quantification of pSTAT3 + cells, myeloid cells, and fibronectin ( n = 5 for all groups orthotopically injected with PBS or PDAC cells). For c – e , n = 4 and 5 for Saa +/+ mice and n = 5 and 6 for Saa −/− mice orthotopically injected with PBS and PDAC cells, respectively. All groups were intraportally injected with PDAC–YFP cells on day 10. c , d , Images of liver and flow cytometric analysis. e , Quantification of PDAC–YFP cells. Data representative of two independent experiments ( b – e ). Scale bars, 50 μm ( a ) and 1 cm ( c ). Statistical significance calculated using one-way ANOVA with Dunnett’s test. Data represented as mean ± s.d. Source data Full size image Tissue inhibitor of metalloproteinases 1 (TIMP1) 7 , 8 and macrophage migration inhibitory factor (MIF) 9 , 10 have been implicated in the promotion of metastasis. However, expression of these molecules was not affected by IL-6–STAT3–SAA signalling (Extended Data Fig. 9 ). We next determined whether formation of a pro-metastatic niche in the liver is dependent on the anatomical proximity of the pancreas to the liver. To this end, we looked for features of a pro-metastatic niche in the livers of CD45.1 and CD45.2 mice that were parabiotically joined (Extended Data Fig. 10a ). Although only CD45.2 mice were implanted with PDAC cells, both mice displayed myeloid cell accumulation and fibrosis in the liver (Extended Data Fig. 10b–g ), suggesting that formation of this niche is not dependent on the anatomical distance between the tumour and the liver. We also investigated whether SAA has a role in establishing a pro-metastatic niche in the lung. Development of PDAC in KPC mice induced accumulation of Ly6G + myeloid cells and deposition of fibronectin within the lung, but IL-6–STAT3–SAA signalling was not required for the formation of a pro-metastatic niche in the lung (Extended Data Fig. 10h–o ). 
Our data provide insight into the mechanisms that direct liver metastasis. Although recent studies have suggested a role for tumour-intrinsic factors in driving metastatic spread of cancer 7 , 8 , 9 , 10 , 21 , 22 , 23 , we provide evidence that inflammatory responses mounted by hepatocytes are critical to liver metastasis. Mechanistically, hepatocytes orchestrate this process through activation of IL-6–STAT3 signalling and the subsequent production of SAA, which alters the immune and fibrotic microenvironment of the liver to establish a pro-metastatic niche (Extended Data Fig. 10p ). Our findings suggest that therapies that target hepatocytes might prevent liver metastasis in patients with cancer. Methods Mice CD45.2 (wild type, C57BL/6J), CD45.1 (B6.SJL- Ptprc a Pepc b /BoyJ), Il6 knockout ( Il6 −/− , B6.129S2- Il6 tm1Kopf /J), Stat3 flox/flox (B6.129S1- Stat3 tm1Xyfu /J), and Alb-cre +/+ (B6.Cg-Tg(Alb-cre)21Mgn/J) mice were obtained from the Jackson Laboratory. Stat3 flox/flox mice were bred to Alb-cre +/+ mice to generate Stat3 flox /+ Alb-cre +/− mice, which were backcrossed onto Stat3 flox/flox mice to generate Stat3 flox/flox Alb-cre +/− mice. These mice were then bred to each other to create Stat3 flox/flox Alb-cre +/+ and Stat3 flox/flox Alb-cre +/− mice ( Stat3 flox/flox Alb-cre ), and Stat3 flox/flox Alb-cre −/− mice ( Stat3 flox/flox ). Kras LSL-G12D /+ Trp53 LSL-R172H /+ Pdx1-cre (KPC) mice and Trp53 LSL-R172H /+ Pdx1-cre (PC) mice were as previously described 4 , 5 . Saa1 and Saa2 double-knockout ( Saa −/− ) mice were as previously described 24 and provided by the University of Kentucky College of Medicine. Saa −/− mice used for experiments had been bred to obtain a 99.9% C57BL/6 background using the Jackson Laboratory Speed Congenic Service 24 . All transgenic mice were bred and maintained in the animal facility of the University of Pennsylvania. Animal protocols were reviewed and approved by the Institutional Animal Care and Use Committee of the University of Pennsylvania. In general, mice were monitored three times per week for general health and euthanized early based on defined endpoint criteria including tumour diameter ≥1 cm, ascites, lethargy, loss of ≥10% body weight, or other signs of sickness or distress. Clinical samples All patient samples were obtained after written informed consent and were de-identified. Studies were conducted in accordance with the 1996 Declaration of Helsinki and approved by institutional review boards of the University of Pennsylvania and the Mayo Clinic. To obtain plasma from healthy donors, patients with PDAC, and patients with NSCLC, peripheral whole blood was drawn in EDTA tubes (Fisher Scientific). Within 3 h of collection, blood samples were centrifuged at 1,600 g at room temperature for 10 min with the brake off. Next, the plasma was transferred to a 15-ml conical tube without disturbing the cellular layer and centrifuged at 3,000 g at room temperature for 10 min with the brake off. This step was repeated with a fresh 15-ml conical tube. The plasma was then stored at –80 °C until analysis. Biopsy results, computed tomography, and/or magnetic resonance imaging records were used to determine sites of metastasis in patients with PDAC or NSCLC whose plasma samples were used for assessment of SAA levels. Liver specimens from healthy donors were obtained by percutaneous liver biopsy, and acquisition of liver specimens from patients with liver metastases was as previously described 25 .
Liver specimens from patients with CRC with liver metastases were obtained from the Cooperative Human Tissue Network (CHTN). Patient characteristics are shown in Supplementary Table 2 . Cell lines The PDA.69 cell line (PDAC cells) was used for intrasplenic and orthotopic injection, and the PDA.8572 cell line (PDAC–YFP cells) was used for intrasplenic, intraportal, and retro-orbital injections. These cell lines were derived from PDAC tumours that arose spontaneously in KPC mice, as previously described 4 , 26 . The MC-38 cell line, which was used for orthotopic implantation, was purchased from Kerafast. Cell lines were cultured in DMEM (Corning) supplemented with 10% fetal bovine serum (FBS, VWR), 83 μg/ml gentamicin (Thermo Fisher), and 1% GlutaMAX (Thermo Fisher) at 37 °C, 5% CO 2 . Only cell lines that had been passaged fewer than 10 times were used for experiments, and trypan blue staining was used to ensure that cells with >95% viability were used for studies. Cell lines were tested routinely for Mycoplasma contamination at the Cell Center Services Facility at the University of Pennsylvania. All cell lines used in our studies tested negative for Mycoplasma contamination. Animal experiments For all animal studies, mice of similar age and gender were block randomized in an unblinded fashion. Male and female mice aged 8 to 12 weeks were used unless indicated otherwise. Mice were age- and gender-matched with appropriate control mice for analysis. Sample sizes were estimated based on pilot experiments and were selected to provide sufficient numbers of mice in each group for statistical analysis. For orthotopic and intrasplenic injections of pancreatic tumour cells, mice were anaesthetized using continuous isoflurane, and their abdomen was sterilized. After administering analgesic agents and assessing the depth of anaesthesia, we performed a laparotomy (5–10 mm) over the left upper quadrant of the abdomen to expose the peritoneal cavity. For orthotopic injection, the pancreas was exteriorized onto a sterile field, and sterile PBS or pancreatic tumour cells (5 × 10 5 cells suspended in 50 μl of sterile PBS) were injected into the tail of the pancreas via a 30-gauge needle (Covidien). Successful injection was confirmed by the formation of a liquid bleb at the site of injection with minimal fluid leakage. The pancreas was then gently placed back into the peritoneal cavity. For intrasplenic injection, 150 μl sterile PBS was drawn into a syringe and then sterile PBS or pancreatic tumour cells (5 × 10 5 cells suspended in 100 μl sterile PBS) was gently drawn into the same syringe in an upright position as previously described 27 . After the spleen was exteriorized onto a sterile field, pancreatic tumour cells were injected into the spleen via a 30-gauge needle. Successful injection was confirmed by whitening of the spleen and splenic blood vessels with minimal leakage of content into the peritoneum. Splenectomy was then performed by ligating splenic vessels with clips (Horizon) and then cauterizing them to ensure that there was no haemorrhage. Afterwards, the remaining blood vessels were placed back into the peritoneal cavity. For both procedures, the peritoneum was closed with a 5-0 PDS II violet suture (Ethicon), and the skin was closed using the AutoClip system (Braintree Scientific). Following surgery, mice were given buprenorphine subcutaneously at a dose of 0.05-0.1 mg/kg every 4–6 h for 12 h and then every 6–8 h for 3 additional days.
Mice that were orthotopically injected with pancreatic tumour cells were analysed after 20 days, unless indicated otherwise in study designs. Mice that were intrasplenically injected with PDAC cells were analysed after 10 days. For intraportal injection of pancreatic tumour cells and hydrodynamic injection of expression vectors, mice were anaesthetized using continuous isoflurane, and their abdomen was sterilized. After administration of analgesic agents, median laparotomy (10 mm) was performed, and the incision site was held open using an Agricola retractor (Roboz). After exposure of the peritoneal cavity, the intestines were located and exteriorized onto a sterile field surrounding the incision site to visualize the portal vein. Throughout the procedure, the intestines were kept hydrated with sterile PBS that was pre-warmed to 37 °C. For intraportal injection, sterile PBS or pancreatic tumour cells (5 × 10 5 cells suspended in 100 μl sterile PBS) were injected into the portal vein via a 30-gauge needle. Successful injection was confirmed by partial blanching of the liver. For hydrodynamic injection, 1 μg of pLIVE expression vectors was suspended in a volume of sterile saline corresponding to 8% of mouse body weight as previously described 28 . Vectors were injected into the portal vein via a 27-gauge needle within 5–8 s. Successful injection was confirmed by complete blanching and swelling of the liver. For both procedures, a piece of sterile gauze was then held over the injection site for 1 min to ensure that no injected contents would leak into the peritoneal cavity. Afterwards, the intestines were placed back into the peritoneal cavity, and the peritoneum and skin were closed with a suture and autoclips, respectively. Following surgery, mice were given buprenorphine subcutaneously as described above. Intraportal injection of pancreatic tumour cells was performed on day 10, and metastatic burden in the liver was evaluated on day 20, unless indicated otherwise in study designs. For orthotopic implantation of colorectal tumour cells, wild-type mice were first subcutaneously injected with MC-38 (1 × 10 6 cells suspended in 100 μl of sterile PBS) into the right flank. After 10 days, mice were euthanized, and subcutaneous tumours were collected. Tumours were then cut into small pieces, each 3 × 3 mm in size, and placed in sterile PBS on ice until implantation. Mice were anaesthetized using isoflurane, and their abdomen was sterilized. Following administration of analgesic agents, median laparotomy was performed as described above. Implantation of colorectal tumour tissues into the caecum was then performed as previously described 29 . After we placed the intestines back into the peritoneal cavity, the peritoneum and skin were closed with a suture, and mice were given buprenorphine as described above. Mice were analysed after 10 days. For parabiotic joining of mice, female CD45.2 mice were orthotopically injected with sterile PBS or pancreatic tumour cells as described above and co-housed with age-matched female B6 CD45.1 mice. Each parabiotic pair was housed in a separate cage to maximize bonding between partners. After one week, parabiotic partners were anaesthetized using continuous isoflurane, and their flanks were sterilized. After administration of analgesic agents, longitudinal skin flaps from the lower limb to the upper limb were created, and everted skin flaps were sewn using a suture.
In addition, the knees and olecranons of parabiotic partners were joined together using a suture for additional stabilization. Following surgery, mice were given buprenorphine subcutaneously at a dose of 0.05-0.1 mg/kg every 4-6 h for 5 days. Parabiotically joined mice were analysed after 20 days. For administration of antibodies, the abdomen of mice was sterilized, and anti-CD4 antibodies (GK1.5, 0.2 mg), anti-CD8 antibodies (2.43, 0.2 mg), anti-IL-6R antibodies (15A7, 0.2 mg), or rat isotype control antibodies (LTF-2, 0.2 mg) were suspended in 100 μl sterile PBS. Antibodies were subsequently injected into the peritoneum via a 30-gauge needle. All antibodies used in in vivo experiments were obtained from BioXCell. To deplete F4/80 + myeloid cells, clodronate-encapsulated liposomes (Liposoma) were administered by intraperitoneal injection according to the manufacturer’s protocol. For induction of liver injury, mice were intraperitoneally injected with CCl 4 (Sigma, 1 ml/kg body weight) dissolved in sunflower seed oil as previously described 30 . Detailed information on antibodies and reagents used in experiments can be found in Supplementary Table 3 . Microscopic analysis For preparation of formalin-fixed paraffin-embedded (FFPE) sections, dissected tissues were fixed in 10% formalin for 24 h at room temperature, washed twice with PBS, and then stored in 70% ethanol solution at 4 °C until they were embedded in paraffin and sectioned at 5 μm. For preparation of cryosections, dissected tissues were embedded in Tissue-tek O.C.T. (Electron Microscopy Sciences) and frozen on dry ice. Frozen tissues were stored at –80 °C until they were sectioned at 7 μm. Automated immunohistochemistry, immunofluorescence, and RNA in situ hybridization were performed on FFPE sections using a Ventana Discovery Ultra automated slide staining system (Roche). Reagents were obtained from Roche and ACDBio (Supplementary Table 3 ) and used according to the manufacturer’s protocol. Images were acquired using a BX43 upright microscope (Olympus), an Aperio CS2 scanner system (Leica), or an IX83 inverted multicolour fluorescent microscope (Olympus). Manual immunohistochemistry of mouse tissues for SAA was performed as previously described 31 . For manual multicoloured immunofluorescence staining, O.C.T. liver cryosections were briefly air dried and fixed with 3% formaldehyde at room temperature for 15 min. For intracellular staining, sections were permeabilized with methanol at –20 °C for 10 min immediately after formaldehyde fixation. Sections were then blocked with 10% normal goat serum in PBS containing 0.1% TWEEN 20 for 30 min. For intracellular staining, 0.3% Triton X-100 was added to the blocking solution for permeabilization of cellular and nuclear membranes. Sections were incubated with primary antibodies (Supplementary Table 3 ) in the blocking solution for 1 h at room temperature or overnight at 4 °C, followed by washing with PBS containing 0.1% TWEEN 20. Sections were then incubated with secondary antibodies (Supplementary Table 3 ) in the blocking solution for 1 h at room temperature or overnight at 4 °C. After washing, sections were stained with DAPI to visualize nuclei and subsequently with Sudan Black B in 70% ethanol to reduce autofluorescence, as previously described 32 . Immunofluorescence imaging was performed on an IX83 inverted multicolour fluorescent microscope (Olympus). For quantification of cells and extracellular matrix proteins, five random fields were acquired from each biological sample.
Flow cytometry Mice were euthanized, and the liver and lung were removed after the blood was drained by severing the portal vein and inferior vena cava. The liver and lung were rinsed thoroughly in PBS before mincing with micro-dissecting scissors into small pieces (<0.5 × 0.5 mm in size) at 4 °C in DMEM containing collagenase (1 mg/ml, Sigma-Aldrich), DNase (150 U/ml, Roche), and Dispase (1 U/ml, Worthington). Tissues were then incubated at 37 °C for 30 min with intermittent agitation, filtered through a 70-μm nylon strainer (Corning), and washed three times with DMEM. Cells were resuspended in ACK lysing buffer (Life Technologies) at room temperature for 15 min to remove red blood cells. After washing three times with DMEM, cells were counted and stained using the Aqua dead cell stain kit (Life Technologies) following the manufacturer’s protocol. For characterization of immune cell subsets, cells were washed three times with PBS containing 0.2 mM EDTA with 2% FBS and stained with appropriate antibodies (Supplementary Table 3 ). For quantification of PDAC–YFP cells, cells were not stained with any antibodies. Lastly, cells were washed three times with PBS containing 0.2 mM EDTA with 2% FBS and examined using a FACS Canto II (BD Biosciences). Collection and analysis of the peripheral blood were as previously described 26 . FlowJo (FlowJo, LLC, version 10.2) was used to analyse flow cytometric data and generate 2D t-SNE plots. Detection of IL-6, SAA, and TIMP1 Mice that were orthotopically implanted with PDAC cells were euthanized, and primary tumours were removed and weighed. In addition, blood samples were collected from the portal vein and left ventricle of the heart using a 27-gauge needle. Tumours were rinsed thoroughly in PBS and minced with micro-dissecting scissors into small pieces (<0.5 × 0.5 mm in size) at 4 °C in serum-free DMEM at 1 mg of tissue per 1 μl medium. Tumour suspensions were then centrifuged at 12,470 g at 4 °C for 15 min, and tumour supernatant was collected and stored at –80 °C until analysis. A similar procedure was performed to obtain pancreas supernatant from mice that were orthotopically injected with PBS. To collect the serum, blood samples were allowed to clot at room temperature for 30 min. Samples were then centrifuged at 12,470 g at 4 °C for 15 min, and the serum was collected and stored at –80 °C until analysis. IL-6 levels in tumour or pancreas supernatant and serum were assessed using a cytometric bead array (BD Biosciences) following the manufacturer’s protocol. Samples were examined using a FACS Canto II (BD Biosciences), and data were analysed using FCAP Array (BD Biosciences, version 3.0). SAA and TIMP1 levels in mouse serum samples were measured using a commercially available enzyme-linked immunosorbent assay kit (Thermo Fisher) following the manufacturer’s protocol. Similarly, SAA levels in plasma samples collected from healthy donors and patients with PDAC as described under ‘Clinical samples’ were measured using a commercially available human enzyme-linked immunosorbent assay kit (Thermo Fisher) following the manufacturer’s protocol. RNA and quantitative PCR Mouse organs and cells were stored in TRIzol (Thermo Fisher) at –80 °C until analysis. Samples were thawed on ice and allowed to equilibrate to room temperature before RNA was isolated using an RNeasy Mini kit (Qiagen) following the manufacturer’s protocol. cDNA synthesis was performed as previously described 33 .
Primers for quantitative PCR were designed using the Primer3 online program 34 , and sequences were analysed using Nucleotide BLAST (NCBI) to minimize non-specific binding of primers. Primers were synthesized by Integrated DNA Technologies, and their sequences can be found in Supplementary Table 4 . Quantitative PCR was performed as previously described 33 . Gene expression was calculated relative to Actb (β-actin) using the ∆ C t formula, and fold change in gene expression was calculated relative to the average gene expression of control groups using the ∆∆ C t formula. Genes with C t greater than or equal to 30 were considered not detected. QuantSeq 3′ mRNA sequencing and data analysis RNA was isolated from the livers of control mice and NTB KPC mice as described above and submitted to the Genomics Facility at the Wistar Institute. After the quality of RNA was assessed using a 2100 Bioanalyzer (Agilent), samples were prepared using a QuantSeq 3′ mRNA-Seq library prep kit FWD for Illumina (Lexogen) following the manufacturer’s protocol and analysed on a NextSeq 500 sequencing system (Illumina). FASTQ files were uploaded to the BaseSpace Suite (Illumina) and aligned using its RNA-Seq Alignment application (version 1.0.0), in which STAR was selected to align sequences with maximum mismatches set to 14 as recommended by Lexogen. Output files were analysed using the Cufflinks Assembly & DE application (version 2.1.0) in the BaseSpace Suite to determine differentially expressed genes, which were used to generate an expression heatmap and an FPKM scatter plot. In addition, these genes were analysed using ClueGO (version 2.3.3) 35 and CluePedia (version 1.3.3) 36 , which are applications of Cytoscape software (version 3.5.1) 37 . Functional grouping of biological processes was performed on the basis of kappa score. Gene Ontology data 38 , 39 downloaded on 23 January 2018 were used for analysis. Gene set enrichment analysis (version 3.0) 40 was used to determine biological processes that were differentially enriched in experimental groups. In vitro studies To isolate primary hepatocytes for in vitro studies, mice were anaesthetized using continuous isoflurane, and their abdomen was sterilized. After administering analgesic agents and assessing the depth of anaesthesia, we performed a laparotomy (10–15 mm) along the midline of the abdomen to expose the peritoneal cavity. The intestines were then located and exteriorized to visualize the inferior vena cava and portal vein. The inferior vena cava was cannulated via a 24-gauge Insyte Autoguard catheter (BD), and the liver was perfused using 50 ml liver perfusion medium (Thermo Fisher) at a flow rate of 8–9 ml/min using a peristaltic pump. At the start of perfusion, the portal vein was severed to drain the blood from the liver. Successful perfusion was confirmed by blanching of the liver, which was subsequently perfused using 50 ml liver digest medium (Thermo Fisher) at the same flow rate. Both liver perfusion medium and liver digest medium were pre-warmed to 42 °C in a water bath. After perfusion, the liver was carefully transferred to a Petri dish containing William’s E medium (Sigma) supplemented with 10% FBS, 83 μg/ml gentamicin, and 1% GlutaMAX. To dissociate hepatocytes from the liver, cell scrapers were used to create small cuts (5 mm) on the surface of the liver, and the tissue was gently shaken. Dissociated cells were then filtered through a 100-μm nylon strainer (Corning) and centrifuged at 50 g at 4 °C for 5 min.
After the supernatant was discarded, cells were resuspended in a solution consisting of isotonic Percoll (Sigma) and supplemented William’s E medium (2:3 ratio). Cells were then centrifuged at 50 g at 4 °C for 10 min to obtain a pellet enriched in hepatocytes. The supernatant was discarded, and hepatocytes were resuspended in supplemented William’s E medium. Cell viability and number were determined using trypan blue staining, and 5 × 10 4 hepatocytes were seeded in each well of a 48-well plate pre-coated with collagen. Hepatocytes were incubated in supplemented William’s E medium for 4 h at 37 °C, 5% CO 2 to allow attachment to the plate. The medium was then switched to HepatoZYME-SFM (Thermo Fisher) supplemented with 83 μg/ml gentamicin and 1% GlutaMAX. Medium was replenished every 24 h for the next 48–72 h. For hepatocyte activation assays, hepatocytes were incubated in supplemented HepatoZYME-SFM mixed with (i) serum-free DMEM, (ii) primary pancreatic tumour supernatant, or (iii) serum-free DMEM containing 250 ng/ml IL-6 (Peprotech) for 30 min at 37 °C, 5% CO 2 . All mixtures were made in a 1:1 ratio, and each condition was run in triplicate. For the in vitro IL-6R blockade experiment, hepatocytes were pre-incubated with 5 μg/ml anti-IL-6R antibodies for 2 h before being stimulated with tumour supernatant. After stimulation, medium was carefully removed, and formaldehyde and methanol were used to fix and permeabilize hepatocytes, respectively, as described above. Hepatocytes were then stained for pSTAT3 (Supplementary Table 3 ), and their nuclei were stained with DAPI. Immunofluorescence imaging was performed on an IX83 inverted multicolour fluorescent microscope (Olympus). Statistical analysis Statistical significance was calculated using Prism (GraphPad Software, version 7) unless indicated otherwise. Multiple comparisons testing was performed using one-way ANOVA with Dunnett’s test. Paired group comparisons were carried out using the two-tailed Wilcoxon matched-pairs signed-rank test. Unpaired group comparisons were performed using the two-tailed unpaired Student’s t-test or the two-tailed Mann–Whitney test. Comparison of Kaplan–Meier overall survival curves was performed using the log-rank (Mantel–Cox) test. P values less than 0.05 were considered significant. The experiments were not randomized and the investigators were not blinded to allocation during experiments and outcome assessment, unless stated otherwise. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability QuantSeq 3′ mRNA sequencing data have been deposited in the Gene Expression Omnibus (GEO) under accession number GSE109480 . Source Data are provided for all figures and extended data figures. All data are available from the corresponding author upon reasonable request.
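For readers who want to reproduce the qPCR analysis described in the Methods, the following is a minimal sketch of the standard ∆Ct/∆∆Ct (Livak) calculation relative to Actb, including the paper's Ct ≥ 30 detection cutoff. The function name and all Ct values are illustrative, not data from the study.

```python
import numpy as np

CT_CUTOFF = 30  # genes with Ct >= 30 were considered not detected

def fold_change(ct_gene, ct_ref, ct_gene_ctrl, ct_ref_ctrl):
    """Livak (ddCt) fold change of a target gene relative to a reference
    gene (Actb in the paper), normalized to the control-group average."""
    ct_gene = np.asarray(ct_gene, dtype=float)
    ct_gene = np.where(ct_gene >= CT_CUTOFF, np.nan, ct_gene)  # mask undetected
    dct = ct_gene - np.asarray(ct_ref, dtype=float)            # dCt per sample
    dct_ctrl = np.nanmean(np.asarray(ct_gene_ctrl, dtype=float)
                          - np.asarray(ct_ref_ctrl, dtype=float))
    return 2.0 ** (-(dct - dct_ctrl))                          # 2^(-ddCt)

# Hypothetical Ct values (three tumour-implanted vs three PBS-injected livers):
print(fold_change([22.1, 21.8, 22.5], [17.0, 16.9, 17.2],
                  [26.3, 26.8, 26.1], [17.1, 17.0, 16.8]))
# -> roughly 17- to 25-fold upregulation in this made-up example
```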
When cancer spreads to another organ, it most commonly moves to the liver, and now researchers at the Abramson Cancer Center of the University of Pennsylvania say they know why. A new study, published today in Nature, shows hepatocytes—the chief functional cells of the liver—are at the center of a chain reaction that makes the liver particularly susceptible to cancer cells. These hepatocytes respond to inflammation by activating a protein called STAT3, which in turn increases their production of other proteins called SAA, which then remodel the liver and create the "soil" needed for cancer cells to "seed." The researchers show that stopping this process by using antibodies that block IL-6—the inflammatory signal that drives this chain reaction—can limit the potential of cancer to spread to the liver. "The seed-and-soil hypothesis is well-recognized, but our research now shows that hepatocytes are the major orchestrators of this process," said senior author Gregory L. Beatty, MD, Ph.D., an assistant professor of Hematology-Oncology at Penn's Perelman School of Medicine. Jae W. Lee, an MD/Ph.D. candidate in Beatty's laboratory, is the lead author. For this study, the team first used mouse models of pancreatic ductal adenocarcinoma (PDAC), the most common type of pancreatic cancer and currently the third leading cause of cancer death in the United States. They found that nearly all hepatocytes showed STAT3 activation in mice with cancer, compared to less than two percent of hepatocytes in mice without tumors. They then partnered with investigators at the Mayo Clinic Arizona and other Penn colleagues to show that this same biology could be seen in patients with pancreatic cancer as well as in colon and lung cancer. Genetically deleting STAT3 only in hepatocytes effectively blocked the increased susceptibility of the liver to cancer seeding in mice. The team collaborated further with investigators at the University of Kentucky to show that IL-6 controls STAT3 signaling in these cells and instructs hepatocytes to make SAA, which acts as an alarm to attract inflammatory cells and initiate a fibrotic reaction that together establish the "soil." "The liver is an important sensor in the body," Lee said. "We show that hepatocytes sense inflammation and respond in a structured way that cancer uses to help it spread." The study also found that IL-6 drives changes in the liver whether there's a tumor present or not, implying that any condition associated with increased IL-6 levels—such as obesity or cardiovascular disease, among others—could affect the liver's receptiveness to cancer. Researchers say this provides evidence that therapies which target hepatocytes may be able to prevent cancer from spreading to the liver, a major cause of cancer mortality.
10.1038/s41586-019-1004-y
Physics
Scientists develop first implantable magnetic resonance detector
Jonas Handwerker et al. A CMOS NMR needle for probing brain physiology with high spatial and temporal resolution, Nature Methods (2019). DOI: 10.1038/s41592-019-0640-3
http://dx.doi.org/10.1038/s41592-019-0640-3
https://phys.org/news/2019-11-scientists-implantable-magnet-resonance-detector.html
Abstract Magnetic resonance imaging and spectroscopy are versatile methods for probing brain physiology, but their intrinsically low sensitivity limits the achievable spatial and temporal resolution. Here, we introduce a monolithically integrated NMR-on-a-chip needle that combines an ultra-sensitive 300 µm NMR coil with a complete NMR transceiver, enabling in vivo measurements of blood oxygenation and flow in nanoliter volumes at a sampling rate of 200 Hz. Main Methods based on nuclear magnetic resonance (NMR) are powerful analytical techniques in the life sciences, using nuclear spins as specific nanoscopic probes. Despite substantial advances in magnetic resonance (MR) hardware and methodology, NMR is still limited by its poor sensitivity (compared, for example, with optical methods), hindering in particular its use in the study of brain physiology and pathology. Recently, integrated circuit (IC)-based NMR systems have been introduced 1 , 2 , 3 , 4 , 5 to simplify the hardware complexity of MR experiments and to boost sensitivity. Integration of the MR detection coil with the transceiver on a single IC 4 , 5 laid the foundation for millimeter-size, sensitive MR systems for in situ and in vivo applications such as palm-size NMR spectrometry 1 and NMR spectroscopy of single cells 5 . Here, we present a monolithic needle-shaped NMR-on-a-chip transceiver (Fig. 1a,b ) that makes the advantages of IC-based NMR available for various applications in neuroscience. With its miniaturized on-chip coil, low-noise performance and compact, 450 µm-wide needle design, our NMR-on-a-chip transceiver simultaneously improves sensitivity as well as spatial and temporal resolution. In contrast to conventional microcoils 6 , 7 , the micrometer-scale interconnecting wires between the on-chip coil and the electronics combined with the fully differential design reduce the pickup of parasitic MR signals and electromagnetic interference. This enables interference-free in vivo experiments in a defined region of interest. Compared to conventional functional MR imaging (fMRI), the on-chip microcoil removes the need for time-consuming spatial encoding and allows for a continuous recording of MR signals in a nanoliter volume with millisecond resolution. Fig. 1: Schematic overview of the target application of the needle-shaped NMR-on-a-chip transceiver, the ASIC design and the experimental setup. a , The NMR needle is inserted into the target brain area, for example the somatosensory cortex, to perform localized and fast functional MR experiments. b , Fully integrated NMR-on-a-chip spectrometer with an on-chip planar broadband detection coil. The transceiver electronics include a low-noise receiver with quadrature demodulation, an H-bridge-based PA and a frequency synthesizer (containing a phase-frequency detector (PFD), a charge pump (CP) and a quadrature signal generator (IQ)). c , Experimental setup around the NMR needle: the ASIC is glued and bonded on a small carrier PCB and connected via a ribbon cable to the signal conditioning PCB. This setup can be mounted either on a carrier with a sample container and a conventional 8 mm surface coil as reference for system characterization, such as linewidth, sensitivity and SNR, and MR imaging (in vitro setup) or on an animal bed for neuronal experiments to measure changes in blood oxygenation and flow in rats (in vivo setup). 
The bed or carrier is placed inside a 14.1 T small-animal scanner and the system is completed by a commercial data-acquisition card and a LabVIEW-based console located in the control room. Full size image To achieve the required detection sensitivity in a form factor that is suitable for localized in vivo experiments in brain tissue, we realized a complete NMR spectrometer as a complementary metal-oxide-semiconductor (CMOS) application-specific integrated circuit (ASIC) (Fig. 1b ). This low-power (20 mW) NMR-on-a-chip transceiver features an on-chip, 24-turn, 300 µm outer diameter, transmit/receive (TX/RX) NMR coil. The RX path contains a complete quadrature receiver with an overall noise figure of 0.7 dB including a phase-locked loop (PLL)-based frequency synthesizer and protection switches for the low-noise amplifier (LNA). The TX path features an H-bridge power amplifier (PA) operating from a 3.3 V supply and driven by the on-chip PLL that produces a maximum coil current of 15 mA at 600 MHz. Owing to its amplitude and phase modulation capabilities, the on-chip electronics allow for the use of standard imaging sequences and spectroscopy techniques. In mechanical postprocessing, we first ground the manufactured chips down to a thickness of 100 µm and then shaped them as a needle with a wafer dicer. We used two different setups for in vitro characterization and for in vivo neuronal rat experiments in a 14.1 T small-animal scanner (Fig. 1c ). After first-order manual shimming, the NMR needle achieves a spectral linewidth of 12 Hz in a water phantom (Supplementary Fig. 1 ) and 53 Hz for in vivo experiments (Supplementary Fig. 2 ). We determined the sensitivity of the NMR needle using a three-dimensional gradient echo (3DGRE) sequence, resulting in a sensitive volume of 9.8 nl (Fig. 2a and Supplementary Fig. 3 ) and a time-domain spin sensitivity of \(2.0 \times 10^{13}{\,\mathrm{spins}}\,\mathrm{per}\,\sqrt {{\mathrm{Hz}}}\) . Compared to a conventional 8 mm surface coil, the NMR needle’s signal-to-noise ratio (SNR) per spin is 40 times higher ( Methods ). We obtained 3DGRE images of a polyimide phantom with an isotropic resolution of 13 µm in less than 15 min, demonstrating the excellent MR imaging capabilities of the NMR needle (Supplementary Fig. 4 ). Fig. 2: In vitro measurement of the sensitive volume and representative experimental results from in vivo rat forepaw stimulation experiments. a , Single-shot (that is, no averaging) 3DGRE image of the sensitive volume V sens of the NMR needle immersed in 10 mM Gd-doped water ( N = 1). b , Coronal anatomical MR image recorded with a conventional surface coil, showing the precise needle location (no averaging, N = 1). The inset shows an overlay from EPI fMRI with a contralateral activation from the stimulation of the left paw in the implantation region of the needle (average of N = 20 stimulation blocks). c , Axial anatomical MR image showing the precise needle location and implantation depth ( N = 1). The inset shows an overlay from EPI fMRI ( N = 20). The presented data for b and c are representative of 12 animals. d , Contralateral BOLD response showing activations in each of the 20 identical 30 s stimulation blocks of a T R = 5 ms acquisition sequence ( N = 1 block for each curve). The stimulation period t stim = 6 s in each block is indicated by the gray background. 
e , Mean μ and standard deviation σ of contralateral BOLD responses (average of N = 20 blocks) from EPI fMRI and NMR needle FIDs for stimulations of the left paw with different temporal (t) resolutions (for tSNR calculation see Methods and Supplementary Table 1 ). f , Ipsilateral BOLD responses ( N = 20) from EPI fMRI and NMR needle FIDs for stimulations of the right paw. g , Fit of Δ M 0, i from the functional measurements in e indicating the inflow effect for short T R ( N = 20). h , Fit of \({\mathrm{\Delta }}R_{2,{i}}^ \ast\) from the functional measurements in e for multiple T R ( N = 20). i , Combined plot of mean values μ from g and h ( N = 20). The presented data for d to i are representative of seven animals. Full size image As an in vivo benchmark application of our NMR sensing platform against conventional MR systems for neuronal measurements, we selected the detection of changes in blood flow and oxygenation in rats upon electrical forepaw stimulation. For this purpose, we slowly inserted the NMR needle 1.5 mm deep into the rat’s somatosensory cortex 8 ( Methods ). The in vivo setup (Fig. 1c ) allows for the recording of the typical NMR response after a pulse excitation, the so-called free induction decay (FID), with the NMR needle as well as conventional fMRI using echo planar imaging (EPI) with a surface coil. We also used the surface coil to determine the needle location via high-resolution anatomical MR imaging of the implantation region. The overlays of conventional EPI data on the anatomical MR images show the responses to the stimulation at the needle location (Fig. 2b,c ). The effect of hemodynamic changes on the time course of the FID acquired with the NMR needle is twofold. First, changes in cerebral blood flow (CBF) modulate the initial FID amplitude through a change of inflowing unsaturated blood into the sensitive volume of the coil. Furthermore, changes in local oxygenation of blood (BOLD effect) alter the decay rate \(R_{2}^{\ast}=1/T_{2}^{\ast}\) of the FID. To capture both effects, we calculated the area under each magnitude FID, obtaining a time series with a temporal resolution of up to 200 Hz. We corrected these time series further for temporal stability and physiological noise (Supplementary Fig. 5 ). A stimulation experiment to measure CBF and BOLD changes consisted of 20 identical 30 s blocks with a stimulation for t stim = 6 s, followed by a resting period of 24 s ( Methods ). A corrected time course for an NMR repetition time of T R = 5 ms shows a contralateral response to each of the 20 identical stimulations of the left paw (Fig. 2d ). The signal detected with the NMR needle upon stimulation has an amplitude around 1% and displays a very small delay with respect to the onset and the end of the 6 s stimulation. Signals measured with the needle at T R = 1 s have a similar lineshape, relative signal change Δ S / S and temporal SNR (tSNR) as the reference EPI time course (Fig. 2e ), while being recorded in a substantially lower sensitive volume (9.8 nl compared to a region of interest (ROI) of 12 µl). Compared to a single EPI voxel, the volume-normalized tSNR of the NMR needle at T R = 1 s is 150-fold increased (Supplementary Table 1 ). Increasing the sampling rate of the needle FIDs to 20 Hz and 200 Hz results in a faster tracking of hemodynamic changes. The ipsilateral responses to a stimulation of the right paw (Fig. 2f ) show no measurable effect in any of the measurements, which confirms that the signals measured in the contralateral cortex (Fig. 
2e ) represent hemodynamic responses. To separate the changes of local CBF and blood oxygenation in the contralateral responses, we fitted each individual i th FID time course to a physiological model and verified the results by numerical simulations ( Methods ). For a repetition time T R = 1 s, the blood within the sensitive region fully exchanges within one T R ; therefore, no inflow-related magnitude change Δ M 0, i was observed (Fig. 2g ). The Δ M 0, i for both short T R (5 ms and 50 ms) are around 0.5%, which corresponds to a change in CBF of about 15–30 ml per 100 g per min (Supplementary Fig. 6a ), or 13% to 25% assuming a baseline CBF of 120 ml per 100 g per min (ref. 9 ). This is in the lower range of reported values of 20% to 90% based on MR perfusion measurements 10 , 11 , most likely due to different anesthetics or a potential inclusion of larger vessels. Observed changes in local blood oxygenation \({\mathrm{\Delta }}R_{2,{i}}^ \ast\) are between 1.5 Hz and 2 Hz across all chosen T R (Fig. 2h ), which relates to a local oxygenation change around 15% to 20% (Supplementary Fig. 6b ). The measured changes in \({\mathrm{\Delta }}R_{2,{i}}^ \ast\) are comparable to quantitative \({\mathrm{\Delta }}T_2^ \ast\) measurements in humans and rats ranging between 1 Hz and 6 Hz (refs. 12 , 13 ). Despite the unprecedented temporal resolution of 5 ms, our results indicate neither the presence of an initial dip (a short and small BOLD signal decrease attributed to oxygenation decrease prior to any subsequent blood flow and oxygenation increase 14 ) nor a mismatch between CBF and oxygenation changes (Fig. 2i ). Our NMR needle targets the deep cortical layers of rodents where no initial dip was detected in a previous study 15 . Conventional fMRI studies with large voxel sizes often report a substantial mismatch between CBF and oxygenation changes 16 . However, combined optical measurements of CBF and oxygenation show that this mismatch is only visible at the venous side, but not at the capillary level or at the artery side 17 . Our data thus support, in agreement with optical measurements 17 , that the temporal mismatch between oxygenation and CBF changes is strongly reduced in deep cortical regions. Although preliminary in nature, our results demonstrate the power of CMOS-based in vivo MR experiments and the NMR needle’s potential for future applications in neuroscience. Applications may reveal currently unknown dynamics in the laminar-specific hemodynamic response and the underlying physiology of fMRI with layer-specific resolution, and even effects that are not related to hemodynamics. The NMR needle allows correlation of the continuously detectable and locally acquired MR signals with other recordings, such as local field potentials or optically detected changes in local calcium concentration at a comparable sampling bandwidth and spatial resolution. This provides the possibility of discovering novel effects or fingerprints of neuronal activation inside the continuously evolving MR magnetization. This might include the detection of local geometric changes, for example cell swelling, or the direct detection of bulk neuronal currents through their induction of a local magnetic field 18 , 19 . The sensitive volume of the NMR needle is comparable to the thickness of a single cortical layer, the extension of a cortical column or small subcortical nuclei.
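Before turning to the outlook, here is a rough illustration of the FID-based readout described above: each magnitude FID is integrated into one sample of a hemodynamic time series and fitted to a mono-exponential decay, S(t) = M0·exp(−R2*·t), to separate amplitude (inflow, ΔM0) from decay-rate (BOLD, ΔR2*) changes. This simplified model is a stand-in for the physiological model detailed in the paper's Methods; the function names and the synthetic data are ours, not the authors'.

```python
import numpy as np
from scipy.optimize import curve_fit

def fid_model(t, m0, r2s):
    """Mono-exponential magnitude FID: S(t) = M0 * exp(-R2* * t)."""
    return m0 * np.exp(-r2s * t)

def analyse_fids(fids, dt):
    """fids: (n_rep, n_samples) array of magnitude FIDs, one per T_R.
    Returns the area under each FID (the hemodynamic time series) and
    per-repetition estimates of M0 (inflow) and R2* (BOLD)."""
    t = np.arange(fids.shape[1]) * dt
    areas = fids.sum(axis=1) * dt
    m0 = np.empty(len(fids))
    r2s = np.empty(len(fids))
    for i, fid in enumerate(fids):
        popt, _ = curve_fit(fid_model, t, fid, p0=(fid[0], 30.0))
        m0[i], r2s[i] = popt
    return areas, m0, r2s

# Synthetic data: T_R = 5 ms (200 Hz sampling), one 30 s block with a 6 s
# "stimulation" producing a 1% M0 change and a 2 Hz R2* change, roughly
# matching the magnitudes reported above.
rng = np.random.default_rng(0)
t_r, dt, n_rep, n_samp = 5e-3, 50e-6, 6000, 64
stim = (np.arange(n_rep) * t_r) < 6.0
m0_true = 1.0 + 0.01 * stim
r2s_true = 40.0 - 2.0 * stim            # BOLD: higher oxygenation lowers R2*
t = np.arange(n_samp) * dt
fids = fid_model(t, m0_true[:, None], r2s_true[:, None])
fids += 1e-3 * rng.standard_normal(fids.shape)
areas, m0_fit, r2s_fit = analyse_fids(fids, dt)
tsnr = areas.mean() / areas.std()       # crude temporal SNR of the series
```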
Moreover, the scalable CMOS design is well suited to form an array of small coils along the shaft of the needle to collect signals from different cortical layers simultaneously with individual coils and without the need to move the sensor. Additionally, the NMR needle can be extended by an array of electrophysiological electrodes individually connected to integrated preamplifiers for (multisite) in vivo electrical stimulation and recording 20 , 21 on a single sensor chip, thus opening up the path for future multi-modal brain sensing platforms. As the presented NMR needle achieves a similar spatiotemporal resolution as electrophysiology or optical brain recording while offering the specificity and versatility of NMR, IC-based in vivo NMR is a promising approach to close the gap between these complementary imaging modalities. Therefore, we believe that our approach can help to disentangle physiologic processes within the neural network and that this technology can potentially uncover MR effects beyond the conventional hemodynamic signal responses to provide even deeper insights. Methods CMOS chip design and operation The needle-shaped NMR-on-a-chip transceiver has been fabricated in a 130 nm CMOS technology from GlobalFoundries. The postprocessed chips are 3,000 µm long, 450 µm wide and 100 µm thick. The microcoil is located at the needle tip, which is formed in a mechanical postprocessing step, while the bondpads are placed on the opposite end to allow for an implantation depth up to 2 mm. The transceiver electronics substantially improve and extend a previously published version of an NMR-on-a-chip transceiver 4 for the target application of this paper. Here, the extreme form factor required special care in the design of all electrical interconnects to avoid an undesirable coupling into the RX path from the TX path or the frequency synthesizer. The on-chip RX path is fully differential to suppress Hall and magnetoresistive effects inside the strong B 0 field and incorporates a current-reuse LNA with a common mode feedback. The LNA provides an input referred voltage noise of \(1.26\,{\mathrm{nV}}\,\mathrm{per}\,\sqrt {{\mathrm{Hz}}}\) over a bandwidth from 30 MHz to 700 MHz. At 600 MHz, the LNA degrades the intrinsic on-chip coil SNR by 9% corresponding to a noise figure of 0.7 dB. Importantly, in contrast to conventional MR coil arrays, the high-impedance on-chip LNA provides an efficient coil decoupling, allowing for an arbitrary placement of multiple coils along the needle for future microcoil arrays. Active quadrature Gilbert cell mixers follow the LNA and demodulate the detected NMR signal at f NMR to the desired low to intermediate frequency (low-IF) f IF in the range of 10 kHz to 100 kHz. The signals at the low-IF are further amplified and converted to single-ended signals v out,I and v out,Q in the buffer stage to minimize the number of required bondpads. The TX path operates at 3.3 V compared to the 1.5-V RX supply to increase the maximum coil current for pulsed excitation. The H-bridge PA is a nonresonant design to maximize the current in the NMR coil for a given supply voltage 22 and can be disabled during RX operation without requiring additional series switches, which would otherwise decrease the TX performance. 
The TX signal and the quadrature local oscillator signals f LO,sin and f LO,cos for the RX path are generated from a low-frequency reference using an integer- N PLL, enabling micrometer-length radio frequency interconnects and facilitating the electrical connection of the needle. Frequency shift keying (FSK) of the PLL reference f ref allows for on-resonance excitation and low-IF RX operation outside the 1/ f noise region of the receiver. In vitro setup Each postprocessed NMR needle was glued to a carrier printed circuit board (PCB), with two-thirds of the chip extending beyond the PCB edge to enable implantation. The ASIC was wire-bonded onto the carrier PCB, which was in turn connected to the 6 × 3 cm 2 signal-conditioning PCB containing amplifiers for the NMR signals, clock and signal buffers, and the power supply for the ASIC. The same assembly was used in the in vitro and the in vivo setups (Fig. 2b), both being designed for operation inside a 14.1 T, 26 cm horizontal-bore magnet (Magnex Scientific). The in vitro setup features a sample basin with a diameter of 13 mm and a height of 7 mm, which was filled with 700 µl of deionized or 10 mM gadolinium (Gd)-doped water. Conventional planar coils with 8 mm and 10 mm diameters were placed below the basin and interfaced to the standard BioSpec spectrometer (Bruker BioSpin). Simulation and measurement of the sensitive volume Finite-element electromagnetic simulations of the on-chip coil's unitary B field, B u, were carried out to characterize its inhomogeneity (Supplementary Fig. 7a–c), which leads to a nonuniform flip angle distribution in the sample during TX and a nonuniform sensitivity during RX 23. The flip angle was selected by choosing the output current of the H-bridge amplifier via TX supply modulation and an appropriate pulse length. The resulting signal intensity versus pulse length for the maximum TX supply of 3.3 V (Supplementary Fig. 7d) has its peak value at 13 µs. Because the planar microcoil has an inhomogeneous field distribution, its sensitive volume was defined consistently throughout the study as follows. An image slice parallel to the coil surface at a distance of 0.1 d coil, where d coil is the diameter of the coil, is selected, in which the signal ROI is defined as a centered square with a side length of 0.5 d coil. The image signal \(\hat S\) is determined from the mean μ of the individual voxel intensities I S ,1 ,…, I S , i within the signal ROI according to \(\hat S = \mu \left( {I_{S,1}, \ldots ,I_{S,i}} \right)\). The sensitive volume is then defined as the volume with a signal amplitude of at least 10% of \(\hat S\), resulting in a simulated sensitive volume of the microcoil of 9.5 nl (Supplementary Fig. 3a–d). The simulation results were validated experimentally in 10 mM Gd-doped water. The nutation curve was measured in simple pulse-acquire experiments (Supplementary Fig. 7d). The sensitive volume was assessed in a 3DGRE imaging experiment using T R = 30 ms, pulse time T P = 10 μs, echo time T E = 4.77 ms, acquisition time T acq = 5.1 ms, number of averages N avg = 1, matrix size 128 × 128 × 128, isotropic voxel size 13 µm, field of view (FOV) 1.7 × 1.7 × 1.7 mm 3, scan time 8 min 12 s and manual shim. The measured sensitive volume was 9.8 nl, in good agreement with the simulation.
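The sensitive-volume definition above is straightforward to operationalize on a reconstructed 3D magnitude image; the sketch below is a minimal implementation with illustrative variable names (the slice geometry and the 10% threshold follow the definition in the text; the array layout is an assumption).

```python
import numpy as np

def sensitive_volume_nl(img, voxel_nl, d_coil_vox):
    """Sensitive volume per the paper's definition.

    img        : 3D magnitude image with the coil surface at index 0 of axis 0
    voxel_nl   : volume of one voxel in nanoliters (13 um isotropic -> ~2.2e-3 nl)
    d_coil_vox : coil diameter expressed in voxels
    """
    ref = img[int(round(0.1 * d_coil_vox))]            # slice at 0.1 * d_coil
    cy, cx = ref.shape[0] // 2, ref.shape[1] // 2
    h = int(round(0.25 * d_coil_vox))                  # half side of the 0.5*d_coil ROI
    s_hat = ref[cy - h:cy + h, cx - h:cx + h].mean()   # reference signal S_hat
    return np.count_nonzero(img >= 0.1 * s_hat) * voxel_nl
```

For the 13 µm isotropic voxels used here, one voxel is (13 µm) 3 ≈ 2.2 × 10 −3 nl, so the reported 9.8 nl corresponds to roughly 4,500 voxels above the 10% threshold.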
B 0 map and manual shim The B 0 map in Supplementary Fig. 8 was recorded with the in vitro setup using a 10 mm surface coil underneath the deionized-water-filled basin, using two 3DGRE sequences with different echo times T E1 and T E2 with T R = 47.86 ms, flip angle FA = 30°, T E1 = 2.56 ms, T E2 = 5.88 ms, T acq = 2.15 ms, N avg = 1, matrix size 256 × 175 × 256, isotropic voxel size 45 µm, FOV 11.5 × 7.9 × 11.5 mm 3, scan time 35 min 44 s and global shim. The NMR needle causes local susceptibility variations on the order of ±500 Hz. To improve the linewidth, and thereby the needle's frequency-domain SNR, a manual shim procedure was used in which the three first-order shim gradients were iteratively changed until a minimum linewidth was found. Modifying higher-order shim gradients did not result in a significant improvement. The resulting x gradient was between −11,000 Hz cm −1 and −8,000 Hz cm −1, while the y and z gradients were between −2,000 Hz cm −1 and +2,000 Hz cm −1. The B 0 map in Supplementary Fig. 8b has a field gradient in the x direction of −10,000 Hz cm −1, which is in good agreement with the results found during the manual shim procedure. A spectral linewidth of 12 Hz (corresponding to 0.02 ppm) was achieved on a water phantom (Supplementary Fig. 1b). Under in vivo conditions, the intrinsic relaxation time \(T_2^ \ast\) of brain tissue is about 20 ms to 30 ms (corresponding to a linewidth of 16 Hz to 11 Hz) and can be even shorter in regions with a high blood volume fraction. Thus, the needle linewidth of 12 Hz does not substantially degrade the intrinsic tissue linewidth for in vivo NMR experiments. Image SNR and system sensitivity The sensitivity of the described microcoil was compared with that of a conventional 8 mm-diameter surface coil. The time-domain spin sensitivity is defined as the minimum detectable number of spins with an SNR of three in 1 s of measurement time. This is a suitable figure of merit for comparing coil sensitivities because it directly relates to the minimum detectable voxel size. It can be computed from the image SNR per spin, which in turn can be determined from a single magnitude image according to the National Electrical Manufacturers Association Standards Publication MS 1-2008 (ref. 24), as detailed in Anders et al. 3. For the 8 mm coil, 3DGRE imaging with T R = 30 ms, FA = 35°, T E = 4.77 ms, T acq = 1.28 ms, N avg = 1, matrix size 128 × 128 × 128, isotropic voxel size 100 µm, FOV 13 × 13 × 13 mm 3 and scan time 8 min 12 s was used (Supplementary Fig. 9b). Using the definition of \(\hat S\), a noise ROI (10 × 10 voxels in each of the four corners) to determine the image noise \(\hat \sigma _{\mathrm{N}}\) and following the procedure of Anders et al. 3, the time-domain spin sensitivities for both coils were determined. With SNR needle = 68.9 and SNR coil = 383 from Supplementary Fig. 9, voxel sizes \(\Delta_{{\mathrm{needle}}}^3 = \left( {13\,{{\upmu {\mathrm{m}}}}} \right)^3\) and \(\Delta_{{\mathrm{coil}}}^3 = \left( {100\,{{\upmu {\mathrm{m}}}}} \right)^3\), T ACQ,needle = 5.1 ms, T ACQ,coil = 1.28 ms ( T R = 30 ms, T E = 4.77 ms, number of phase encoding steps N PE = 128 × 128, N avg = 1, spin density \(N_{{\mathrm{s}},{\mathrm{H}}_2{\mathrm{O}}} = 6.7 \times 10^{28}\,{\mathrm{spins}}\,{\mathrm{m}}^{-3}\) for both), the time-domain spin sensitivities of the two coils are \(8.0 \times 10^{14}\,{\mathrm{spins}}\,{\mathrm{per}}\,\sqrt {{\mathrm{Hz}}}\) for the 8 mm coil and \(2.0 \times 10^{13}\,{\mathrm{spins}}\,{\mathrm{per}}\,\sqrt {{\mathrm{Hz}}}\) for the NMR needle.
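As a consistency check, the sketch below reproduces both sensitivity values from the quantities listed above. The formula (spins per voxel, scaled by the square root of the total sampling time and divided by the image SNR) is our reading of the Anders et al. procedure; the exact normalization convention is therefore an assumption, but it recovers the two reported numbers.

```python
import math

N_S = 6.7e28       # proton spin density of water (spins per m^3)
N_PE = 128 * 128   # phase-encoding steps
N_AVG = 1

def spin_sensitivity(snr_img, voxel_m, t_acq_s):
    """Time-domain spin sensitivity in spins per sqrt(Hz): spins in one voxel,
    referred to 1 s of measurement via the total signal-sampling time."""
    spins_per_voxel = N_S * voxel_m**3
    t_total = N_PE * N_AVG * t_acq_s
    return spins_per_voxel * math.sqrt(t_total) / snr_img

print(f"needle:    {spin_sensitivity(68.9, 13e-6, 5.1e-3):.1e} spins/sqrt(Hz)")    # ~2.0e13
print(f"8 mm coil: {spin_sensitivity(383.0, 100e-6, 1.28e-3):.1e} spins/sqrt(Hz)") # ~8.0e14
```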
This corresponds to a 40-fold improvement in spin sensitivity and a 1,600-fold improvement in imaging time for the presented NMR needle compared with the surface coil. MR imaging The imaging capabilities of the NMR needle were demonstrated using a polyimide foil phantom with 50 × 50 μm 2 laser-cut square openings (Supplementary Fig. 4). This phantom was immersed in 10 mM Gd-doped water and the needle was placed in close proximity to the foil. A 3DGRE image with an isotropic resolution of 13 µm was recorded using T R = 50 ms, T P = 8 μs, T E = 4.77 ms, T acq = 5.1 ms, N avg = 1, matrix size 128 × 128 × 128, isotropic voxel size 13 µm, FOV 1.7 × 1.7 × 1.7 mm 3, scan time 13 min 39 s and manual shim. Animal preparation Fifteen healthy, anesthetized rats (Sprague-Dawley, male, 402 ± 49 g, 11 ± 2 weeks) were examined in the 14.1 T small-animal scanner. The study was approved by the local authorities (Regierungspräsidium Tübingen, Germany) and was in full compliance with the guidelines of the European Community for the care and use of laboratory animals. The rats were anesthetized with urethane (1.2 g kg −1 body weight, with additional doses if necessary to maintain anesthesia). Body temperature was monitored via a rectal probe and kept constant at around 37 °C by a heating pad. Breathing rate, oxygen saturation and heart rate were monitored throughout surgery and experiment with a pulse oximeter (MouseOx, Starr Life Sciences). During surgery, the animal was supplied with a gas mixture of two-thirds nitrous oxide and one-third oxygen for additional analgesia. The head was shaved, disinfected, fixed in a stereotactic frame and treated with a local analgesic (lidocaine). The skull was removed around the somatosensory cortex, 3.5 mm off the midline and 0.5 mm posterior to the bregma 8. The NMR needle was attached to a holder, which was fixed to the skull with bone cement. The needle was then slowly inserted between 1.5 mm and 2 mm into the brain. One animal died during surgery, and a second sustained a head trauma during surgery; no experiments were conducted with these two animals. For the MR measurements, the animal was positioned in an MR-compatible bed with ventilation through a controlled pumped-air facemask. A conventional oval-shaped 20 × 30 mm 2 surface coil was attached horizontally around the implanted needle. Two pairs of needle electrodes were placed between the toes of both anterior paws for peripheral sensory stimulation. Protocol for in vivo MR experiments The position of the needle in the brain after insertion was confirmed via anatomical MR images acquired with the surface coil (3DGRE, matrix size 384 × 384, FOV 45 × 40 mm 2, eight 1 mm slices, T E = 2.89 ms, T R = 200 ms, FA = 20°). In all functional experiments, a 6 s stimulus (9 Hz, 300 µs pulses, 2.5 mA), interleaved with 24 s rest periods, was repeated 20 times. In combination with an initial 20 s rest period with dummy pulses to reach the equilibrium magnetization, this resulted in a total duration of 620 s for one measurement. Conventional functional experiments were performed by transmitting and receiving with the surface coil. A GRE EPI sequence was used (matrix size 64 × 48, FOV 43 × 38 mm 2, eight 1 mm slices, T E = 9 ms, T R = 1,000 ms, bandwidth 300 kHz). At this stage, one animal had the needle implanted outside the target area, as confirmed by anatomical images.
A second animal did not show any response to the stimulation, neither in conventional EPI nor in the needle experiments, most likely owing to inadequate anesthesia, which is also a common reason for unsuccessful experiments in conventional fMRI 25. In three experiments, no valid signal could be measured with the NMR needle, which was found to be caused by mechanical stress at the CMOS–PCB interface during implantation. This was eliminated in subsequent experiments by an improved probe head design and a different epoxy glue (EPO-TEK 353ND-T, Epoxy Technology Inc.). In all remaining in vivo experiments, the NMR needle experiments were successful, and all functional experiments were carried out. Here, in seven animals, a change in CBF and blood oxygenation could be observed, while in one animal no activation could be measured. In that unsuccessful experiment, the needle, although near the relevant region, was most likely not close enough to the active brain area. The functional NMR needle experiments were performed using pulse-acquire sequences with 10 µs pulse length without gradients. Different repetition times of T R = 1,000 ms, 50 ms and 5 ms were used, corresponding to 600, 12,000 and 120,000 FIDs per experiment. The complex quadrature time-domain signals were sampled at 2 MS s −1 and 16 bit resolution, saved as raw data and evaluated offline after the experiment. Modeling of blood flow and oxygenation changes Neuronal activation triggers a cascade of hemodynamic changes, such as increased local CBF and blood oxygenation. To quantify these changes, the relation between the individual FID i and the mean FID mean was assumed to be $${\mathrm{FID}}_i\left( t \right) = \frac{{M_{0,i}}}{{M_{0,{\mathrm{mean}}}}} \times {\mathrm{FID}}_{{\mathrm{mean}}}\left( t \right) \times \exp \left( {\frac{t}{{{\mathrm{\Delta }}T_{2,i}^ \ast }}} \right),$$ where i = 1,…, n indexes the FIDs, t is the time elapsed after the excitation pulse and \({\mathrm{\Delta }}T_{2,i}^ \ast\) is the absolute change of \(T_2^ \ast\) of the i th FID. The factor M 0, i / M 0,mean models the relative change of the initial amplitude of each FID i caused by the inflow effect, resulting in \({\mathrm{\Delta }}M_{0,i} = \left( {M_{0,i}/M_{0,{\mathrm{mean}}}} \right) - 1 > 0\) during activation. An increased oxygenation level prolongs the decay of the FID i (that is, \(R_{2,i}^ \ast < R_{2,{\mathrm{mean}}}^ \ast\)), resulting in $${\mathrm{\Delta }}T_{2,i}^ \ast = 1/{\mathrm{\Delta }}R_{2,i}^ \ast = 1/(R_{2,{\mathrm{mean}}}^ \ast - R_{2,i}^ \ast ) > 0$$ To estimate Δ M 0, i and \({\mathrm{\Delta }}R_{2,i}^ \ast\), an exponential fit was applied to $$M_{0,i} \exp \left( {{\mathrm{\Delta }}R_{2,i}^ \ast t} \right) = M_{0,{\mathrm{mean}}} \frac{{{\mathrm{FID}}_i\left( t \right)}}{{{\mathrm{FID}}_{{\mathrm{mean}}}\left( t \right)}}$$ for each individual FID. A two-compartment model consisting of extravascular and intravascular space, with corresponding T 1 relaxation times of 2.46 s and 3.16 s (ref. 26), was used to simulate the inflow-related signal increase for different excitation flip angles (Supplementary Fig. 6a). These results do not depend on the chosen fractional blood volume (FBV) and total volume of the two-compartment model and are independent of the repetition time T R up to the limit at which the saturated blood pool can fully exchange with fresh blood within one T R. This limit is at about T R = 500 ms, assuming a baseline CBF of 120 ml per 100 g per min and an FBV of 2% (ref. 9).
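In practice, the estimation step above reduces to a log-linear least-squares fit of the FID ratio; the sketch below is a minimal version with illustrative variable names (real inputs would be the bandpass-filtered magnitude FIDs described under Data analysis).

```python
import numpy as np

def fit_fid(t, fid_i, fid_mean, m0_mean):
    """Estimate (delta_m0, delta_r2s) for one FID from the model
    M0_i * exp(dR2* t) = M0_mean * FID_i(t) / FID_mean(t),
    by fitting a straight line to the logarithm of the right-hand side."""
    y = np.log(m0_mean * np.abs(fid_i) / np.abs(fid_mean))
    slope, intercept = np.polyfit(t, y, 1)         # y = log(M0_i) + dR2* t
    delta_m0 = np.exp(intercept) / m0_mean - 1.0   # inflow-related amplitude change
    delta_r2s = slope                              # R2*_mean - R2*_i; > 0 during activation
    return delta_m0, delta_r2s
```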
Similar to earlier research 27,28,29, randomly oriented cylinders with different radii were used to model the change of the FID decay rate \(\left( {{\mathrm{\Delta }}R_2^ \ast } \right)\) for different FBV and different changes in local oxygenation, ΔLOX (Supplementary Fig. 6b). Data analysis All data processing was performed with MATLAB 2017b (The MathWorks Inc.) unless noted otherwise. The volumes of all 3DGRE images were reconstructed by a three-dimensional fast Fourier transform (FFT) without filtering. The B 0 map in Supplementary Fig. 8 was calculated from two 3DGRE sequences with different echo times T E1 and T E2 by extracting the voxel-wise phase difference between the two echoes, unwrapping the resulting volume and converting it to hertz by scaling with the echo-time difference Δ T E. The functional data from the EPI acquisitions were analyzed using Analysis of Functional Neuroimages (v.17.2.02, National Institute of Mental Health) 30, including slice-timing correction for the interleaved acquisition and anatomical co-registration. The activation maps were computed on a voxel-by-voxel basis using temporal autocorrelations to calculate the statistically significant maps with thresholds of P < 0.01 (false discovery rate-corrected). Only clusters comprising at least 10 voxels were considered significant. These maps generated the ROIs, which were then used to extract the averaged and concatenated time course in MATLAB. The complex FIDs of the NMR needle were filtered with a Gaussian bandpass filter around the low-IF frequency of 70 kHz with a bandwidth of 5 kHz to remove unwanted noise. The magnitude of the filtered FID was then integrated from 150 µs to 20 ms (4.5 ms for T R = 5 ms), resulting in a single value per FID. These values were corrected for long-term drifts by applying a second-order polynomial fit over the entire dataset of a 620 s time series, and for physiological noise originating from breathing and heart rate by applying narrow-band notch filters at the corresponding frequencies extracted from the measurements with the breathing pad and the pulse oximeter. Additionally, the functional signals were low-pass filtered with a 3 Hz Gaussian filter to reduce the noise, since they did not contain any visible stimulation-related features beyond that frequency (Supplementary Fig. 5). Calculation of tSNR The tSNR, which describes the temporal stability of an fMRI signal, is the most important figure of merit for the performance of fMRI systems. The measured tSNR values for all signals shown in Fig. 2e are given in Supplementary Table 1 for the raw signal and after each of the signal-filtering steps. Although the sensitive volume of 9.8 nl of the needle is about 50 times smaller than a single voxel of the EPI fMRI (530 nl), the tSNR values of the NMR needle at T R = 1,000 ms are only 30% worse than the EPI fMRI results, which were measured over an ROI of 22 voxels with a total volume of 12 µl. The volume-normalized tSNR of the needle for T R = 1,000 ms was about 150 times better than for a single voxel of the reference EPI fMRI experiment. These results are directly comparable, since both experiments are limited by the short \(T_2^ \ast\) of the tissue. Decreasing T R leads to saturation effects, which results in lower signal amplitudes (Supplementary Fig. 2) and, consequently, in lower raw tSNR for T R = 50 ms and T R = 5 ms.
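The FID-to-time-course processing chain described under Data analysis can be sketched as follows. Filter design details (the exact Gaussian implementations, notch quality factor and window lengths) are not specified in the text, so the choices below are assumptions; the centre frequency, bandwidth, integration window and filter types are as stated above.

```python
import numpy as np
from scipy import signal

FS_ADC = 2e6    # complex sampling rate of the raw FIDs (2 MS/s)
F_IF = 70e3     # low-IF centre frequency (Hz)
BW = 5e3        # bandpass bandwidth (Hz)

def fid_to_value(fid):
    """One scalar per FID: Gaussian bandpass around the IF in the frequency
    domain, then integrate the magnitude from 150 us to 20 ms."""
    n = len(fid)
    f = np.fft.fftfreq(n, 1 / FS_ADC)
    passband = np.exp(-0.5 * ((f - F_IF) / (BW / 2)) ** 2)   # assumed Gaussian shape
    filtered = np.fft.ifft(np.fft.fft(fid) * passband)
    t = np.arange(n) / FS_ADC
    return np.abs(filtered[(t >= 150e-6) & (t <= 20e-3)]).sum()

def clean_time_course(values, fs_fid, physio_freqs):
    """Second-order polynomial detrend, notch filters at the measured breathing
    and heart-rate frequencies, then a ~3 Hz Gaussian low-pass."""
    t = np.arange(len(values)) / fs_fid
    values = values - np.polyval(np.polyfit(t, values, 2), t)   # drift correction
    for f0 in physio_freqs:
        b, a = signal.iirnotch(f0, Q=30.0, fs=fs_fid)           # Q is an assumption
        values = signal.filtfilt(b, a, values)
    sigma = fs_fid / (2 * np.pi * 3.0)      # assumed mapping of 3 Hz cutoff to std
    kernel = signal.windows.gaussian(int(8 * sigma) | 1, sigma)
    return np.convolve(values, kernel / kernel.sum(), mode="same")
```

The tSNR discussed next is then simply the mean of such a time course divided by its standard deviation (the usual definition).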
Among the individual steps, the second-order baseline correction and the physiological filter produce only a minor tSNR increase, indicating that long-term drifts and physiological noise (mostly heart beat and breathing) do not significantly degrade the NMR needle time course. Since hemodynamic changes are substantially slower than the sampling rates of 20 Hz and 200 Hz, the application of the 3 Hz Gaussian low-pass filter results in an effective tSNR increase for the oversampled physiological signal. The resulting volume-normalized tSNR values for T R = 50 ms and T R = 5 ms are 227 and 337 times larger than for the EPI fMRI, respectively. The noise performance of the NMR needle under different conditions was measured to separate different noise sources. The time-domain noise for the needle increased by only 20% when the needle was surrounded by a water phantom instead of air, confirming that the system noise is dominated by the ohmic resistance of the detection coil and that sample noise is negligible, which is typical for NMR microcoils. The time-domain SNR for the NMR needle immersed in deionized water with T R = 1,000 ms was 330. In the in vivo measurement with T R = 1,000 ms shown in Fig. 2e, the time-domain SNR is reduced to 240, mainly owing to the 20% lower proton density of brain tissue compared with water 31. The additional decrease of the tSNR to 131, compared with the time-domain SNR of 240, is caused by short-term temporal instabilities that are not corrected by the second-order fit. If those drifts are also corrected, the tSNR increases to 220, which is very close to the time-domain SNR of 240. Statistics and reproducibility From the in vivo experiments, we show representative datasets in Fig. 2b–h and in Supplementary Figs. 2 and 5. We were able to reproduce similar results for neuronal activation experiments in 7 animals for the NMR needle and in 12 animals for the conventional EPI fMRI. As we observed slightly different response times and amplitudes for different animals, we did not calculate averages across the datasets. The slightly different responses could be caused by several factors. The most important factor is the precise location and implantation depth of the needle; deeper brain areas are known to have a lower magnitude of oxygenation changes 32. Two other important factors are the anesthesia and the distance between the needle microcoil and nearby vessels, as well as the size of those vessels. Also, in the EPI results, different degrees of neuronal responses were observed, which are likely caused by imperfect anesthesia 25. Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The data that support the findings of this study are available from the corresponding authors upon request.
A team of neuroscientists and electrical engineers from Germany and Switzerland has developed a highly sensitive implant that enables researchers to probe brain physiology with unparalleled spatial and temporal resolution. They introduce an ultra-fine needle with an integrated chip that is capable of detecting and transmitting nuclear magnetic resonance (NMR) data on brain oxygen metabolism from nanoliter volumes of tissue. The breakthrough design will allow entirely new applications in the life sciences. The group of researchers, led by Klaus Scheffler from the Max Planck Institute for Biological Cybernetics and the University of Tübingen as well as by Jens Anders from the University of Stuttgart, identified a technical route around the physical limits of contemporary brain-scanning methods. Their development of a monolithic, capillary-sized nuclear magnetic resonance (NMR) needle combines the versatility of brain imaging with the accuracy of a highly localized and fast technique for analyzing specific neuronal activity in the brain. "The integrated design of a nuclear magnetic resonance detector on a single chip greatly reduces the typical electromagnetic interference of magnetic resonance signals. This enables neuroscientists to gather precise data from minuscule areas of the brain and to combine them with spatial and temporal data on the brain's physiology," explains principal investigator Klaus Scheffler. "With this method, we can now better understand specific activity and functionalities in the brain." According to Scheffler and his group, their invention may open up the possibility of discovering novel effects or typical fingerprints of neuronal activation, down to specific neuronal events in brain tissue. "Our design setup will allow scalable solutions, meaning the possibility of collecting data from more than a single area, but on the same device. The scalability of our approach will allow us to extend our platform with additional sensing modalities such as electrophysiological and optogenetic measurements," adds the second principal investigator Jens Anders. The teams of Scheffler and Anders are confident that their technical approach may help disentangle the complex physiologic processes within the neural networks of the brain, and that it may uncover additional benefits that can provide even deeper insights into the functionality of the brain. With their primary goal of developing new techniques that can specifically probe the structural and biochemical composition of living brain tissue, their latest innovation paves the way for future highly specific and quantitative mapping techniques of neuronal activity and bioenergetic processes in brain cells.
10.1038/s41592-019-0640-3
Medicine
New genomic variants associated with CHIP identified
Michael D. Kessler et al., Common and rare variant associations with clonal haematopoiesis phenotypes, Nature (2022). DOI: 10.1038/s41586-022-05448-9 Kirsty Minton, CHIPping away at the genetic aetiology of clonal haematopoiesis, Nature Reviews Genetics (2022). DOI: 10.1038/s41576-022-00565-7 Journal information: Nature, Nature Reviews Genetics
https://dx.doi.org/10.1038/s41586-022-05448-9
https://medicalxpress.com/news/2022-12-genomic-variants-chip.html
Abstract Clonal haematopoiesis involves the expansion of certain blood cell lineages and has been associated with ageing and adverse health outcomes 1,2,3,4,5. Here we use exome sequence data on 628,388 individuals to identify 40,208 carriers of clonal haematopoiesis of indeterminate potential (CHIP). Using genome-wide and exome-wide association analyses, we identify 24 loci (21 of which are novel) where germline genetic variation influences predisposition to CHIP, including missense variants in the lymphocytic antigen coding gene LY75, which are associated with reduced incidence of CHIP. We also identify novel rare variant associations with clonal haematopoiesis and telomere length. Analysis of 5,041 health traits from the UK Biobank (UKB) found relationships between CHIP and severe COVID-19 outcomes, cardiovascular disease, haematologic traits, malignancy, smoking, obesity, infection and all-cause mortality. Longitudinal and Mendelian randomization analyses revealed that CHIP is associated with solid cancers, including non-melanoma skin cancer and lung cancer, and that CHIP linked to DNMT3A is associated with the subsequent development of myeloid but not lymphoid leukaemias. Additionally, contrary to previous findings from the initial 50,000 UKB exomes 6, our results in the full sample do not support a role for IL-6 inhibition in reducing the risk of cardiovascular disease among CHIP carriers. Our findings demonstrate that CHIP represents a complex set of heterogeneous phenotypes with shared and unique germline genetic causes and varied clinical implications. Main As humans age, somatic alterations accrue in the DNA of haematopoietic stem cells (HSCs) owing to mitotic errors and DNA damage. Alterations that confer a selective growth advantage can lead to the expansion of particular cell lineages, a phenomenon called clonal haematopoiesis. The presence of clonal haematopoiesis has been associated with an increased risk of haematological neoplasms, cytopaenias, cardiovascular disease (CVD), infection and all-cause mortality 1,2,3,4,5. For this reason, identifying germline causes of clonal haematopoiesis has the potential to improve our understanding of the initiating events in the development of these common diseases. Large-scale studies of the germline causes of clonal haematopoiesis have used samples from the UKB and other large cohorts, but those studies have been limited mostly to clonal haematopoiesis phenotypes that can be assessed using single nucleotide polymorphism (SNP) array genotype data, such as mosaic chromosomal alterations (mCA) and mosaic loss of sex chromosomes 4,7,8 (mLOX and mLOY). Identifying individuals with CHIP, which is defined by somatic protein-altering mutations in genes that are recurrently mutated in clonal haematopoiesis, requires sequencing of blood 1,2. Once a clone has expanded sufficiently, the somatic variants from this clone can be captured along with germline variants by exome sequencing. Because exome sequencing captures protein-altering variants, its large-scale application enables the detection of readily interpretable rare variant association signals and can elucidate critical genes, pathways and potential therapeutic targets 9,10. To date, the largest genetic association study of CHIP included 3,831 CHIP mutation carriers in a sample of 65,405 individuals and identified four common variant loci 11.
Here, we use exome sequencing data to characterize CHIP status in 454,803 UKB 10 and 173,585 Geisinger MyCode Community Health Initiative (GHS) participants. We then conduct a common variant genome-wide association study (GWAS) and rare variant and gene burden exome-wide association study (ExWAS) of CHIP by leveraging 27,331 CHIP mutation carriers from the UKB. We perform a replication analysis using 12,877 CHIP mutation carriers from the GHS cohort. To identify germline predictors of specific clonal haematopoiesis driver mutations, we also conduct GWAS and ExWAS in carriers of CHIP mutations from individual CHIP genes. We then compare genetic association findings for CHIP to those from analyses of other clonal haematopoiesis phenotypes determined from somatic alterations in the blood, including mCA, mLOX, mLOY and telomere length. Although GWAS of these non-CHIP clonal haematopoiesis phenotypes have been conducted 4 , 7 , 12 , none have evaluated the effect of rare variation. The ExWAS we perform here represents the first systematic large-scale exploration of the effect of rare variants on the genetic susceptibility of these phenotypes. Finally, we examine the clinical consequences of somatic CHIP mutations and germline predictors of CHIP in several ways. We first conduct a PheWAS 13 of germline predictors of CHIP to understand their biological functions, and test cross-sectional phenotype associations of CHIP carrier status across 5,194 traits in the UKB. We then test the risk of incident cancer, CVD and all-cause mortality among specific CHIP gene mutation carriers and use Mendelian randomization to test for evidence of causal associations between CHIP and phenotypes of interest. Calling CHIP We used exome sequencing data from 454,803 and 173,585 individuals from the UKB and GHS cohorts, respectively, to generate large callsets of CHIP carrier status ( Methods ). In brief, we called somatic mutations using Mutect2 in a pipeline that included custom QC filtering (Extended Data Fig. 1a ), and ultimately restricted our analysis to 23 well defined and recurrent CHIP-associated genes. This focused analysis identified 29,669 variants across 27,331 individuals in the UKB (6%), and 14,766 variants across 12,877 individuals in the GHS (7.4%). DNMT3A , TET2 , ASXL1 , PPM1D and TP53 were the most commonly mutated genes in both cohorts (Extended Data Fig. 2a ). Although the GHS cohort had a wider age range, and therefore a larger number of older individuals, the prevalence by age was similar across cohorts, and reached approximately 15% by 75 years of age (Extended Data Fig. 1b,c ). Prevalence of CHIP gene-specific mutations was consistent with recurrence patterns, with mutations in the most commonly mutated CHIP genes beginning to increase in prevalence at younger ages (Extended Data Fig. 1d,e and Supplementary Note 1 ). Somatic mutations within the IDH2 and SRSF2 genes co-occurred significantly more frequently than expected in both the UKB and GHS cohorts, whereas DNMT3A mutations co-occurred less frequently with other mutations than expected (Extended Data Fig. 2b,c and Supplementary Table 1 ). Among individuals with multiple CHIP mutations (Supplementary Note 2 and Supplementary Fig. 1 ), JAK2 mutations consistently had the highest variant allele fraction (VAF) (Supplementary Fig. 1b ). CHIP demographics Compared with controls, CHIP carriers in both the UKB and GHS cohorts were older and more likely to be heavy smokers, consistent with previous studies 11 (Table 1 ). 
Although our cohorts were predominantly composed of European-ancestry individuals, the prevalence of CHIP was similar across all ancestries (Supplementary Fig. 2). In multivariate logistic regression models, each additional year of age was strongly associated with an increased risk of CHIP in the UKB (odds ratio [95% CI] = 1.08 [1.077–1.082], P < 10 −300) and GHS (odds ratio = 1.06 [1.057–1.063], P < 10 −300), and heavy smoking was strongly associated with CHIP carrier status in both the UKB (odds ratio = 1.17 [1.14–1.21], P = 7.32 × 10 −24) and GHS (odds ratio = 1.24 [1.10–1.41], P = 6.3 × 10 −4). Overall, our results suggest that the prevalence of CHIP doubles every 9–12 years of life. These associations with age and smoking were stronger when restricting to high-VAF (≥0.1) CHIP carriers. In our multivariate modelling, women were significantly more likely to be CHIP mutation carriers than men in the UKB (odds ratio = 1.08 [1.05–1.11], P = 6.01 × 10 −7), but not in the GHS (odds ratio = 1.01 [0.93–1.11], P = 0.77). These associations were consistent when restricting to high-VAF CHIP carriers, although the risk of high-VAF CHIP was not significantly greater in women in the UKB (odds ratio = 1.035 [0.99–1.08], P = 0.126). Table 1 Descriptive statistics for CHIP mutation carriers Full size table Genetic association with CHIP carrier status We first conducted genetic association analyses in the UKB cohort to identify germline loci associated with the risk of developing CHIP. In the common variant (minor allele frequency (MAF) > 0.5%) GWAS, which included 25,657 cases and 342,869 controls of European ancestry, we identified 24 loci (21 of them novel) harbouring 57 independently associated variants (Fig. 1 and Supplementary Table 2). To confirm these signals, we conducted a replication analysis in 9,523 CHIP cases and 105,502 controls of European ancestry from the GHS cohort. We estimated that we had sufficient statistical power in the GHS to detect 19.99 true and directionally consistent associations across the lead SNPs from the 24 loci identified in the UKB, and we achieved nominally significant ( P < 0.05) replication for 15 SNPs (Supplementary Table 2). We used conditional analysis and statistical fine-mapping to further evaluate the independence of our genome-wide associations and found the results to be consistent across methods (Extended Data Fig. 3, Supplementary Note 3, Supplementary Tables 3–6 and Supplementary Fig. 3). Fig. 1: GWAS of CHIP. Manhattan plot showing results from a genome-wide association analysis of CHIP. Twenty-four loci reach genome-wide significance ( P ≤ 5 × 10 −8, dashed line), and the top-associated variants per locus are labelled with biologically relevant genes. Three of these loci have been previously identified (black), whereas 21 represent novel associations (red). Loci with suggestive signal ( P ≤ 5 × 10 −7) are labelled in grey. Association models were run with age, age 2, sex and age × sex, and 10 ancestry-informative principal components as covariates. P -values are uncorrected and are from two-sided tests performed using approximate Firth logistic regression. Full size image
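The 9–12-year doubling time quoted in the demographics discussion above follows directly from the fitted per-year odds ratios: because CHIP is rare at younger ages, the odds ratio approximates the ratio of prevalences per year of age, so prevalence doubles after n years when the odds ratio raised to the power n equals 2:

$$n_{\mathrm{UKB}} = \frac{\ln 2}{\ln 1.08} \approx 9.0\ \mathrm{years}, \qquad n_{\mathrm{GHS}} = \frac{\ln 2}{\ln 1.06} \approx 11.9\ \mathrm{years}.$$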
We next sought to identify rare germline variants associated with CHIP. Since the CHIP phenotype is based on the presence of rare somatic variants in recurrently mutated genes, rare germline variants that are misclassified as somatic can lead to false association signals. To address this potential misclassification, we evaluated the median VAF and the association with age for each rare germline variant or gene burden associated with CHIP. We also conditioned these rare variant analyses on independent common variant signals to address confounding due to linkage disequilibrium (LD) (Supplementary Note 4). Ultimately, we identified a single rare germline frameshift variant in the CHEK2 gene that was significantly associated with CHIP (odds ratio = 2.22 [1.89–2.61], P = 8.04 × 10 −22; Supplementary Table 7), remained so after conditioning on common variant signals (odds ratio = 2.90 [1.93–4.34], P = 2.40 × 10 −7), and replicated in the GHS (odds ratio = 1.56 [1.19–2.04], P = 1.22 × 10 −3). The two cancer-associated genes ATM and CHEK2 were associated with an increased risk of CHIP via rare variant gene burden testing (Supplementary Table 8), and we also found a significant gene burden association between rare loss-of-function (and missense) variants in CTC1, a gene involved in telomere maintenance and DNA replication, and an increased risk of CHIP (odds ratio = 1.55 [1.32–1.81], P = 5.24 × 10 −8). Of these three gene burden associations, the ATM and CHEK2 signals were replicated in the GHS ( P = 8.22 × 10 −5 and P = 0.03, respectively), and VAF and age-association calculations suggested that all three of these gene burden signals were driven by germline variation. We also performed genome-wide association analyses in individuals of non-European ancestral background (Supplementary Note 5 and Supplementary Table 9). For each germline variant associated with CHIP and prioritized by clumping and thresholding, conditional analysis or fine-mapping (see Methods), we queried its associations across 937 binary and quantitative health traits from the UKB for which we have previously performed genetic association analysis 10 (Supplementary Table 10). Overall, the traits with significant associations consisted predominantly of blood measures (that is, cell counts and biomarker levels), anthropometric measures related to body size, autoimmune phenotypes and respiratory measures. SNPs with the largest number of significant phenotypic associations included those at the HLA, TP53, ZFP36L2 and THADA, CD164 and MYB loci (Extended Data Fig. 4). Whereas associations with blood cell counts and biomarker levels are probably the direct result of the expansion of individual cell lineages in blood, associations with autoimmune phenotypes could reflect the consequences of disrupted immune system differentiation related to clonal haematopoiesis. Analyses of individual CHIP gene mutations To identify CHIP subtype-specific risk variants, we defined gene-specific CHIP phenotypes for each of the eight most commonly mutated CHIP genes. For each subtype, we selected individuals with mutations in one of the eight genes and no mutations in any of the other genes used to define CHIP. We then conducted genetic association analyses comparing these single-CHIP-gene carriers to CHIP-free controls, with replication in the GHS, and observed shared, unique and opposing effects of associated loci on CHIP subtypes, including 8 genome-wide significant loci that were not significant in our overall analysis of CHIP (Fig. 2a, Extended Data Fig. 5 and Supplementary Tables 11–19). Fig. 2: Germline effect size comparisons across CHIP and Forest plots of PARP1 and LY75 missense variants. a, Using results from CHIP gene-specific association analyses, effect sizes of index SNPs are compared across CHIP subtypes.
SNPs were chosen as those that were independent on the basis of clumping and thresholding (with some refinement based on our conditionally independent variant list) and genome-wide significant in at least one association with CHIP or a CHIP subtype. Certain loci showed notably different effects across CHIP subtypes, as seen at the CD164 locus, which was associated with DNMT3A CHIP and ASXL1 CHIP but not TET2 CHIP, and the TCL1A locus, which was associated with increased risk of DNMT3A CHIP but reduced risk of other CHIP subtypes (blue rectangles). b, Forest plots reflecting the protective associations of a PARP1 missense variant (rs1136410-G) and two LY75 missense variants (rs78446341-A, rs147820690-T) with our DNMT3A CHIP phenotype in the UKB and GHS cohorts. Centre points represent odds ratios as estimated by approximate Firth logistic regression, with error bars representing 95% confidence intervals. P -values are uncorrected and reflect two-sided tests. Numbers below the cases and controls columns represent counts of individuals with homozygous reference, heterozygous and homozygous alternative genotypes, respectively. Full size image DNMT3A, which was the most commonly mutated gene in the overall CHIP phenotype, had the largest number of significantly associated loci ( n = 23), most of which overlapped with the overall CHIP association signals. Six loci achieved genome-wide significance in our DNMT3A CHIP analysis that were not significant in our overall analysis ( RABIF, TSC22D2, ABCC5, MYB, FLT3 and TCL1A; Extended Data Fig. 5). Although most loci harboured variants that increased CHIP risk, two exceptions are noteworthy (Fig. 2b). At the PARP1 locus on chromosome 1, a tightly linked block of around 30 variants (29 in the 95% credible set from fine-mapping; Supplementary Table 6) with an alternate allele frequency (AAF) of 0.15 was associated with reduced risk of DNMT3A CHIP (odds ratio = 0.87 [0.84–0.90], P = 2.70 × 10 −17). PARP1 has a role in DNA damage repair, and many variants in this block have been identified across multiple transcriptomic studies of blood as PARP1 expression quantitative trait loci (eQTLs) that associate with reduced PARP1 gene expression 14,15,16,17. Furthermore, a missense variant (rs1136410-G, V762A) that is predicted to be likely damaging (combined annotation dependent depletion (CADD) score = 27.9) is part of this LD block and has recently been reported to associate with improved prognosis and survival in myelodysplastic syndromes 18 (MDS). At a locus on chromosome 2, rs78446341 (P1247L in LY75) was associated with reduced risk of DNMT3A CHIP (odds ratio = 0.78 [0.72–0.84], P = 3.70 × 10 −10) and was prioritized by fine-mapping (Extended Data Fig. 3). LY75 shows lymphocyte-specific expression (Supplementary Fig. 4a) and is thought to be involved in antigen presentation and lymphocyte proliferation 19. We also identified a second, rare (AAF = 0.002) missense variant (rs147820690-T, G525E) that was associated with reduced risk of DNMT3A CHIP at close to genome-wide significance (odds ratio = 0.48 [0.36–0.63], P = 1.15 × 10 −7). This variant was predicted to be likely damaging (CADD = 23.6) and remained associated (odds ratio = 0.63 [0.51–0.77], P = 4.80 × 10 −6) when conditioning on the common variant signal at this locus (that is, this rare variant signal is independent of the common variant signal at this locus). This variant was also prioritized by fine-mapping (Extended Data Fig.
3 and Methods for jointly fine-mapping common and rare variants). Finally, these signals in PARP1 and LY75 replicated in the GHS (Fig. 2b ). Among loci associated with multiple CHIP subtypes (Supplementary Note 6 ), we observed genome-wide significant association signals at the TCL1A locus that were not present in the overall CHIP analysis. This locus is notable because it exhibited genome-wide significant effects in opposing directions across CHIP subtypes (Extended Data Figs. 2a and 5 and Supplementary Table 20 ), with lead SNPs (for example, rs2887399-T, rs11846938-G and rs2296311-A) at the locus associated with an increased risk of DNMT3A CHIP (odds ratio = 1.14 [1.11–1.17], P = 2.13 × 10 −20 ) but a reduced risk of TET2 CHIP (odds ratio = 0.75 [0.71–0.80], P = 9.14 × 10 −22 ) and ASXL1 CHIP (odds ratio = 0.70 [0.65–0.76], P = 8.59 × 10 −18 ). Effect estimates from the other five CHIP gene-specific association analyses were also consistent with protective effects. This is consistent with findings from a recent genetic association study of CHIP in the TOPMed cohort 11 , which identified a genome-wide significant positive association of the TCL1A locus and DNMT3A CHIP as well as a nominally significant opposing signal for TET2 CHIP. Additionally, the DNMT3A CHIP-increasing allele has been found to reduce the risk of mLOY in a recent GWAS 7 . This observation suggests that DNMT3A CHIP is distinct among clonal haematopoietic subtypes with regard to the genetic influence of the TCL1A locus, which may relate to the fact that TCL1A has been reported to directly interact with and inactivate DNMT3A 20 . CHIP and mosaic chromosomal alterations To evaluate the relationship between CHIP and other forms of somatic alterations of the blood, we used phenotype information on other types of clonal haematopoiesis that are available for UKB participants 4 , 7 , 8 , 12 . We first evaluated the phenotypic overlap between CHIP and mLOY, mLOX and autosomal mosaic chromosomal alterations (mCAaut). CHIP is distinct from mCA phenotypes (mCAaut, mLOX and mLOY), with more than 80% of CHIP carriers having no identified mCAs (Supplementary Fig. 4b ). Furthermore, having an mCA is not significantly associated with being a CHIP carrier after adjusting for age, sex and smoking status (odds ratio = 1.02, P = 0.27). Carriers of only a single clonal haematopoiesis driver (that is, CHIP, mLOY, mLOX or mCAaut) were younger on average than those with multiple clonal haematopoiesis lesions, and mCAaut and CHIP carriers were youngest among single clonal haematopoiesis phenotype carriers (Supplementary Fig. 4c ). We then conducted GWAS and ExWAS analyses of these somatic alteration phenotypes and evaluated the germline genetic contributions shared between CHIP and these traits (Supplementary Fig. 5 and Supplementary Tables 21 – 27 ). Genome-wide genetic correlation ( r g ) 21 , 22 was nominally significant between CHIP and mLOY ( r g = 0.27, P = 0.014 (uncorrected); Supplementary Table 21 ). Notably, variants at 4 loci (marked by the genes ATM, LY75, CD164 and GSDMC ) showed similar associations with both CHIP and mLOY, whereas variants at the SETBP1 locus were negatively associated with CHIP and positively associated with mLOY. These comparisons suggest that despite being distinct clonal haematopoietic phenotypes, CHIP and mLOY share multiple germline genetic risk factors. 
Although the common variant association analyses of these other somatic alteration phenotypes were undertaken for the purpose of comparing to CHIP, and our results are consistent with recent published associations for these non-CHIP UKB somatic alteration phenotypes 4 , 7 , 8 , we also identified novel rare variant and gene burden associations via ExWAS analyses (Supplementary Note 7 , Supplementary Tables 22 – 27 and Supplementary Fig. 6 ). We also extended our ExWAS analysis to telomere length and identified multiple novel rare variant associations (Supplementary Note 8 and Supplementary Tables 28 – 30 ). Phenotypic associations with CHIP Clonal haematopoiesis has been associated with an increased risk of haematologic malignancy and CVD, as well as other health outcomes including all-cause mortality and susceptibility to infection 3 , 4 , 23 , 24 . To test for expected as well as potentially novel associations, we performed cross-sectional association analyses across 5,041 traits (2,640 binary and 2,401 quantitative traits) from the UKB, curated as part of our efforts for the UKB Exome Sequencing Consortium. We performed Firth penalized logistic regression using CHIP gene mutation carrier status (that is, whether an individual had a mutation in our callset within a specific CHIP gene) as the binary outcome for 22 of the 23 CHIP genes in our callset (counts were too low for CSF3R ; Methods ), with age, sex and ten genetic principal components as covariates. Our results are consistent with previous findings, with the majority of associated phenotypes deriving from cardiovascular, haematologic, neoplastic, infectious, renal and/or smoking-related causes (Fig. 3 , Supplementary Fig. 7 and Supplementary Table 31 ). Fig. 3: Phenome association profiles per CHIP subtype. Profiles are shown for each CHIP gene subtype reflecting phenome-wide association results. The y -axis (concentric circles) represents the proportion of phenotypes within a trait category that were nominally associated ( P ≤ 0.05) with carrier status of the CHIP gene. A CHIP gene had to have at least one disease category with the proportion of associated phenotypes ≥ 0.2 to be included in the figure. As expected, haematological traits show the largest proportion of phenotypic trait associations overall. The largest number of cancer associations are seen for DNMT3A CHIP, whereas JAK2 CHIP shows the highest proportion of cardiovascular associations. Respiratory associations are most pronounced for ASXL1 CHIP. SUZ12 CHIP shows a unique profile across CHIP subtypes, with a higher proportion of ophthalmological and endocrine associations. Association models were run with age, age 2 , sex and age × sex, and ten ancestry-informative principal components as covariates. Full size image ASXL1 CHIP was associated with the largest number and widest range of traits, and many of these associations traced to correlates of smoking. SUZ12 CHIP showed a distinct association profile amongst CHIP genes, with a larger proportion of associations in endocrine and ophthalmologic traits than other CHIP genes. Many traits showed associations with DNMT3A CHIP and TET2 CHIP that were in opposing directions, including white blood cell count, platelet count and neutrophil count, which were all positively associated with DNMT3A CHIP and negatively associated with TET2 CHIP. These results are consistent with functional differences in the haematopoietic phenotypes of DNMT3A - and TET2 -knockout mice 25 . 
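A minimal sketch of the per-trait association model described above is shown below. The paper fits approximate Firth penalized logistic regression; since Firth's penalty is not in the standard Python stack, plain maximum-likelihood logistic regression from statsmodels is used here as a stand-in (it can be unstable for very rare traits, which is precisely what the Firth correction addresses), and all column names are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def phewas_one_trait(df: pd.DataFrame, trait: str, n_pcs: int = 10):
    """Odds ratio and P-value for CHIP-gene carrier status against one binary
    trait, adjusting for age, sex and genetic principal components."""
    covars = ["carrier", "age", "sex"] + [f"pc{i}" for i in range(1, n_pcs + 1)]
    X = sm.add_constant(df[covars])
    # Stand-in for the approximate Firth regression used in the paper:
    fit = sm.Logit(df[trait], X).fit(disp=0)
    return np.exp(fit.params["carrier"]), fit.pvalues["carrier"]
```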
Notably, body mass index (BMI) and fat percentage were negatively associated with DNMT3A CHIP and other leukaemogenic CHIP mutations (for example, JAK2, CALR and MPL), but were positively associated with other CHIP subtypes (for example, TET2 and ASXL1). We also observed significant associations between JAK2 mutations and gout, which may reflect the increased uric acid production that can accompany haematopoiesis 26 and/or renal disease 27, or even the uric acid-independent associations identified between anaemia and gout 28. Given recent reports that clonal haematopoiesis is associated with an increased risk of COVID-19 and other infections 4,29, we also tested for an association between CHIP and COVID-19 infection in the UKB cohort 30. When restricting to CHIP carriers with VAF ≥ 10% (Supplementary Note 9), we found that CHIP carrier status was significantly associated with COVID-19 hospitalization (odds ratio = 1.26 [1.07–1.47], P = 4.5 × 10 −3) and severe COVID-19 infection (odds ratio = 1.55 [1.19–1.99], P = 8.5 × 10 −4) in logistic regression models that excluded individuals with any previous blood cancers and that adjusted for age, sex, smoking, BMI, type 2 diabetes, active malignancy and five genetic principal components. Analyses at the CHIP subtype level suggested that PPM1D carriers may be at elevated risk of severe COVID-19 (odds ratio = 5.42 [1.89–12.2], P = 2.8 × 10 −4; Supplementary Note 9). Longitudinal disease risk among CHIP carriers Given the confounding that can bias cross-sectional association analyses, we performed survival analyses to evaluate whether individuals with CHIP at the time of enrolment and blood sampling in the UKB were at an increased risk of subsequent CVD, cancer and all-cause mortality. To do this, we generated aggregate longitudinal phenotypes of CVD, lymphoid cancer, myeloid cancer, lung cancer, breast cancer, prostate cancer, colon cancer and overall survival (that is, any death). Because prior longitudinal studies of CHIP and the risk of many of these outcomes have focused on high-VAF CHIP, we restricted these analyses to CHIP carriers with VAF ≥ 0.10. To complement these longitudinal analyses, we used Mendelian randomization to evaluate the relationship between CHIP and subsequent disease (Extended Data Fig. 6a, Supplementary Note 10 and Supplementary Table 32). We observed a significantly increased risk of CVD in CHIP carriers (hazard ratio = 1.11 [1.03–1.19], P = 4.2 × 10 −3), which was driven by TET2 CHIP (hazard ratio = 1.31 [1.14–1.51], P = 1.3 × 10 −4; Supplementary Fig. 8a). However, this risk estimate is lower than the hazard ratio of 1.59 recently reported by Bick et al. 6 in an analysis of CHIP from the first 50,000 UKB participants (hereafter referred to as the 50k UKB subset) with exome sequencing data available. We therefore restricted our analysis to the 50,000 individuals from the previous study and found that the estimated hazard ratio is indeed higher in this subset (hazard ratio = 1.30 [1.06–1.59], P = 0.013; Supplementary Fig. 8b). Bick et al. also observed a cardio-protective effect of IL6R rs2228145-C (a genetic proxy for IL-6 receptor inhibition) among CHIP carriers in the 50k UKB subset, so we repeated that analysis in both the 50k UKB subset and the full UKB cohort ( n = 430,924 in these analyses).
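The longitudinal analyses described here are Cox proportional hazards models; the sketch below shows the general form using the lifelines package, with illustrative column names and a covariate set abbreviated relative to the full adjustment listed in the Fig. 4 legend (this is a sketch of the modelling approach, not the paper's analysis code).

```python
import pandas as pd
from lifelines import CoxPHFitter

# One row per participant: follow-up time from enrolment to event or censoring,
# an event indicator, high-VAF CHIP carrier status and covariates. Individuals
# with cancer diagnosed before DNA collection are excluded upstream.
df = pd.read_csv("chip_longitudinal.csv")   # hypothetical input file

cph = CoxPHFitter()
cph.fit(
    df[["followup_years", "event", "chip_high_vaf", "age", "sex", "smoker"]],
    duration_col="followup_years",
    event_col="event",
)
# Hazard ratio and 95% CI for CHIP carrier status:
print(cph.summary.loc["chip_high_vaf",
                      ["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%", "p"]])
```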
We observed the same CHIP-specific protective IL6R effect in the 50k UKB subset as previously reported (hazard ratio = 0.60 [0.40–0.89], P = 0.012); however, we did not find any IL6R effect in the full cohort (hazard ratio = 0.99 [0.91–1.07], P = 0.784, n = 430,924; Extended Data Fig. 7a–d). These results were consistent when varying which CHIP mutations we used to define CHIP case status, as well as when using different VAF thresholds and a variety of CVD endpoint composites (Methods). We did not find any association between CHIP and CVD, nor a CHIP-specific protective IL6R effect, when repeating this analysis in the GHS cohort (Supplementary Figs. 8d and 9a,b). Furthermore, we did not find evidence for a causal association between CHIP and CVD when using a two-sample Mendelian randomization approach (Supplementary Note 10, Supplementary Fig. 10 and Supplementary Table 32). We next tested whether CHIP carriers are at an increased risk of haematologic and solid cancers, and whether risk differed by CHIP mutational subtype for the three most common CHIP genes (that is, DNMT3A, TET2 and ASXL1; Extended Data Figs. 7–9 and Supplementary Figs. 11–14). To control for the possibility that toxic chemotherapeutic treatment for previous cancers might drive the development of CHIP mutations 31 and/or otherwise confound association analyses, we performed all analyses after excluding individuals with any diagnosis of cancer prior to DNA collection. As expected, we found CHIP carriers with VAF ≥ 0.10 to be at a significantly elevated risk of developing any blood cancer (hazard ratio = 3.88 [3.46–4.36], P = 9.10 × 10 −117; Supplementary Fig. 11a), and we identified similarly elevated risk when replicating these analyses in the GHS (Supplementary Fig. 11d). We also estimated the risk of CHIP on neoplastic myeloid subtypes, including acute myeloid leukaemia (AML), MDS and myeloproliferative neoplasms (MPN), and found that high-VAF CHIP carriers have a more than 23-fold increased risk of acquiring an MPN (hazard ratio = 23.11 [17.63–30.29], P = 1.60 × 10 −114) (Extended Data Fig. 8). As expected, we identified a significant association between myeloid leukaemia and CHIP by Mendelian randomization (Supplementary Note 10, Supplementary Fig. 12 and Supplementary Table 32). We then tested whether CHIP carriers had an increased risk of developing solid tumours, and found that high-VAF carriers are at significantly increased risk of developing lung cancer (hazard ratio = 1.64 [1.42–1.90], P = 1.10 × 10 −11), and at more modestly increased risk of developing prostate cancer (hazard ratio = 1.18 [1.05–1.32], P = 5.30 × 10 −3) and non-melanoma skin cancer (hazard ratio = 1.14 [1.04–1.24], P = 4.7 × 10 −3; Fig. 4 and Supplementary Fig. 13). We also observed a non-significant increased risk of developing breast cancer (hazard ratio = 1.14 [0.99–1.31], P = 0.062) and no increase in risk of developing colon cancer (hazard ratio = 0.95 [0.78–1.15], P = 0.59; Supplementary Fig. 13). Models estimating event risk on the basis of CHIP mutational subtype (for example, DNMT3A CHIP) suggest that the associations with prostate and breast cancer are driven primarily by DNMT3A mutations. Only the association with lung cancer was replicated in the GHS (Supplementary Fig. 13e), although sample sizes were limited for the analyses in the GHS owing to how the biobank data were ascertained (Methods). Fig. 4: Increased risk of lung cancer among CHIP carriers.
a, Forest plot and table featuring hazard ratio estimates from Cox proportional hazards models of the risk of lung cancer among CHIP carriers. Error bars represent 95% confidence intervals. Associations are similar across common CHIP subtypes, as well as among CHIP carriers with lower VAF (≥2%). Models are adjusted for sex, low-density lipoprotein, high-density lipoprotein, smoking status, pack years, BMI, essential primary hypertension, type 2 diabetes mellitus and 10 genetic principal components specific to a European ancestral background. HR, hazard ratio. UKB 450K, the 450,000-participant full UKB dataset. DNMT3A+ represents subjects with DNMT3A CHIP and at least one other type of CHIP mutation. b, Estimated associations between CHIP and lung cancer from four Mendelian randomization methods. Each point represents one of 29 instrumental variables (that is, conditionally independent SNPs) identified in the UKB cohort as associated with CHIP. The x-axis shows the effect estimate (beta) of the SNP on CHIP in the UKB cohort, and the y-axis shows the effect estimate (beta) of the SNP on lung cancer in the GHS cohort. The slope of each regression line represents the effect size estimated by the respective method. IVW, inverse variance weighted. Full size image Given the strong associations between CHIP and both blood and lung cancers, and the associations between smoking and both CHIP and lung cancer, we performed additional analyses stratified by smoking status to test whether these associations were driven by smoking and merely marked by CHIP mutations. Although smoking status is difficult to ascertain, we used an inclusive ‘ever smoker’ definition to minimize the likelihood that individuals labelled as non-smokers had engaged in any smoking (Methods). High-VAF CHIP carriers had an increased risk of developing blood cancers among both smokers (hazard ratio = 3.95 [3.25–4.78], P = 2.80 × 10 −44) and non-smokers (hazard ratio = 3.97 [3.43–4.58], P = 1.10 × 10 −77; Supplementary Fig. 14a,b). Notably, lung cancer risk for high-VAF CHIP carriers was significantly elevated among both smokers (hazard ratio = 1.67 [1.41–1.97], P = 1.5 × 10 −9) and non-smokers (hazard ratio = 2.02 [1.53–2.67], P = 8.30 × 10 −7; Extended Data Fig. 9a,b). These associations were driven by DNMT3A and ASXL1 CHIP carriers, with both estimated to have elevated lung cancer risk in both smokers and non-smokers. We replicated the association between CHIP carrier status and lung cancer in both smokers and non-smokers in the GHS (Extended Data Fig. 9c,d). Overall, these models suggest that CHIP mutation carriers are at an elevated risk of both blood cancer and lung cancer, independent of smoking status. We also found support for a causal association between CHIP and lung cancer (inverse variance weighted odds ratio (OR IVW ) = 1.55 [1.34–1.80], P = 8.90 × 10 −9; Fig. 4 and Extended Data Table 1), as well as more modest support for causal associations between CHIP and melanoma (OR IVW = 1.39 [1.13–1.71], P = 0.0021), CHIP and non-melanoma skin cancer (OR IVW = 1.26 [1.13–1.41], P = 5.30 × 10 −5), CHIP and prostate cancer (OR IVW = 1.20 [1.03–1.39], P = 0.017), and CHIP and breast cancer (OR IVW = 1.17 [1.04–1.31], P = 0.01), when performing Mendelian randomization (Extended Data Fig. 6a, Supplementary Note 10 and Supplementary Table 32).
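Of the Mendelian randomization methods shown in Fig. 4b, the inverse-variance-weighted (IVW) estimator is the simplest to state; the sketch below computes it from per-SNP summary statistics (the effect of each instrument on CHIP, its effect on the outcome and the outcome standard error). This is the textbook fixed-effect IVW formula, shown for illustration rather than as the paper's exact implementation.

```python
import numpy as np

def ivw_mr(beta_exposure, beta_outcome, se_outcome):
    """Fixed-effect IVW Mendelian randomization: a weighted regression of the
    outcome betas on the exposure betas through the origin, with weights
    1 / se_outcome**2. Returns the causal odds ratio (per unit increase in
    the exposure's log odds) and its z-score."""
    bx, by = np.asarray(beta_exposure), np.asarray(beta_outcome)
    w = 1.0 / np.asarray(se_outcome) ** 2
    beta_ivw = np.sum(w * bx * by) / np.sum(w * bx**2)
    se_ivw = np.sqrt(1.0 / np.sum(w * bx**2))
    return np.exp(beta_ivw), beta_ivw / se_ivw
```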
Although there is a concern that variants predisposing to CHIP via cancer-associated pathways (for example, telomere biology, DNA damage repair and cell cycle regulation) may confound these associations via horizontal pleiotropy, Egger-based Mendelian randomization methods that account for this bias by fitting a non-zero intercept provided additional support for these associations. Finally, the risk of death from any cause was significantly elevated among high-VAF CHIP carriers (hazard ratio = 1.27 [1.18–1.36], P = 2.70 × 10 −11 ), and was similar across DNMT3A , TET2 and ASXL1 CHIP subtypes (Extended Data Fig. 6b ). In this study, we present the largest assessment to date of individuals with CHIP mutation carrier information, as well as the use of these calls to identify novel common and rare variant loci associated with CHIP and CHIP subtypes. These loci, which have shared, unique and opposing effects on the risk of developing different types of CHIP and other somatic alterations of the blood, highlight the fact that germline variants can predispose to clonal expansions, and that CHIP encapsulates a complex set of heterogeneous phenotypes. We further show that the genetic aetiology of CHIP is reflected in its clinical consequences, as the risk of various clinical conditions is differentially associated across CHIP gene mutations. The new loci identified in this study provide a foundation on which to investigate the biological mechanisms that lead to specific features of CHIP. For example, among CHIP-associated loci, variants in the TCL1A locus that are associated with an increase in the risk of DNMT3A CHIP have the opposite effect on the risk of all other CHIP and clonal haematopoiesis subtypes. Coupled with recent findings that link the role of TCL1A in mLOY to lymphocytes 7 (for example, B cells), our results further suggest TCL1A as a critical mediator of clonal haematopoiesis as well as of clonal haematopoiesis subtype-specific differences. Several novel loci associated with DNMT3A CHIP harbour genes that are potential targets for the development of new treatments to prevent or slow the expansion of CHIP clones. Both PARP1 and LY75 contain missense variants associated with reduced risk of CHIP and of DNMT3A CHIP specifically. The variants in the PARP1 locus are significantly associated with reduced PARP1 gene expression in whole blood 32 ( P ≤ 1 × 10 −13 ), and the V762A missense variant (rs1136410-G) has recently been reported to associate with improved prognosis and survival in MDS 18 . Given the well-established role of PARP1 in DNA repair 33 , and that a recent CRISPR screen study in zebrafish identified PARP1 inhibition as a selective killer of TET2 -mutant haematopoietic stem cells 34 , it seems plausible that a therapeutic strategy that inhibits PARP1 might be viable for the antagonization of CHIP clone expansion. Furthermore, PARP1 -inhibiting drugs are already approved for use in the treatment of BRCA-mutant cancers 35 . Conversely, PARP1 inhibition is known to cause haematologic toxicity and to increase the risk of treatment-related haematologic malignancy 36 . Therefore, further research is needed to test whether PARP1 inhibition may be appropriate for use in antagonizing the expansion of CHIP clones, and whether any effect is clonal haematopoiesis subtype-specific.
The more common LY75 missense variant (rs78446341-A, P1247L) is located in the extracellular domain of lymphocytic antigen 75 (also known as DEC-205 or CD205), and has a role in antigenic capture, processing and presentation 37 . The rarer LY75 missense variant (rs147820690-T, G525E) is located in a C-type lectin domain and is reported to interact directly with this receptor’s ligand. LY75 is expressed predominantly in haematopoietic-derived cells 37 , 38 (and particularly dendritic cells), and its ablation impairs T cell proliferation and response to antigen challenge 19 . The protective associations with this variant that we identified appear to be most pronounced for DNMT3A CHIP and mLOY, and highlight LY75 as a potential therapeutic target for the antagonization of clonal haematopoiesis in general. Although most of the phenotypic associations we observe in our cross-sectional analyses are expected associations with haematologic and oncologic traits, the associations we identify with obesity and body mass traits are of particular interest. This relationship between body mass and CHIP may relate to inflammatory or hormonal signalling, and the directions of effect that we estimate are consistent with recent findings that DNMT3A CHIP reduces bone mineral density via increases in macrophage-mediated IL-20 signalling 39 . The fact that the associations we report between obesity and body mass traits and CHIP are in opposing directions across CHIP subtypes (for example, negative in DNMT3A CHIP and positive in TET2 CHIP and ASXL1 CHIP) suggests that the relationship between CHIP and adiposity is complex and requires further investigation. Perhaps most unexpectedly, we found associations between CHIP and CVD to be more modest than previously reported 1 , 2 , 3 . DNMT3A mutations do not associate with CVD, which is consistent with the absence of any association between CHIP and CVD when applying Mendelian randomization. However, this pattern is not seen across CHIP associations with solid tumours, which we found to be driven by DNMT3A , and to be supported by Mendelian randomization. Overall, our results further clarify the role of CHIP mutational subtypes in the development of cancer and CVD and emphasize the importance of viewing (and potentially treating) different CHIP subtypes as distinct haematologic preconditions. Whereas Bick et al. 6 found statistical support for reduced CVD incidence among CHIP carriers with an IL6R coding mutation (rs2228145-C) serving as a genetic proxy for IL-6 inhibition, we do not find any support for this association when extending their analysis from the first 50,000 exomes in the UKB to the full cohort of 450,000 exomes, nor when repeating this analysis in 175,000 exomes from the GHS cohort. The signal identified across the first 50,000 exomes may result from a chance ascertainment bias 40 . Alternatively, whereas the rs2228145-C variant is thought to mimic IL-6 inhibition, and therefore to confer protection from heart disease 41 , neither our analysis nor that of Bick et al. found evidence that rs2228145 carriers are protected from CVD in subjects without CHIP. Therefore, it is possible that this mutation is a poor proxy for IL-6 inhibition, and that direct pharmacological inhibition of IL-6 may still antagonize the interplay between CHIP clone expansion and the onset of CVD. This study benefits from its biobank-scale size, which we leverage to further resolve clonal haematopoiesis subtypes and broadly assess clinical phenotypes associated with CHIP.
However, limitations include the potential inclusion in our CHIP callset of a small number of germline variants, a lack of serial sampling, and a lack of experimental data to characterize the mechanisms underpinning the novel associations that we identify. Although we have taken many steps to ensure the quality of our callset and analysis (Supplementary Notes 11 and 12 and Supplementary Figs. 15 – 18 ), the misclassification of somatic variants with high VAF as germline variants, and/or the misclassification of true germline variants as somatic clonal haematopoiesis variants (for example, germline variants at genomic positions identified as clonal haematopoiesis hotspots), remain challenges inherent to calling and analysing CHIP and clonal haematopoiesis using population-scale genomic data. Serial sampling would enable the evaluation of changes to CHIP clones over time, and future studies that focus on such serial analysis at large scale will be able to better estimate CHIP subtype-specific clonal changes and clinical risk. Such increased data assets would also be likely to facilitate the identification of additional genes that show recurrent mutation during clonal haematopoiesis, as well as of how such mutations relate to one another (that is, in dependency, mutual exclusivity and temporal order). Nonetheless, we identify many novel common and rare variant associations with CHIP and other clonal haematopoiesis phenotypes, which help to set the stage for future functional, mechanistic and therapeutic studies. On the whole, our analyses emphasize that CHIP is in fact a composite of somatic mutation-driven subtypes, with shared genetic aetiology and distinct risk profiles. Methods Study approval UKB study: ethical approval for the UKB study was previously obtained from the North West Centre for Research Ethics Committee (11/NW/0382). The work described herein was approved by UKB under application number 26041. GHS study: approval for DiscovEHR analyses was provided by the Geisinger Health System Institutional Review Board under project number 2006-0258. Exome sequencing and variant calling Sample preparation and sequencing were done at the Regeneron Genetics Center as previously described 10 , 40 . In brief, sequencing libraries were prepared using genomic DNA samples from the UKB, followed by multiplexed exome capture and sequencing. Sequencing was performed on the Illumina NovaSeq 6000 platform using S2 (first 50,000 samples) or S4 (all other samples) flow cells. Read mapping, variant calling and quality control were done according to the Seal Point Balinese (SPB) protocol 40 , which included the mapping of reads to the hg38 reference genome with BWA MEM, the identification of small variants with WeCall, and the use of GLnexus to aggregate these files into joint-genotyped, multi-sample VCF files. While certain UKB exome analysis efforts have used calls generated with the OQFE pipeline 42 , this pipeline has only been used to a limited degree for disease association analysis. Therefore, we chose to use calls from the SPB pipeline, which have been used very extensively for disease association analysis, including the largest set of association analyses done with UKB exome data 10 .
Depth and allelic balance filters were then applied, and samples were filtered out if they showed disagreement between genetically determined and reported sex, high rates of heterozygosity or contamination (estimated with the VerifyBamId tool as a FREEMIX score > 5%), low sequence coverage, or genetically determined sample duplication. Calling CHIP To call CHIP carrier status, we first used the Mutect2 (GATK v4.1.4.0) somatic caller 43 to generate a raw callset of somatic mutations across all individuals. This software uses mapping quality measures as well as allele frequency information to identify somatic mutations against a background of germline mutations and sequencing errors. We used data generated from gnomAD v2 as the reference source for germline allele frequency 44 . We generated a cohort-specific panel of normals, which Mutect2 uses to estimate per-site beta distribution parameters for use in refining somatic likelihood assignment. Since CHIP is strongly associated with age, we chose 100 random UKB samples from 40-year-olds and 622 samples from individuals less than 18 years of age in GHS to build these cohort-specific panels of normals. By evaluating the degree to which default Mutect2 filtering excluded known CHIP hotspot mutations, we noted that the default Mutect2 pass/fail filters were too stringent. Therefore, we initially considered all Mutect2 variants (that is, even those that did not pass default Mutect2 filtering), and proceeded to perform our own QC and somatic mutation call refinement. As an initial refinement step, we selected variants occurring within genes that have been recurrently associated with CHIP according to recent reports from the Broad 2 , the TOPMed Consortium 11 , and the Integrative Cancer Genomics (IntOGen) project 45 . We then filtered putative somatic mutations using the outlined functional criteria 2 . Next, we performed additional QC steps, which consisted of (1) removing multi-allelic somatic calls; (2) applying sequencing depth filters (total depth (DP) ≥ 20; alternate allele depth (AD) ≥ 3; F1R2 and F2R1 read pair depth ≥ 1); (3) removing sites flagged as panel of normals by Mutect2 (unless previously reported); (4) removing indels flagged by the Mutect2 position filter; (5) removing sites within homopolymer runs (a sequence of ≥5 identical bases) if AD < 10 or VAF < 0.08; (6) removing missense mutations in CBL or TET2 inconsistent with somaticism (that is, P -value > 0.001 in a binomial test of VAF = 0.5); and (7) removing novel (not previously reported) variants that exhibited characteristics consistent with germline variants or sequencing errors. That is, we excluded variants that had a median VAF ≥ 0.35, since approximately 97% of previously reported variants (that is, from a recent study of CHIP by the TOPMed consortium 11 ) had a median VAF < 0.35. Beyond this, we evaluated the frequency distributions of known variants (stratified by effect—that is, missense or non-missense) to discern thresholds for newly identified variants (that is, AF (allele frequency) of novel variants ≤ AF of previously reported variants). Additionally, novel G>T or C>A SNV calls were evaluated for oxidation artifacts 46 . Specifically, variants with a maximum alternate allelic depth < 6 (across all samples) and < 2 supportive reads from F1R2 (C>A) or F2R1 (G>T) mate pairs were removed, respectively.
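The filtering logic above can be condensed into a compact sketch. This is an illustrative R condensation, assuming a long-format data frame of Mutect2 calls with one row per putative mutation per sample; all column names are hypothetical stand-ins, and the thresholds are the ones stated above (VAF here is AD/(RD + AD), as defined in the next subsection).

```r
# Condensed sketch of the post-Mutect2 somatic-call refinement described above.
library(dplyr)

calls <- calls %>%
  mutate(VAF = AD / (RD + AD)) %>%
  filter(DP >= 20, AD >= 3, F1R2 >= 1, F2R1 >= 1) %>%          # depth filters
  filter(!(n_homopolymer >= 5 & (AD < 10 | VAF < 0.08))) %>%   # homopolymer runs
  # drop CBL/TET2 missense calls whose VAF is consistent with a germline het
  # (binomial test against VAF = 0.5 NOT rejected at P = 0.001)
  rowwise() %>%
  filter(!(gene %in% c("CBL", "TET2") & effect == "missense" &
           binom.test(AD, RD + AD, p = 0.5)$p.value > 0.001)) %>%
  ungroup() %>%
  # novel variants with median VAF >= 0.35 behave like germline; drop them
  group_by(variant_id) %>%
  filter(is_known | median(VAF) < 0.35) %>%
  ungroup()
```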
Given that > 90% of mutations belonged to 23 recurrent CHIP-associated genes, we restricted to variants occurring within these genes as a final step to maximize the specificity of our callset. These genes consisted of the 8 most frequently mutated CHIP genes ( DNMT3A , TET2 , ASXL1 , PPM1D , TP53 , JAK2 , SRSF2 and SF3B1 ), a collection of CHIP-associated genes containing SNV hotspots ( BRAF , CSF3R , ETNK1 , GNAS , KRAS , GNB1 , IDH2 , MPL , NRAS , PHF6 and PRPF8 ), and CHIP-associated genes of haematological interest ( CBL , CALR , RUNX1 and SUZ12 ). Our final callset consisted of 29,669 CHIP mutations across 27,331 unique individuals from UKB, and 14,766 CHIP mutations across 12,877 unique individuals from GHS. Variant allele fraction (VAF) was calculated as AD/(RD + AD), where RD is the reference allele depth. Defining CHIP and mosaic phenotypes CHIP phenotypes were derived based on our mutation callset, whereas mosaic chromosomal alteration (mCA) phenotypes were derived based on previously published mCA calls from the UKB 4 , 7 , 8 . First, we used International Classification of Diseases (ICD) codes to exclude 3,596 samples from UKB and 1,222 samples from GHS that had a diagnosis of blood cancer prior to sample collection. We also excluded 13,004 individuals from GHS whose DNA samples were collected from saliva as opposed to blood. For all of the phenotypes we generated and analysed in this study, we used a combination of cancer registry data, hospital inpatient (HESIN) data, and data from general practitioner records to ascertain ICD10 codes. The majority of our cancer data came from the cancer registry, which we supplemented with the other sources. We then defined multiple CHIP and mosaic phenotypes based on whether carriers did (inclusive) or did not (exclusive) have other somatic phenotypes. For example, individuals with at least one CHIP mutation in our callset were defined as carriers for a CHIP_inclusive phenotype, whereas anyone with a CHIP mutation as well as an identified mCA was removed from this inclusive phenotype in order to define a CHIP_exclusive phenotype (20,606 cases and 342,869 controls). Our association analysis with CHIP used this CHIP_inclusive phenotype, which included 25,657 cases and 342,869 controls of European ancestry in UKB, and 11,821 cases and 135,106 controls of European ancestry in GHS. These counts reflect the samples with European ancestral origin that remain in each cohort after removing those with non-CHIP clonal haematopoiesis (60,991 in UKB and 0 in GHS, as we did not call mosaic chromosomal alterations in GHS), and those with missing metadata (348 in UKB and 4,893 in GHS). We defined mLOY carriers as male individuals with a Y chromosome mCA in the UKB mCA callset that had copy change status of loss or unknown, mLOX carriers as individuals with an X chromosome mCA in the UKB mCA callset that had copy change status of loss or unknown, and mCAaut carriers as individuals with autosomal mCAs. We then refined these inclusive phenotypes to define exclusive versions, with mLOY_exclusive consisting of carriers with no X chromosome or autosomal mCAs (36,187 cases and 151,161 controls), mLOX_exclusive consisting of carriers with no Y chromosome or autosomal mCAs (10,743 cases and 364,072 controls), and mCAaut_exclusive consisting of carriers with no Y or X chromosomal alterations of any kind (11,154 cases and 364,072 controls).
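A minimal sketch of how the inclusive/exclusive and gene-specific codings described above could be derived from per-sample flags follows; the input columns are assumptions for illustration, not the study's actual data model.

```r
# Sketch of the inclusive/exclusive phenotype codings described above, assuming
# per-sample indicator columns for CHIP and mCA calls; names are illustrative.
library(dplyr)

pheno <- samples %>%
  mutate(
    CHIP_inclusive  = n_chip_mutations >= 1,
    CHIP_exclusive  = CHIP_inclusive & !has_mca,        # CHIP carriers with no mCA
    mLOY_exclusive  = is_male & has_mloy & !has_mlox & !has_mca_aut,
    # gene-specific coding: mutation(s) in DNMT3A and in no other CHIP gene
    CHIP_DNMT3A     = dnmt3a_mutations >= 1 & other_chip_mutations == 0,
    healthy_control = !CHIP_inclusive & !has_mca        # no evidence of clonal haematopoiesis
  )
```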
These exclusive phenotypes were used for all analyses comparing CHIP with mosaic phenotypes, as this approach facilitated the generation of four non-overlapping phenotypes (that is, CHIP, mLOY, mLOX and mCAaut) that could be compared. We also defined CHIP gene-specific phenotypes by choosing carriers as those with mutations in our callset from a specific gene and no mutations in any of the other 22 CHIP genes defining our callset. For example, CHIP DNMT3A carriers were those with at least one somatic mutation in our callset within the DNMT3A gene, and no mutations in our callset in any of the other 22 CHIP genes we used for our final callset definition. The set of 364,072 controls used in UKB that had no evidence of any clonal haematopoiesis (that is, no CHIP or mCAs) was considered as our set of healthy controls, and was used across all association analyses in UKB. Genetic association analyses To perform genetic association analyses, we used the genome-wide regression approach implemented in REGENIE 47 , as described 10 . In brief, regressions were run separately for data derived from exome sequencing as well as data derived from genetic imputation using TOPMed 48 , and results were combined across these data sources for downstream analysis. Step 1 of REGENIE uses genetic data to predict individual values for the trait of interest (that is, a polygenic risk score), which is then used as a covariate in step 2 to adjust for population structure and other potential confounding. For step 1, we used variants from array data with a MAF > 1%, < 10% missingness, Hardy–Weinberg equilibrium test P -value > 10 −15 and LD pruning (1,000 variant windows, 100 variant sliding windows and r² < 0.9), and excluded any variants with high inter-chromosomal LD, in the major histocompatibility region, or in regions of low complexity. For association analyses in step 2 of REGENIE, we used age, age², sex and age × sex, and 10 ancestry-informative principal components as covariates. For analyses involving exome data, we also included as covariates an indicator variable representing exome sequencing batch, and 20 principal components derived from the analysis of rare exomic variants (MAF between 2.6 × 10 −5 and 0.01). Significance cutoffs and rare variant burden testing were set according to the power calculations and logic outlined by Backman et al. 10 . In brief, we used P ≤ 5 × 10 −8 , P ≤ 7.14 × 10 −10 and P ≤ 3.6 × 10 −7 for common, rare and burden associations, respectively. Results were visualized and processed using an in-house version of the FUMA software 49 . Association analyses were performed separately for different continental ancestries defined based on the array data, as described 10 . Replication of association signals in the GHS cohort To calculate the power to achieve replication in the GHS cohort, we first adjusted for the effects of ‘winner’s curse’, which are expected when choosing significant association signals on the basis of a genome-wide threshold 50 . To do this, we used the conditional likelihood approach described by Ghosh et al. 51 , as implemented in the winnerscurse R package (version 0.1.1), which adjusts the estimated betas from genome-wide significant association signals. These adjusted effect estimates are provided in Supplementary Table 2 (column Effect_adj). We then used these adjusted effect estimates to calculate the expected power to detect each lead signal in the GHS replication phase using the GHS sample size, allele frequencies, CHIP prevalence, and an alpha level of 0.05.
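A sketch of a per-variant replication power calculation consistent with the description above follows. The standard-error approximation is a textbook one for logistic-scale GWAS effects and is an assumption on our part, as the study does not spell out its exact formula; the GHS case/control counts are taken from the numbers reported earlier in the Methods.

```r
# Per-SNP replication power using winner's-curse-adjusted effects (beta_adj).
# SE approximation for a log-odds effect at allele frequency `maf` in a cohort
# of size `n` with case fraction `prevalence`; this formula is an assumption.
replication_power <- function(beta_adj, maf, n, prevalence, alpha = 0.05) {
  se <- 1 / sqrt(2 * n * maf * (1 - maf) * prevalence * (1 - prevalence))
  z  <- qnorm(1 - alpha / 2)
  pnorm(-z + abs(beta_adj) / se) + pnorm(-z - abs(beta_adj) / se)  # two-sided power
}

# Expected number of replications = sum of per-SNP power across lead variants.
# GHS counts from the Methods: 11,821 cases / 146,927 total (prevalence ~0.08).
expected_hits <- sum(mapply(replication_power, leads$beta_adj, leads$maf,
                            MoreArgs = list(n = 146927, prevalence = 0.08)))
```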
To summarize our expected power across the replication phase, we summed the power across all lead variants and reported the number of SNPs that replicated at P < 0.05 as a proportion of the cumulative power to detect those variants. Identifying independent signals from association results We used three different approaches to identify independent signals across loci that associated with CHIP. First, we used a clumping and thresholding (C&T) approach 52 in which index SNPs at each significantly associated locus were defined greedily as those with the lowest P -value (a schematic sketch of this procedure appears at the end of this subsection). Clumping was then done by extending linkage blocks laterally to include all SNPs that have P < 1 × 10 −5 and r² > 0.1 with the index SNP. Any SNP within a clump was then removed from further analysis. This process was repeated as long as there was at least one additional SNP in the locus with P ≤ 5 × 10 −8 . After all clumps were made, we merged any clumps (that is, LD blocks) with overlapping genomic ranges. Since this approach neither featured iterative conditioning nor modelled variant effects jointly, we also used conditional joint analysis as implemented in GCTA COJO 53 and statistical fine-mapping as implemented in FINEMAP 54 to identify independent/causal signals. COJO was run with a subset of 10,000 unrelated European ancestry samples from UKB as an LD reference, and with a COJO-adjusted P -value threshold of 5 × 10 −6 , an info score threshold of 0.3, and a MAF cutoff of 0.01. FINEMAP was run with the shotgun stochastic search algorithm using a maximum of 30 causal variants. We included variants in the FINEMAP analysis that had P < 0.1 in inverse variance weighted meta-analysis, and MAF > 0.001. The LD matrices used for the FINEMAP analysis were constructed as weighted meta LD matrices derived from the LD matrices from UKB and GHS. The LD matrices from UKB and GHS were computed independently using the same sets of samples included in each GWAS. Fine-mapping variants at the LY75 locus To further evaluate whether the rare variant association at the LY75 locus (rs147820690-T) was independent of other common and rare variant signals, we performed joint fine-mapping (with FINEMAP) on common and rare variants at this locus while including rarer variants than those used in our genome-wide fine-mapping. In contrast to the genome-wide fine-mapping described above, this fine-mapping sensitivity analysis was done only in the UKB, was focused on the LY75 locus, and included all variants in our dataset. That is, the fine-mapping analysis was run as described above, but with a MAF threshold of > 1 × 10 −10 . While FINEMAP suggests that 3 credible sets are most parsimonious at this locus (posterior probability = 0.8), which is consistent with the results we report when performing genome-wide fine-mapping, the fourth credible set (posterior probability = 0.11) identifies rs147820690-T as the top signal (PIP = 0.133) among 9,417 variants in the 95% credible set. This fine-mapping approach also prioritizes rs78446341-A (CPIP = 0.92, CS = 2). Furthermore, the median pairwise LD between SNPs in this fourth credible set is very low (6.7 × 10 −4 , compared with 0.995, 0.962 and 0.831 for the first three credible sets, respectively). Therefore, these fine-mapping results provide additional support for both LY75 missense variants, as well as for the fact that the rs147820690-T rare variant signal is not driven by the tagging of other rare variants.
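The schematic sketch referenced above: a greedy clumping-and-thresholding pass in R. The `r2_with` helper, which would return pairwise r² between the index SNP and a vector of SNPs from an LD reference panel, is a hypothetical stand-in.

```r
# Greedy C&T clumping: pick the lowest-P SNP as index, absorb correlated
# sub-threshold SNPs into its clump, remove them, and repeat while any SNP
# with P <= 5e-8 remains. Overlapping clumps are merged afterwards.
clump <- function(gwas, r2_with, p_index = 5e-8, p_clump = 1e-5, r2_min = 0.1) {
  clumps <- list()
  remaining <- gwas                               # data frame with columns SNP, P
  repeat {
    remaining <- remaining[order(remaining$P), ]
    if (nrow(remaining) == 0 || remaining$P[1] > p_index) break
    index <- remaining$SNP[1]
    # absorb all SNPs with P < 1e-5 and r2 > 0.1 with the index SNP
    in_clump <- remaining$P < p_clump & r2_with(index, remaining$SNP) > r2_min
    in_clump[1] <- TRUE                           # the index SNP itself
    clumps[[index]] <- remaining$SNP[in_clump]
    remaining <- remaining[!in_clump, ]           # removed from further analysis
  }
  clumps
}
```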
PheWAS across CHIP-associated variants Using 937 traits from the UKB, we queried association results for 171 SNPs from our GWAS of CHIP. These SNPs represent the union of those identified by clumping and thresholding, conditional analysis with GCTA COJO, and fine-mapping with FINEMAP (fine-mapped SNPs were chosen if they had one of the highest two posterior inclusion probabilities—that is, PIPs—in any credible set). Although not all of these SNPs reached P < 5 × 10 −8 in our CHIP GWAS, they represent signals prioritized as conditionally independent and/or likely to be causal, and we therefore deemed them worthy of exploration via PheWAS. Some of these subthreshold signals featured many significant PheWAS associations ( P < 5 × 10 −8 in the PheWAS), and likely merit further evaluation (for example, the ZFP36L2 / THADA locus on chromosome 2, and the THRB locus on chromosome 3). The traits used in this PheWAS represent the subset of the 5,041 traits used in our cross-sectional analyses of phenotypic association with CHIP mutation carrier status for which we have previously reported common variant associations 10 . In brief, for ICD10-based phenotypes, cases were required to have one or more records of diagnosis in the electronic health records, death registry data implicating the disease, or two or more diagnoses in outpatient data mapped to ICD10. For non-ICD10 phenotypes (quantitative measures, clinical outcomes, survey and touchscreen responses, and imaging-derived phenotypes), data were derived from the UKB Showcase. Participants who did not meet the case definition for a given ICD10-based phenotype were removed from the analysis if they had one diagnosis code in the outpatient data, and included as controls if they had no diagnosis in the outpatient data. Supplementary Table 10 includes ICD10 codes as well as trait names and descriptions. Genetic comparisons between CHIP subtypes For pairwise comparisons between CHIP gene mutation subtypes, we used the union set of index SNPs (that is, independent signals in genome-wide significant loci) from all of our CHIP and CHIP gene subtype associations. This resulted in 93 variants, which we used to compare effect size estimates between CHIP subtype pairs. Genetic correlations were calculated using LDSC version 1.0.1 with annotation input version 2.2 22 . Defining smoking phenotypes We derived smoking phenotypes from the lifestyle and environment questionnaire in the UKB and from the electronic health records in the GHS. Since smoking is difficult to ascertain and control for, we used a variety of data to code multiple smoking phenotypes for various analyses. These smoking phenotypes consisted of (1) pack years, (2) number of cigarettes smoked per day, (3) age started/stopped smoking (UKB only), (4) former/current smoker, (5) ever smoker and (6) heavy smoker (smoked ≥ 10 cigarettes a day). The ever smoker phenotype was maximally inclusive, and coded as cases all individuals with any evidence of prior smoking across the aforementioned phenotypes. For our longitudinal analyses in UKB, we used the ‘current smoker’ and ‘pack years’ (which captures the cumulative effect of smoking over one’s lifetime) phenotypes as covariates in all models that did not stratify by smoking status. In the smoking-stratified models, we stratified smokers based on the ‘ever smoker’ phenotype and further adjusted for pack years within the smokers subgroup.
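As a minimal illustration, the inclusive codings above might be derived as follows; the questionnaire/EHR field names are hypothetical stand-ins for the actual UKB and GHS variables.

```r
# Sketch of the inclusive 'ever smoker' and 'heavy smoker' codings described above.
library(dplyr)

smoking <- covars %>%
  mutate(
    ever_smoker  = (pack_years > 0) | (cigs_per_day > 0) |
                   smoking_status %in% c("current", "former"),  # any evidence of smoking
    heavy_smoker = cigs_per_day >= 10
  ) %>%
  mutate(ever_smoker = coalesce(ever_smoker, FALSE))  # treating missing as
                                                      # non-smoker is an assumption
```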
For our longitudinal analyses in GHS, we used the ‘ever smoker’ and ‘pack years’ phenotypes as covariates in all models that did not stratify by smoking status, and stratified smokers in the same manner as we did in the UKB analyses. For linear models that evaluated the overall relationship between age, sex and smoking, we used the ‘heavy smoker’ coding. Otherwise, all other analyses used the aforementioned ‘ever smoker’ phenotype as a covariate. Phenotypic associations with CHIP To test for known as well as potentially novel associations, we used REGENIE 47 to perform Firth-corrected tests for association between our CHIP gene-specific phenotypes and 5,041 traits (2,640 binary traits and 2,401 quantitative traits) from the UKB (version 5). To do this, we coded each CHIP gene-specific phenotype as 1 if an individual had any somatic CHIP mutation in the gene and 0 otherwise, and formatted these binary codings as pseudo-genotypes to analyse with REGENIE. Regression models were run as described previously, with age, sex and genetic principal components as covariates 10 . After filtering out association tests where the total number of somatic carriers was <5, we were left with 83,779 total association tests (Supplementary Table 31 ). Only 22 out of 23 CHIP gene subtypes were tested for association across phenotypes, as we did not have enough carriers of CSF3R mutations to meet our minimum threshold of 5 somatic carriers that were also disease cases. Quantitative traits were transformed using a rank-based inverse normal transformation (RINT); effect size estimates from these associations are in units of standard deviation. Traits used in this analysis did not exclude any samples on the basis of having a diagnosed haematological disease or malignancy prior to sequencing date. To visualize high-level phenotypic patterns across these CHIP gene-specific phenotypes (Fig. 3 ), we categorized phenotypes by disease group 10 , and calculated the proportion of phenotypes per disease group per gene that were associated at a P ≤ 0.05 alpha level (uncorrected). To visualize the most significant of these associations, we plotted effect sizes (Supplementary Fig. 7 ) by disease category for all associations with P ≤ 1 × 10 −5 . Risk modelling among CHIP carriers We performed longitudinal survival analyses using Cox proportional hazards models (coxph function) as implemented in the survival R package. Given that CHIP is strongly correlated with age, models used age as the time scale, with entry and exit defined by age at first assessment and age at event or censoring; this allows for an implicit adjustment for age within the proportional hazards models (see the sketch below). In UKB, individuals with follow-up time in excess of 13.5 years (3% of the dataset) were censored owing to departures from the proportional hazards model. Analyses were performed on individuals of European ancestral background. All models included 10 genetically determined European-specific principal components as covariates, and all analyses excluded individuals genetically determined to be third-degree relatives or closer. In GHS, we had limited sample size with which to perform these longitudinal analyses. This was because GHS samples were collected at later ages (owing to the nature of the biobank and the timing of our partnership) and fewer patients had disease onset dates subsequent to sample collection (that is, in the time period where the onset of CHIP can be evaluated).
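The sketch referenced above: an age-as-time-scale Cox model fit with the survival package named in the Methods. Column names are illustrative, and the covariate set is abbreviated relative to the full models described in this section.

```r
# Age as the time scale: entry at age at first assessment (left truncation),
# exit at age at event or censoring; age is thereby adjusted for implicitly.
library(survival)

fit <- coxph(
  Surv(time  = age_at_assessment,
       time2 = age_at_event_or_censoring,
       event = incident_case) ~
    chip_high_vaf + sex +
    PC1 + PC2 + PC3 + PC4 + PC5 + PC6 + PC7 + PC8 + PC9 + PC10,
  data = cohort
)
summary(fit)   # hazard ratios for CHIP carrier status
```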
Furthermore, in GHS, we could not derive an all-cause mortality phenotype owing to the nature of the EHR data available to us. This incomplete ascertainment may also explain why our odds ratio estimates for risk of haematologic malignancy among CHIP carriers are lower in the GHS cohort. We used a variety of CHIP codings as variables in our models to test for potential differences by degree of clonal expansion (that is, high/low VAF CHIP) and/or CHIP subtype. First, we subset CHIP carrier status by gene ( DNMT3A , TET2 , ASXL1 , DNMT3A or TET2 ) and/or by VAF (≥0.1). Additional analyses were run restricting CHIP mutation calls to previously reported variants (for example, Jaiswal et al. 2 ), as well as restricting to carriers of DNMT3A mutations with at least one mutation in another CHIP gene. Controls were defined with two approaches: (1) any individual without CHIP mutations (the coding used in the results we report) and (2) those without any genetic evidence of clonal haematopoiesis (that is, healthy controls, as defined above; using this coding did not change our results). The CHIP gene-specific coding described above varies from the phenotypic coding definitions used in our GWAS/ExWAS, which required carriers to have mutations only in the specified CHIP gene and no mutations in any other CHIP genes. Since mutational exclusivity (that is, carrying a single mutation with VAF ≥ 0.1 and no other mutations) becomes less common as VAF increases, and requiring it substantially lowers sample size, we chose this adjusted definition for these longitudinal analyses of disease incidence. For the composite phenotypes described below, we relied heavily on ICD10 codes from cancer registry data, hospital records and general practitioner records, and supplemented these with self-reported data and procedure codes (OPCS4). We defined prevalent disease on the basis of event codes occurring before sample collection and used this definition to exclude samples from longitudinal analysis of incident disease. For these main analyses, we did not use any minimum number of days to diagnosis from sample collection as an additional filtering criterion (see Supplementary Note 12 for more details). In UKB, cardiovascular disease was defined with the following ICD10 codes obtained from primary care, HES (hospital episode statistics) or death registry data: I21, I22, I23, I252, I256, Z951, Z955, I248, I249, I241, I251, I255, I258, I259, I630, I631, I632, I633, I634, I635, I637, I638, I639, I651; ICD9 codes: 410, 412; and OPCS codes: K40, K41, K44, K45, K46, K49, K502, K75 and K471. ICD9/ICD10/OPCS diagnoses or procedures recorded prior to enrolment date and self-report codes 1075 (heart attack/myocardial infarction), 1095 (CABG), 1523 (heart bypass), 1070 (coronary angioplasty or stent), 1583 (ischaemic stroke) and 1083 (stroke) were used to identify prevalent CVD cases. These were chosen to best reflect the coding used by Bick et al. in their study of CHIP 6 . In GHS, we used ICD10 codes I20–I25 and I60–I69, and CPT codes 33510–33523 (CABG, not continuous), 33533–33536, 35500, 35572, 35600, and 92920–92975 (PCI, not continuous). We also adjusted the CVD coding in GHS to exclude cerebrovascular events (that is, excluded I60–I69); association results were similar. The CVD coding we used for our Mendelian randomization analysis was comparable to these definitions but did not include ICD10 codes for cerebrovascular events.
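A brief sketch of the prevalent-versus-incident split described above, with illustrative table and column names; `cvd_codes` stands in for the ICD/OPCS code list just given.

```r
# Prevalent disease = first qualifying code on or before DNA collection;
# prevalent cases are excluded from longitudinal models of incident disease.
library(dplyr)

cvd_status <- events %>%                       # one row per qualifying code per person
  filter(code %in% cvd_codes) %>%
  group_by(sample_id) %>%
  summarise(first_event_date = min(event_date)) %>%
  right_join(samples, by = "sample_id") %>%    # keep all samples, even code-free ones
  mutate(
    prevalent_cvd = !is.na(first_event_date) & first_event_date <= collection_date,
    incident_cvd  = !is.na(first_event_date) & first_event_date >  collection_date
  ) %>%
  filter(!prevalent_cvd)
```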
For the CVD models, we included sex, LDL, HDL, pack years, smoking status (current vs former, determined by self-reported data), BMI, essential primary hypertension, and type 2 diabetes mellitus as covariates. The results we reported used a composite of myocardial infarction (MI), coronary artery bypass graft (CABG), percutaneous coronary intervention (PCI), and coronary artery disease (CAD), based on the coding described above, and also included death from any of these events. Results were similar when our composite included ischaemic stroke (ISCH.TR), as well as when we repeated analyses with a subset of recurrent CHIP mutations derived from Jaiswal et al. 2 or restricted carrier calls to variants in DNMT3A or TET2 . We also excluded samples with any diagnosis of malignant blood cancer prior to sequencing ( n = 3,596). Missing LDL and HDL values were median-imputed, and individuals on cholesterol medication had their raw LDL values increased by a factor of 1/0.68, similar to Bick et al. 6 . IL6R missense variant (rs2228145-C) genotypes were modelled dominantly (coded as 1 for carriers of any allele and 0 otherwise), and we modelled the effect of this allele in CHIP-stratified proportional hazards models, and also tested for an IL6R × CHIP interaction in a full (non-stratified) model. Models considering only the initial 50k UKB individuals were restricted to the intersection between our unrelated UKB sample set and the samples reported by Bick et al. 6 . For visualization, Kaplan–Meier estimates were generated with the survfit function in the aforementioned survival package (version 3.2.13) and plotted using the ggsurvplot function from the survminer package (version 0.4.9). For models of cancer and overall survival risk tested using all CHIP carriers, high-VAF (VAF ≥ 0.1) CHIP carriers, and carriers of specific CHIP gene mutations, we used unrelated European samples that did not have any cancer diagnoses prior to sample collection ( N = 360,051 after the removal of 33,816 samples with a prior diagnosis of cancer). Results were qualitatively the same when repeating these analyses without excluding samples that had a diagnosis of any malignant cancer prior to sample collection date. Cancer phenotype definitions were derived from medical records indicating the following ICD10 codes: C81–C96, D46, D47.1, D47.3 and D47.4 for blood cancers; C81–C86 and C91 for lymphoid cancers; C92, C94.4, C94.6, D45, D46, D47.1, D47.3 and D47.4 for myeloid cancers; C50 for breast cancers; C34 for lung cancers; C61 for prostate cancers; C44 for non-melanoma skin cancers (NMSC); and C18 for colon cancers (five total solid cancers). Myeloid subtypes were defined as follows: AML (C92), MDS (D46) and MPN (D47.1, D47.3, D47.4). Given the rareness and/or non-specificity of myeloid codings C93–C95, and that the majority of these codings overlapped with those that we used for the myeloid composite described above (that is, we already captured these samples using the previously described codings), we did not include these codings in our composite. However, we performed sensitivity analyses that used a myeloid definition that did include C93–C95, with findings equivalent to those described in our main results (Supplementary Note 12 ). For our lymphoid composite, we decided to combine lymphoma with lymphoid leukaemia for multiple reasons.
First, in some clinical diagnostic situations (for example, T cell lymphoblastic lymphoma and T cell lymphoblastic leukaemia; Burkitt lymphoma and mature B cell ALL), the distinction between ‘leukaemia’ and ‘lymphoma’ is made on the basis of blast percentage in bone marrow (that is, > 20% blasts diagnosed as leukaemia), and may not reflect meaningful biological differences. Consistent with this, 22% of C91 codings are already captured in our C81–C86 codings. Moreover, the majority of cases across these codings correspond to tumours derived from mature B cells, namely chronic lymphocytic leukaemia (CLL) and mature non-Hodgkin lymphoma. Given data supporting that mature T cell lymphomas and also some mature non-Hodgkin B cell tumours may arise from haematopoietic stem and progenitor cells 55 , 56 , 57 , we considered the relationship between a composite of mature lymphoid tumours and CHIP. For blood cancers, we also included cases that self-reported leukaemia, lymphoma or multiple myeloma. These models included the same covariates as described for CVD (with the exception that we did not adjust cholesterol level based on medication usage). Additionally, models estimating risk for sex-specific cancers (that is, prostate and breast) were restricted to individuals of the relevant sex and did not adjust for sex as a covariate. For smoking-stratified modelling of blood and lung cancer, we used our stricter definition of smoking (ever vs never) and included pack years as a covariate in models testing risk among smokers. To test a more conservative cutoff for excluding patients with a diagnosis of haematologic malignancy prior to sequencing (that is, excluding individuals with a diagnosis prior to 90 days after DNA collection date rather than prior to the DNA collection date itself), we conducted sensitivity analyses for the longitudinal modelling of the risk among CHIP carriers of acquiring blood cancers (for example, blood cancer, myeloid, lymphoid, AML, MDS and MPN). These results were the same as those reported in our main results (Supplementary Note 12 ). Polygenic risk scores Polygenic risk scores were calculated with Plink 58 as a weighted sum of the effects across all conditionally independent variants we identified with GCTA COJO (74 variants, P ≤ 5 × 10 −6 ). We performed association tests using logistic regression, with binary phenotypes of interest (that is, our CHIP subtype phenotypes—for example, TET2 CHIP, and so on) as the dependent variable, this polygenic risk score as the independent variable of interest, and age, sex, smoking status (ever vs never), and 10 genetic principal components as covariates. Software The code is publicly available and can be found at . The REGENIE software for whole-genome regression, which was used to perform all genetic association analyses, is available at . GCTA v1.91.7 was used for approximate conditional analysis. SHAPEIT4.2.0 was used for phasing of SNP array data. Imputation was completed with IMPUTE5. Somatic calling was done with Mutect2 (GATK v4.1.4.0). We used Plink 1.9/2.0 for genotypic analysis as well as for constructing polygenic risk scores. FINEMAP was used for fine-mapping, and genetic correlations were calculated using LDSC version 1.0.1 with annotation input version 2.2.
Beyond standard R packages, visualization tools and data-processing libraries (for example, dplyr, ggplot2 and data.table), we used the survival (version 3.2.13) and survminer (version 0.4.9) packages for survival analyses, the MendelianRandomization package (version 0.6.0) for Mendelian randomization, and the winnerscurse package (version 0.1.1; ) to adjust GWAS effect size estimates for the effects of winner’s curse. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability Individual-level sequence data, CHIP calls and polygenic scores have been deposited with UK Biobank and are freely available to approved researchers, as done with other genetic datasets to date 10 . Individual-level phenotype data are already available to approved researchers for the surveys and health record datasets from which all our traits are derived. Instructions for access to UK Biobank data are available at . Summary statistics from UKB traits are available in the GWAS catalogue (accession IDs are listed in Supplementary Table 33 ). As described 10 , the HapMap3 reference panel was downloaded from ftp://ftp.ncbi.nlm.nih.gov/hapmap/ , GnomAD v3.1 VCFs were obtained from , and VCFs for TOPMED Freeze 8 were obtained from dbGaP as described in . Data used for replication, such as DiscovEHR exome sequencing and genotyping data, and derived CHIP calls, can be made available to qualified, academic, non-commercial researchers upon request via a Data Transfer Agreement with Geisinger Health System (contact person: Lance Adams, ljadams@geisinger.com). Change history 17 February 2023 A Correction to this paper has been published:
A team of researchers at Regeneron Pharmaceuticals has identified new genomic variants associated with clonal hematopoiesis of indeterminate potential (CHIP). In their paper published in the journal Nature, the group describes how they used exome-wide and genome-wide association analyses to study somatic blood mutations in hundreds of thousands of people. Nature has also published a Research Highlights piece in the same journal issue, discussing the work done by the New York team. Hematopoiesis is the process that produces the cellular components of blood; clonal hematopoiesis occurs when a disproportionate share of blood cells arises from a single stem cell lineage. The importance of the overall process is highlighted by the fact that every person produces approximately 300 billion new blood cells every single day of their life. Prior research has suggested that certain people carry variants associated with clonal hematopoiesis of indeterminate potential, each of which can have a unique impact. In this new effort, the team at Regeneron sought to find some of them by studying information held in very large datasets, such as the UK Biobank and the Geisinger MyCode Community Health Initiative. To find the variants they were after, the researchers focused their search efforts on 23 genes that have already been associated with CHIP. By searching through data on 628,388 individuals, they were able to identify 40,208 carriers of at least one variant associated with CHIP. They then conducted exome-wide and genome-wide studies of the carriers they had identified. In so doing, they were able to identify 24 loci—21 of which had not been seen before. They also identified variants associated with both clonal hematopoiesis and telomere length in certain individuals. In another part of their study, the team analyzed health traits of people listed in the UK Biobank, looking for associations between CHIP variants and other health conditions. They found associations between clonal hematopoiesis variants and conditions such as COVID-19, heart problems, obesity and difficulty clearing infections of various types. They also found associations between individuals with CHIP and the development of cancerous tumors and myeloid leukemias.
10.1038/s41586-022-05448-9
Medicine
Surgical, N95 masks block most particles, homemade cloth masks release their own
Sima Asadi et al. Efficacy of masks and face coverings in controlling outward aerosol particle emission from expiratory activities, Scientific Reports (2020). DOI: 10.1038/s41598-020-72798-7 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-020-72798-7
https://medicalxpress.com/news/2020-09-surgical-n95-masks-block-particles.html
Abstract The COVID-19 pandemic triggered a surge in demand for facemasks to protect against disease transmission. In response to shortages, many public health authorities have recommended homemade masks as acceptable alternatives to surgical masks and N95 respirators. Although mask wearing is intended, in part, to protect others from exhaled, virus-containing particles, few studies have examined particle emission by mask-wearers into the surrounding air. Here, we measured outward emissions of micron-scale aerosol particles by healthy humans performing various expiratory activities while wearing different types of medical-grade or homemade masks. Both surgical masks and unvented KN95 respirators, even without fit-testing, reduce the outward particle emission rates by 90% and 74% on average during speaking and coughing, respectively, compared to wearing no mask, corroborating their effectiveness at reducing outward emission. These masks similarly decreased the outward particle emission of a coughing superemitter, who for unclear reasons emitted up to two orders of magnitude more expiratory particles via coughing than average. In contrast, shedding of non-expiratory micron-scale particulates from friable cellulosic fibers in homemade cotton-fabric masks confounded explicit determination of their efficacy at reducing expiratory particle emission. Audio analysis of the speech and coughing intensity confirmed that people speak more loudly, but do not cough more loudly, when wearing a mask. Further work is needed to establish the efficacy of cloth masks at blocking expiratory particles for speech and coughing at varied intensity and to assess whether virus-contaminated fabrics can generate aerosolized fomites, but the results strongly corroborate the efficacy of medical-grade masks and highlight the importance of regular washing of homemade masks. Introduction Airborne transmission of infectious respiratory diseases involves the emission of microorganism-containing aerosols and droplets during various expiratory activities (e.g., breathing, talking, coughing, and sneezing). Transmission of viruses in emitted droplets and aerosols to susceptible individuals may occur via physical contact after deposition on surfaces, reaerosolization after deposition, direct deposition of emitted droplets on mucosal surfaces (e.g., mouth, eyes), or direct inhalation of virus-laden aerosols 1 , 2 . Uncertainty remains regarding the role and spatial scale of these different transmission modes (contact, droplet spray, or aerosol inhalation) for specific respiratory diseases, including for COVID-19 3 , 4 , 5 , 6 , 7 , in particular settings, but airborne transmission stems from the initial expiratory emission of aerosols or droplets. Consequently, the wearing of masks—in addition to vigilant hand hygiene—has been put forth as a means to mitigate disease transmission, especially in healthcare settings 8 , 9 , 10 , 11 . Much research has indicated that masks can provide significant protection to the wearer, although proper mask fitting is critical to realizing such benefits 12 , 13 , 14 , 15 . Alternatively, masks can potentially reduce outward transmission by infected individuals, providing protection to others 7 , 16 , 17 . There have been indications of asymptomatic carriers of COVID-19 infecting others 18 , 19 , 20 , leading to increasing, albeit inconsistent 21 , 22 , 23 , 24 , calls for more universal wearing of masks or face coverings by the general public to help control disease transmission during pandemics. 
It is therefore important to understand the efficacy of masks and face coverings of different types in reducing outward transmission of aerosols and droplets from expiratory activities. Results from epidemiological and clinical studies assessing the effectiveness of masks in reducing disease transmission suggest that mask wearing can provide some benefits 10 , 11 , especially with early interventions, but often the results lack statistical significance 25 , 26 , 27 , 28 , 29 , 30 , 31 . Laboratory studies provide another means to assess or infer mask effectiveness. Measurement of material filtration efficiencies can provide initial guidance on potential mask effectiveness for preventing outward transmission 15 , 32 , 33 , 34 , 35 , but does not directly address mask performance when worn. Early photographic evidence indicates masks can limit the spread of cough-generated particles 36 . Measurements using simulated breathing with an artificial test head showed that the concentration of particles between 0.02 and 1 μm decreases across masks of different types 37 . Also using simulated breathing, Green et al. 38 found surgical masks effectively reduced outward transmission of endospores and vegetative cells, with seemingly greater reduction of particles > 0.7 μm compared to smaller particles. Using volunteers, Davies et al. 32 found that surgical and homemade cotton masks substantially reduce emission of culturable microorganisms from coughing by healthy volunteers, with similar reduction observed over a range of particle sizes (from 0.65 μm to > 7 μm). Milton et al. 16 found that surgical masks substantially reduced viral copy numbers in exhaled “fine” aerosol (≤ 5 μm) and “coarse” droplets (> 5 μm) from volunteers with influenza, with greater reduction in the coarse fraction. This result differs somewhat from very recent measurements by Leung et al. 13 , who showed a statistically significant reduction in shedding of influenza from breathing in coarse but not fine particles with participants wearing surgical masks. They did, however, find that masks reduced shedding of seasonal coronavirus from breathing for both coarse and fine particles, although viral RNA was observed in less than half of the samples even with no mask, complicating the assessment. The above studies all indicate a strong potential for masks to help reduce transmission of respiratory illnesses. To date, however, none have investigated the effectiveness of masks across a range of expiratory activities, and limited consideration has been given to different mask types. Furthermore, no studies to date have considered the masks themselves as potential sources of aerosol particles. It is well established that fibrous cellulosic materials, like cotton and paper, can release large quantities of micron-scale particles (i.e., dust) into the air 39 , 40 , 41 , 42 . Traditionally, these particles have not been considered a potential concern for respiratory viral diseases like influenza or now COVID-19, since these diseases have been thought to be transmitted via expiratory particles emitted directly from the respiratory tract of infected individuals 43 . Early work in the 1940s indicated, however, that infectious influenza virus could be collected from the air after vigorously shaking a contaminated blanket 44 .
Despite this finding, over the next 70 years little attention focused on the possibility of respiratory virus transmission via environmental dust; one exception was a study by Khare and Marr, who investigated a theoretical model for resuspension of contaminated dust from a floor by walking 45 . Most recently, work by Asadi et al. with influenza virus experimentally established that “aerosolized fomites,” non-respiratory particles aerosolized from virus-contaminated surfaces such as animal fur or paper tissues, can also carry influenza virus and infect susceptible animals 46 . This observation raises the possibility that masks or other personal protective equipment (PPE), which have a higher likelihood of becoming contaminated with virus, might serve as sources of aerosolized fomites. Indeed, recent work by Liu et al. demonstrated that some of the highest counts of airborne SARS-CoV-2 (the virus responsible for COVID-19) occurred in hospital rooms where health care workers doffed their PPE, suggesting that virus was potentially being aerosolized from virus-contaminated clothing or PPE, or resuspended from virus-contaminated dust on the floor 47 . It remains unknown what role aerosolized fomites play in transmission of infectious respiratory disease between humans, and it is unclear whether certain types of masks are simultaneously effective at blocking emission of respiratory particles while minimizing emission of non-expiratory (cellulosic) particles. Here, we report on experiments assessing the efficacy of unvented KN95 respirators, vented N95 respirators, surgical masks, and homemade paper and cloth masks at reducing aerosol particle emission rates from breathing, speaking, and coughing by healthy individuals. Two key findings are that (i) the surgical masks, unvented KN95 respirators, and, likely, vented N95 respirators all substantially reduce the number of emitted particles, but that (ii) particle emission from homemade cloth masks—likely from shed fiber fragments—can substantially exceed emission when no mask is worn, a result that confounds assessment of their efficacy at blocking expiratory particle emission. Although no direct measurements of virus emission or infectivity were performed here, the results raise the possibility that shed fiber particulates from contaminated cotton masks might serve as sources of aerosolized fomites. Methods Human subjects We recruited 10 volunteers (6 male and 4 female), ranging in age from 18 to 45 years. The University of California Davis Institutional Review Board approved this study (IRB# 844,369–4), and all research was performed in accordance with relevant guidelines and regulations of the Institutional Review Board. Written informed consent was obtained from all participants prior to the tests, and all participants were asked to provide their age, weight, height, general health status, and smoking history. Only participants who self-reported as healthy non-smokers were included in the study. Experimental setup The general experimental setup used was similar to that in previous work 48 , 49 . In brief, an aerodynamic particle sizer (APS, TSI model 3321) was used to count the number of particles between 0.3 and 20 μm in aerodynamic diameter; the APS counting efficiency falls off below ~ 0.5 µm, and thus the particles counted between 0.3 and 0.5 µm likely underestimate the true number. The APS was placed inside a HEPA-filtered laminar flow hood that minimizes background particle concentration (Fig. 1 a).
Study participants were asked to sit so that their mouth was positioned in front of a funnel attached to the APS inlet via a conductive silicone tube. They then performed different expiratory activities while wearing no mask or one of the masks shown in Fig. 1 b and described in more detail below. A microphone was placed immediately to the side of the funnel to record the duration and intensity of talking and coughing activities (Fig. 1 c). The participants were positioned with their mouth approximately 1 cm away from the funnel entrance; the nose rest used in our previous setup 48 , 49 was removed to prevent additional particle generation via rubbing of the mask fabric on the nose rest surface. The air was pulled in by the APS at 5 L/min, with 1 L/min (20%) focused into the detector to count and size the cumulative number of particles at 1-s intervals (Fig. 1 d). Note that the funnel is a semi-confined environment, and not all expired particles were necessarily captured by the APS. The wearing of masks may redirect some of the expired airflow in non-outward directions (e.g., out the top or sides of the mask 50 ). Accordingly, we use the terminology “outward emission” when referring to the particle emissions measured here. The measurements reported here therefore do not represent the absolute number of emitted particles and may underestimate contributions from particles that escape out the sides of the masks, but they do allow relative comparisons between different conditions. The particle emission rates reported here from the APS are likely smaller than the total expiratory particle emission rates by, approximately, the ratio of the exhaled volumetric flowrate that enters the funnel to the APS sample rate. Figure 1 ( a ) Schematic of the experimental setup showing a participant wearing a mask in front of the funnel connected to the APS. ( b ) Photographs of the masks used for the experiments. ( c ) Microphone recording for a participant (F3) coughing into the funnel while wearing no mask. ( d ) The instantaneous particle emission rate of all detected particles between 0.3 and 20 µm in diameter. Surg.: surgical; KN95: unvented KN95 respirator; SL-P: single-layer paper towel; SL-T: single-layer cotton t-shirt; DL-T: double-layer cotton t-shirt; N95: vented N95 respirator. The subject gave her written informed consent for publication of the images in ( b ). Full size image All experiments were performed with ambient temperature between 22 and 24 °C. The relative humidity ranged from 30 to 35% for most experiments; a second round of testing, comparing washed vs. unwashed homemade masks, was performed at 53% relative humidity. Given the approximately 3-s delay between entering the funnel and reaching the detector within the APS, under all these conditions the aqueous components of micron-scale respiratory droplets had more than sufficient time (i.e., more than ~ 100 ms) to evaporate fully to their dried residual (so-called “droplet nuclei” 51 ); see figure S3 of Asadi et al. 48 for direct experimental evidence of complete drying under these conditions. Although large droplets (> 20 µm) can require substantially more than 1 s to evaporate 52 , as shown here the vast majority of particles are less than 5 µm and thus unlikely to have originated at sizes larger than 20 µm. The size distributions presented here are based on the diameter as observed at the APS detector.
Expiratory activities Participants were asked to complete four distinct activities for each mask or respirator type: (i) Breathing : gentle breathing in through the nose and out through the mouth, for 2 min at a pace comfortable for the participant. The particle emission rate was calculated as the total number of particles emitted over the entire 2-min period, divided by two minutes to obtain the average particles per second. (ii) Talking : reading aloud the Rainbow Passage (Fairbanks 53 and Supplementary Text S1), a standard 330-word linguistic text with a wide range of phonemes. Participants read this passage aloud at an intermediate, comfortable voice loudness. Since participants naturally read at slightly different volumes and paces, the microphone recording was used to calculate the root mean square (RMS) amplitude (as a measure of loudness) and the duration of vocalization (excluding the pauses between the words). The particle emission rate was calculated as the total number of particles emitted over the entire reading (approximately 100 to 150 s), divided by the cumulative duration of vocalization excluding pauses. Excluding the pauses accounts for person-to-person differences in the fraction of time spent actively vocalizing while speaking (approximately 82% ± 5%), so that individuals who simply pause longer between words are not assigned an artificially low emission rate per second of vocalization. (iii) Coughing : successive, forced coughing for 30 s at a comfortable rate and intensity for the participant. As in the talking experiment, the microphone data were used to determine the RMS amplitude of each cough, the number of coughs, and the cumulative duration of coughing (excluding the pauses between the coughs). The particle emission rate was calculated as the total number of particles emitted during the 30 s of measurement, divided either by the number of coughs (to obtain particles/cough) or by the cumulative duration of the coughs (to obtain particles/s). (iv) Jaw movement : moving the jaw as if chewing gum, without opening the mouth, for 1 min, while nose breathing, to test whether facial motion in the absence of more extreme expiration caused significant particle emission. This activity technically counts as an expiratory activity since the participant was nose breathing, but the main intent was to assess whether facial motion appreciably alters particle emission, owing either to gentle friction between the skin and the facemask yielding enhanced particle emission, or to variable gap distances between the mask and skin allowing more or fewer particles to escape. The particle emission rate was calculated as the total number of particles emitted over the 1-min period, divided by 60 s to obtain the average particles per second.
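A minimal Python sketch of these rate definitions follows; the data layout (per-second APS counts plus a microphone-derived voicing mask) is our assumption about how such raw data would be organized, not the authors' actual code.

```python
import numpy as np

# Minimal sketch of the emission-rate definitions above. `counts` holds
# per-second APS particle counts; `voiced` flags seconds of active
# vocalization (from the microphone), used to exclude pauses.

def breathing_rate(counts):
    """Average particles/s over the full 2-min breathing measurement."""
    return np.sum(counts) / len(counts)

def talking_rate(counts, voiced):
    """Total particles divided by cumulative vocalization time (pauses excluded)."""
    return np.sum(counts) / np.sum(voiced)

def coughing_rates(counts, n_coughs, cough_duration_s):
    """Particles per cough and particles per second of coughing."""
    total = np.sum(counts)
    return total / n_coughs, total / cough_duration_s

# toy example: 120 s of breathing at ~0.3 particles/s on average
rng = np.random.default_rng(0)
counts = rng.poisson(0.3, size=120)
print(f"breathing: {breathing_rate(counts):.2f} particles/s")
```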
(v) A homemade double-layer t-shirt mask, “DL-T”, made from the same t-shirt material as the SL-T mask, and tested by 10 participants. (vi) A vented N95 respirator (NIOSH N95, Safety Plus, TC-84A-7448), tested by 2 participants; shortages at the time of testing precluded a larger sample size. The primary difference between an N95 and a KN95 respirator is where the mask is certified, in the US (N95) or China (KN95). The homemade cloth masks (SL-T and DL-T) were made according to the CDC do-it-yourself instructions for single- and double-layer t-shirt masks 54 . The homemade paper towel masks were made according to do-it-yourself instructions 55 . Photographs of all mask types are shown in Fig. 1 b. Prior to wearing each mask, participants were verbally given general guidance on how to put on each mask. In the case of surgical masks and KN95 respirators they were instructed to pinch the metal bar to conform the mask to the nose. No fit-testing of respirators, as mandated by OSHA standard (29 CFR Part 1910) 56 , was performed, with the intent of obtaining representative particle emission rates for untrained individuals without access to professional fitting assistance. Mask washing To test whether washing of the homemade cloth masks had any effect on the particle emission rate, a subset of 4 participants were asked to bring their double-layer t-shirt mask home and to hand-wash it with water and soap, rinse it thoroughly, and let it air-dry. These participants then returned and repeated the four activities with a brand-new DL-T mask and their washed DL-T mask to provide a direct comparison of washed versus unwashed fabric. Particle emission via hand-rubbing Besides the above experiments to measure the particle emission associated with different mask fabrics, we also performed a qualitative test of the friability of the masks by rubbing each mask by hand in front of the APS, using a procedure similar to that performed previously with paper tissues (cf. Figure 4 of Asadi et al. 46 ). Specifically, the mask was folded over on itself between thumb and index finger, and the mask material was rubbed against itself. A sample of each mask type was rubbed by hand by the same individual for 10 s in front of the APS, using, to the best of their ability, the same amount of force each time. The test was repeated 3 times for each mask type. The particle emission rate was calculated as the total number of particles emitted divided by the duration of rubbing (10 s). Note that this procedure does not preclude possible particle shedding from the skin of the experimentalist 57 ; the observed particle emission rates for different mask materials therefore represent only qualitative indications of the relative friability. Statistical analysis Box-and-whisker plots show the median (red line), interquartile range (blue box), and range (black whiskers). Stata/IC 15 was used to perform the Shapiro–Wilk normality test on the particle emission rates for each activity. After log-transformation of the data, mixed-effects linear regression was performed to account for person-level correlations. Considering that we had only one primary random effect (person-to-person variability), all variances were set equal with zero covariances. Post hoc pairwise comparisons were performed and adjusted for multiple comparisons using Scheffe’s method. Scheffe groups are indicated with green letters below each box plot; groups with no common letter are considered significantly different ( p < 0.05).
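A rough Python transcription of this analysis is sketched below; the study itself used Stata/IC 15, so the data frame layout and effect sizes here are invented, and the Scheffe-adjusted post hoc grouping would require an additional step not shown.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data: 10 participants x 6 conditions, log-transformed rates with
# a per-person offset, mimicking the person-level correlation that the
# mixed-effects model is meant to absorb.
rng = np.random.default_rng(1)
effect = {"none": 1.0, "Surg": 0.2, "KN95": 0.25, "SL-P": 0.5, "U-SL-T": 2.0, "U-DL-T": 1.0}
rows = []
for subj in range(10):
    offset = rng.normal(0, 0.3)  # person-to-person variability (random effect)
    for mask, mult in effect.items():
        rate = 0.3 * mult * np.exp(offset + rng.normal(0, 0.2))
        rows.append({"subject": subj, "mask": mask, "log_rate": np.log(rate)})
df = pd.DataFrame(rows)

# Mixed-effects linear regression: fixed effect of mask type (no-mask as the
# reference level), random intercept per participant.
model = smf.mixedlm("log_rate ~ C(mask, Treatment(reference='none'))",
                    df, groups=df["subject"])
print(model.fit().summary())
```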
Results Particle emission rates for the four expiratory activities are shown in Fig. 2 . Focusing first on breathing (Fig. 2 a), when participants wore no mask, the median particle emission rate was 0.31 particles/s, with one participant (M6) as high as 0.57 particles/s, and another participant (F3) as low as 0.05 particles/s. This median rate and the person-to-person variability are both broadly consistent with previous studies 48 , 51 . In contrast, wearing a surgical mask or a KN95 respirator significantly reduced the outward number of particles emitted per second of breathing. The median outward emission rates for these masks were 0.06 and 0.07 particles/s, respectively, representing an approximately sixfold decrease compared to no mask. Wearing a homemade single-layer paper towel (SL-P) mask yielded a similar decrease in outward emission rate, although the decrease was less statistically robust than for the medical-grade masks. Figure 2 Particle emission rates associated with ( a ) breathing, ( b ) talking, ( c ) coughing, and ( d ) jaw movement when participants wore no mask or when they wore one of the six mask types considered. Scheffe groups are indicated with green letters; groups with no common letter are considered significantly different ( p < 0.05). Surg.: surgical; KN95: unvented KN95; SL-P: single-layer paper towel; U-SL-T: unwashed single-layer cotton t-shirt; U-DL-T: unwashed double-layer cotton t-shirt; N95: vented N95. Note that the scales are logarithmic and the orders of magnitude differ in each subplot. Surprisingly, wearing an unwashed single-layer t-shirt (U-SL-T) mask while breathing yielded a significant increase in measured particle emission rates compared to no mask, increasing to a median of 0.61 particles/s. The rates for some participants (F1 and F4) exceeded 1 particle/s, representing a 384% increase from the median no-mask value. Wearing a double-layer cotton t-shirt (U-DL-T) mask had no statistically significant effect on the particle emission rate, with a comparable median and range to those observed with no mask. Turning to speech (Fig. 2 b), the overarching trend observed is that vocalization at an intermediate, comfortable voice loudness (Figure S1 a and Table S1 ) yielded an order of magnitude more particles than breathing. When participants wore no mask and spoke, the median rate was 2.77 particles/s (compared to 0.31 for breathing). The general trend of the mask-type effect on the particle emission was qualitatively similar to that observed for breathing. Wearing surgical masks and KN95 respirators while talking significantly decreased the outward emission by an order of magnitude, to median rates of 0.18 and 0.36 particles/s, respectively. Likewise, wearing the paper towel mask reduced the outward speech particle emission rate to 1.21 particles/s, lower than no mask but representing a less pronounced decrease than for surgical masks and KN95 respirators. In contrast, the homemade cloth masks again yielded either no change or a significant increase in emission rate during speech compared to no mask. The outward particle emissions when participants wore U-SL-T masks exceeded the no-mask condition by an order of magnitude, with a median value of 16.37 particles/s. Wearing the U-DL-T mask had no significant effect. The third expiratory activity – coughing – again yielded qualitatively similar trends with respect to mask type (Fig. 2 c).
We emphasize that participants coughed at different paces, and therefore the number of coughs, cumulative cough duration, and acoustic power varied between participants (Figure S1 b, Figure S2, and Table S2). Nonetheless, we observe that coughing with no mask produced a median of 10.1 particles/s, with most participants in the range of 3 to 42 particles/s. For comparison, given a coughing rate of 6 times per minute, the median outward particle count due specifically to coughing over that minute is slightly smaller than that from breathing, and an order of magnitude smaller than that from talking over a minute (see Fig. S3 for equivalent numbers of particles per cough). Similar general trends as for breathing and speaking were observed for coughing when wearing the different mask types. The surgical mask decreased the median outward emission rate to 2.44 particles/s (75% decrease), while the KN95 yielded an apparent but not statistically significant decrease to 6.15 particles/s (39% decrease). The SL-P mask yielded no statistically significant difference compared to no mask. In contrast, the homemade U-SL-T and U-DL-T masks yielded a significant increase in outward particle emission per second (or per cough) compared to no mask, with median emission rates of 49.2 and 36.1 particles/s, respectively. Notably, one individual, M6, emitted up to two orders of magnitude more aerosol particles while coughing than the others, emitting 567 particles/s with no mask. Even when M6 wore a surgical mask he emitted 19.5 particles/s while coughing, well above the no-mask median across participants, although this still represented a substantial decrease relative to his own no-mask rate. Acoustic analysis of the coughing, both in terms of the root mean square amplitude (Figure S1 b) and the filtered power density, indicates that the coughs by M6 were not particularly louder or more energetic than the others (see Figure S2 and Table S2). It is unclear what caused this individual to emit a factor of 100 more aerosol particles than average while coughing, although qualitatively the coughs of M6 appeared to originate more from the chest, whereas for the other participants coughs generally appeared to originate more from the throat; notably, this individual emitted a much closer to average number of particles while speaking and breathing. Furthermore, the significantly higher-than-average aerosol particle emission during coughing for M6 persisted regardless of the mask type. Finally, Fig. 2 d shows the particle emission rate when participants moved their jaw, similar to chewing gum with their mouth closed, while only breathing through their nose. In general, jaw movement with nose breathing and no mask produced slightly fewer particles per second than the breathing activity (breathing in through the nose and out through the mouth), with a median rate of 0.12 particles/s for no mask. As participants were still breathing with closed mouth during the jaw movement, the lower particle production likely results from participants exhaling through the nose rather than through the mouth 48 , 51 . Wearing a surgical mask or KN95 respirator had no statistically significant effect on particle emission from jaw movement compared to no mask. In contrast, wearing any of the other types of homemade masks (SL-P, U-SL-T, and U-DL-T) substantially increased the particle emission rate, with the single-layer mask yielding the most at 1.72 particles/s.
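The per-minute comparison for coughing above can be reproduced with simple arithmetic; in the sketch below only the median rates come from the text, while the per-cough duration is a hypothetical value (the paper's own comparison rests on the measured per-cough counts in Fig. S3).

```python
# Back-of-the-envelope check of the coughing vs. breathing vs. talking
# per-minute comparison. Median rates are from the text; the assumed
# cough duration is hypothetical, not reported in this excerpt.

COUGH_RATE = 10.1        # particles/s during coughs, no mask (median)
BREATH_RATE = 0.31       # particles/s, breathing, no mask (median)
TALK_RATE = 2.77         # particles/s of vocalization, no mask (median)

COUGHS_PER_MIN = 6
ASSUMED_COUGH_DURATION_S = 0.3   # hypothetical value for illustration

cough_min = COUGH_RATE * ASSUMED_COUGH_DURATION_S * COUGHS_PER_MIN
print(f"coughing:  ~{cough_min:.0f} particles/min")
print(f"breathing: ~{BREATH_RATE * 60:.0f} particles/min")
print(f"talking:   ~{TALK_RATE * 60:.0f} particles/min (continuous vocalization)")
```

With a ~0.3 s cough, the per-minute totals for coughing and breathing come out comparable, an order of magnitude below continuous talking, consistent with the statement above.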
All of the above experiments were also repeated with vented N95 respirators, albeit with only 2 participants (due to shortages at the time of testing). The small sample size precludes significance testing, but in general the particle emission rates of the two participants tested were comparable to those for the surgical mask and unvented KN95 in terms of the reduction in overall emission rates. The emission rates presented in Fig. 2 represent the total for all particles in the size range 0.3 to 20 µm. We also measured the corresponding size distributions in terms of overall fraction for all trials (Fig. 3 ). In general, all size distributions observed here were lognormal, with a peak somewhere near 0.5 µm, decaying rapidly to negligible fractions above 5 µm. Breathing while wearing no mask emitted particles with a geometric mean diameter of 0.65 µm (Fig. 3 a), with 35% of the particles in the smallest size range of 0.3 to 0.5 µm. Regardless of the mask type, wearing masks while breathing significantly increased this fraction of particles in the smallest size range (e.g., to as high as 60% for the KN95 respirator), shifting the geometric mean diameter toward smaller sizes. Talking with no mask yielded slightly larger particles compared to breathing, with a mean diameter of 0.75 µm (Fig. 3 b). Wearing a mask while talking affected the size distribution in a qualitatively similar manner to that observed with breathing, in that a higher fraction of particles were in the smallest size range. Unlike for breathing, however, the U-SL-T and U-DL-T masks released the highest fractions of small particles (47% and 51%, respectively). Figure 3 Observed particle size distributions, normalized by particles/s per bin, associated with ( a ) breathing, ( b ) talking, ( c ) coughing, and ( d ) jaw movement when participants wore no mask or one of the five mask types considered. Each curve is the average over all 10 participants. The solid lines represent the data using a 5-point smoothing function. Data points with horizontal error bars show the small particles ranging from 0.3 to 0.5 μm in diameter detected by the APS, with no further information about their size distribution in this range. Surg.: surgical; KN95: unvented KN95; SL-P: single-layer paper towel; U-SL-T: unwashed single-layer cotton t-shirt; U-DL-T: unwashed double-layer cotton t-shirt; N95: vented N95. The effect of wearing a mask was more pronounced on the size distribution of the particles produced by coughing (Fig. 3 c). For no mask, the mean diameter of cough-generated particles was 0.6 µm. The majority of particles emitted during coughing while wearing the homemade masks (SL-P, U-SL-T, and U-DL-T) were in the smallest size range (up to 57%). We also note that for coughing, which produced the highest rates of particle emission of all expiratory activities tested, wearing homemade masks considerably reduced the fraction of large particles (> 0.8 µm). Finally, for jaw movement the overall size distributions for the no-mask and with-mask cases were similar, except that the fraction of smallest particles was lowest for no mask and the surgical mask (Fig. 3 d).
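The geometric mean diameters quoted above can be computed directly from binned APS counts; a short sketch follows, with made-up bin edges and counts (the real APS bins are much finer).

```python
import numpy as np

# Geometric mean diameter and smallest-bin fraction from binned counts.
# Bin edges and counts are illustrative, not the measured distributions.

edges = np.array([0.3, 0.5, 0.8, 1.2, 2.0, 5.0, 20.0])   # µm
counts = np.array([35., 30., 20., 10., 4., 1.])          # particles per bin

mids = np.sqrt(edges[:-1] * edges[1:])                   # geometric bin midpoints
gmd = np.exp(np.sum(counts * np.log(mids)) / counts.sum())
print(f"geometric mean diameter: {gmd:.2f} µm")
print(f"fraction in 0.3-0.5 µm bin: {counts[0] / counts.sum():.0%}")
```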
To provide a direct comparison of the efficacy of medical-grade and homemade masks in mitigating the emission of particles of different sizes, we divided the entire size range measured by the APS (0.3–20 µm) into three sub-ranges (smallest, 0.3–0.5 µm; intermediate, 0.5–1 µm; and largest, 1–20 µm), and calculated the corresponding percent change in the median particle emission rate of each sub-range during breathing, talking, and coughing compared to no mask (Fig. 4 ). For the smallest particles, Fig. 4 a shows that up to a 92% reduction in the 0.3–0.5 µm particle emission rate occurred while wearing surgical and KN95 masks for breathing, talking, and coughing, with the KN95 yielding a smaller decrease of 20.5% in this size range for coughing. The SL-P mask caused a 60% reduction in 0.3–0.5 µm particle emission for talking and breathing, but yielded a 77% increase for coughing. The least effective masks in terms of minimizing emissions of the smallest particles were the U-SL-T and U-DL-T masks, with the U-SL-T substantially increasing the emission of 0.3–0.5 µm particles by almost 600% for speech, and the U-DL-T mask yielding very slight changes for talking and breathing and an almost 300% increase for coughing. Qualitatively similar trends were observed for intermediate-size particles in the range of 0.5–1 µm (Fig. 4 b), with the medical-grade masks yielding significant reductions. The main difference for this size range is that the SL-P mask yielded a 15.7% decrease in particle emissions for coughing, and the U-DL-T mask provided up to a 34.1% reduction in particle emissions for breathing and talking. Figure 4 Percent change in median particle emission rate (N) for 10 participants compared to the no-mask median, while wearing different mask types and while breathing (blue points), talking (red points), or coughing (green points), for particles in the following size ranges: ( a ) smallest, 0.3–0.5 µm; ( b ) intermediate, 0.5–1 µm; ( c ) largest, 1–20 µm; and ( d ) all sizes, 0.3–20 µm. The dashed lines are to guide the eye. Surg.: surgical; KN95: unvented KN95; SL-P: single-layer paper towel; U-SL-T: unwashed single-layer cotton t-shirt; U-DL-T: unwashed double-layer cotton t-shirt. As for the largest particle sizes (1–20 µm), the observed trends were again qualitatively similar to the intermediate particles (Fig. 4 c), with the medical-grade masks yielding large reductions. Notably, the U-DL-T mask emitted far fewer large particles for breathing and talking, with approximately 60% reductions, but still a sizable 160% increase for coughing. The percent change in median particle emission over the entire size range of 0.3–20 µm is presented in Fig. 4 d, which shows that the homemade masks in general yielded more particles in total for coughing, and had mixed efficacy in reducing particle emissions for breathing and talking. The key point is that the surgical and KN95 masks effectively decreased the particle emission for all expiratory activities tested here over the entire range of particle sizes measured by the APS. To help interpret our findings we also quantified the particles emitted from manual rubbing of mask fabrics. The results (Fig. 5 a) show that, in the absence of any expiratory activity, rubbing a surgical mask fabric generated on average 1.5 particles per second, while the KN95 and N95 respirators produced fewer than 1 particle per second.
In contrast, rubbing the homemade paper and cotton masks aerosolized a significant number of particles, with the highest values for the SL-P (8.0 particles/s) and U-SL-T (7.2 particles/s) masks. Intriguingly, we found that the size distribution of the particles aerosolized from the homemade mask fabrics via manual rubbing (Fig. 5 b) was qualitatively different from that observed when participants wore the same masks to perform expiratory activities. An extra peak appeared at approximately 6 µm and the fraction of small particles dropped to below 27%, suggesting that the frictional forces of fibers against fibers helped fragment and dislodge larger particulates into the air. Importantly, however, manual rubbing produced a sizeable number of particulates in the size range of 0.3 to 2 µm, commensurate with the range observed while the masks were worn during expiratory activities. Note that the coarse skin particulates (> 2 µm) released from the hand during the mask-fabric rubbing experiments could have contributed to the observed particle counts 57 . However, since this factor was the same in all the manual rubbing experiments, and only the facemask fabrics differed, it is difficult to explain the observed trends solely in terms of friction between skin and mask fabrics. Moreover, although in these experiments the applied tribological force was not strictly controlled or quantified, the presented results strongly suggest that the cotton fabric masks have much more friable material, consistent with our observation that more particles are emitted when participants perform expiratory activities in those cotton fabric masks. Figure 5 ( a ) Number of particles emitted per second of manual rubbing for all masks tested. Each data point is the time-averaged particle emission rate over 10 s of rubbing. ( b ) Corresponding size distribution for the homemade paper and cotton masks for a total of 30 s of manual rubbing in front of the APS. The solid lines represent the data using a 5-point smoothing function. Data points with horizontal whiskers show the small particles ranging from 0.3 to 0.5 μm in diameter detected by the APS. Surg.: surgical; KN95: unvented KN95; SL-P: single-layer paper towel; U-SL-T: unwashed single-layer cotton t-shirt; U-DL-T: unwashed double-layer cotton t-shirt; N95: vented N95. Since the cotton masks were all prepared from fabric that was brand new and unwashed, as a final test we hypothesized that washing the masks would remove surface-bound dust and other friable material and decrease the emission rate. Our experiments do not corroborate this hypothesis. Handwashing the double-layer t-shirt mask with soap and water followed by air-drying yielded no significant change in the particle emission rate as compared to the original unwashed masks (Fig. 6 ). Moreover, manual rubbing of a washed double-layer cotton mask aerosolized slightly more particles than the unwashed mask. These results suggest that a single washing has little impact on the presence of aerosolizable particulate matter in standard cotton fabrics. Note also that the ranges observed here accord qualitatively with the prior measurements taken with the same 4 participants on a previous day (compare the results for each category in Fig. 6 versus the U-DL-T columns for the respective expiratory activities in Fig. 2 ). This observation suggests that the day-to-day variability for a given individual is less than the person-to-person variability observed for all expiratory activities and mask types tested.
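For the paired washed-versus-unwashed comparison just described, a simple illustration in Python is sketched below. The paper's statistics were done with mixed-effects regression in Stata; the signed-rank test here is our simpler stand-in for a paired comparison, and the numbers are synthetic.

```python
import numpy as np
from scipy.stats import wilcoxon

# Illustrative paired comparison of washed vs. unwashed DL-T emission rates
# for the same 4 participants (synthetic values). A non-significant result
# would correspond to "no significant change" as reported in the text.

unwashed = np.array([30.1, 42.5, 25.3, 38.0])   # particles/s, made up
washed = np.array([31.0, 40.2, 27.1, 36.5])     # particles/s, made up

stat, p = wilcoxon(unwashed, washed)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.2f}")
```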
Figure 6 Particle emission rates from breathing, talking, coughing and jaw movement for 4 participants wearing unwashed or washed double-layer t-shirt masks (U-DL-T vs. W-DL-T). The last column shows the particle emission rates for manual rubbing of washed and unwashed masks (three 10-s trials for each mask). Discussion Our results clearly indicate that wearing surgical masks or unvented KN95 respirators reduces the outward particle emission rates by 90% and 74% on average during speaking and coughing, respectively, compared to wearing no mask. However, for the homemade cotton masks, the measured particle emission rate either remained unchanged (DL-T) or increased by as much as 492% (SL-T) compared to no mask for all of the expiratory activities. For jaw movement, the particle emission rates for the homemade paper and cloth masks were an order of magnitude larger than that of no mask (Fig. 2 d). These observations, along with our results from the manual mask-rubbing experiments (Fig. 5 ), provide strong evidence of substantial shedding of non-expiratory micron-scale particulates from the friable cellulosic fibers of the paper and cloth masks owing to mechanical action 40 . The higher particle emission rate for jaw movement than for breathing is an indication of greater frictional shedding from the paper towel and cotton masks during jaw movement compared to breathing, at least as tested here. Likewise, the difference in the size distributions between mask rubbing and with-mask expiratory activities is likely due to the vigorous frictional force applied by hand on the masks. Setting aside the larger particles (> 5 µm), rubbing mask fabrics generates a considerable number of particles in the range of 0.3–5 µm, similar to that observed for the expiratory activities. This finding corroborates the interpretation that some proportion of the particulates observed during expiration were particulates aerosolized from the masks themselves. Another factor to consider is that masks can reduce the intelligibility of the speech signal 58 , and can reduce the intensity of sounds passed through them by a significant amount (e.g., > 10 dB in Saedi et al. 59 ). Likely as a response to this, people will speak louder and otherwise adjust their speech when wearing masks. Mendel et al. 60 found that the measured intensity of speech was approximately the same for a group of speakers with and without surgical masks, suggesting that speakers increased the actual intensity of their speech when wearing masks. Fecher 61 found that speakers will actually produce louder output through some types of masks in cases where they overestimate the dampening effects of the mask. It is also possible that speakers may produce Lombard speech when wearing certain types of masks 62 . Lombard speech is louder, has a higher fundamental frequency, and tends to have longer vowel durations, all characteristics that may contribute to an increase in the emission of aerosols 48 , 49 . Our results showed that the root mean square amplitude of speech, as measured externally when participants wore any type of mask, equaled or exceeded that of the no-mask condition (Figure S1 a), suggesting that participants were indeed talking louder while wearing a mask. Although an increase in the intensity of the speech signal when wearing masks would result in a greater output of particles in these conditions 48 , the difference in the intensity of speech across the different conditions was not very large (Figure S1 a).
As a result, this mechanism alone cannot explain the increased particle output in some of the masked conditions. Intriguingly, the root mean square amplitude of coughing decreased for most of the participants when they wore any type of mask (Figure S1 b), suggesting that they do not cough louder when they wear a mask, i.e., there is no Lombard effect for coughing. The substantial particle shedding by the cloth masks confounds determination of the efficacy of cloth masks at reducing the outward emission of particles produced from the expiratory activity. Measured material filtration efficiencies vary widely for different cloth materials 32 , 34 , 35 , 63 . The influence of particle shedding on such determinations has not been previously considered; our results raise the possibility that particle shedding has led to underestimated material filtration efficiencies for certain materials. While the material efficiency of the cotton masks was not determined here, we note that the use of the double-layer cotton masks reduced the emission of larger particles (both on a normalized and an absolute basis), indicating some reasonable efficacy towards reduction of the expiratory particle emission. Further work differentiating between expiratory and shed particles, possibly based on composition, could help establish the specific efficacy of the cloth masks towards expiratory particles. That the masks shed fibers under mechanical stimulation indicates that care must be taken when removing and cleaning (for reusable masks) potentially contaminated masks so as not to dislodge deposited micro-organisms. We also note that the emission reduction due to surgical masks was greater than the corresponding reduction due to KN95 respirators, although this difference was only significant for coughing (p < 0.05). That the surgical masks appear to provide slightly greater reduction than the KN95 respirators is perhaps surprising, as KN95s are commonly thought of as providing more protection than surgical masks for inhalation. Both surgical masks and KN95 respirators typically have high material filtration efficiencies (> 95%) 63 , although the quality of surgical masks can vary substantially 64 . The fit of surgical masks and KN95 respirators differs substantially, however. Here, no fit tests were performed to ensure good seals of the KN95 respirators. It may be that imperfect fitting of KN95 respirators allows for greater escape of particles from the mask-covered environment compared to the more flexible surgical masks. Regardless, all surgical masks, KN95 and N95 respirators tested here provided substantial reductions of particle emission compared to no mask. A particularly important observation was the existence of a coughing superemitter, who for unknown reasons emitted two orders of magnitude more particles during coughing than average (Fig. 2 c, red points for M6). This huge difference persisted regardless of mask type, with even the most effective mask, the surgical mask, only reducing the rate to a value twice the median no-mask value. Although the underlying mechanism leading to such enhanced particle emission is unclear, these observations nonetheless confirm that some people act as superemitters during coughing, similar to “speech superemitters” 48 and “breathing high producers” 65 . This observation raises the possibility that coughing superemitters could serve as superspreaders who are disproportionately responsible for outbreaks of airborne infectious disease.
Notably, the coughing superemitter was not a breathing superemitter or a speaking superemitter, indicating that testing only one type of expiratory activity might not necessarily identify superemitters for other expiratory activities. As a final comment, we emphasize that here we only measured the physical dynamics of outward aerosol particle emission for different expiratory activities and mask types. Redirected expiratory airflow, involving exhaled air moving up past the nose or out the side of the mask, was not measured here but should be considered in future work. Likewise, more sophisticated biological techniques are necessary to gauge mask efficacy at blocking emission of viable pathogens. Our work does raise the possibility, however, that virus-contaminated masks could release aerosolized fomites into the air by shedding fiber particulates from the mask fabric. Since mask efficacy experiments are typically only conducted with fresh, not used, masks, future work assessing emission of viable pathogens should consider this possibility in more detail. Our work also raises questions about whether homemade masks using other fabrics, such as polyester, might be more efficient than cotton in terms of blocking expiratory particles while minimizing shedding of fabric particulates, and whether repeated washings might affect the particle shedding of homemade masks. Future experiments using controlled bursts of clean air through the masks will help to resolve the source of these non-expiratory particles. Nonetheless, as a precaution, our results suggest that individuals using homemade fabric masks should take care to wash or otherwise sterilize them on a regular basis to minimize the possibility of emission of aerosolized fomites. Conclusions These observations directly demonstrate that the wearing of surgical masks or KN95 respirators, even without fit-testing, substantially reduces the number of particles emitted from breathing, talking, and coughing. While the efficacy of cloth and paper masks is less clear and is confounded by the shedding of mask fibers, the observations indicate that they likely provide some reduction in emitted expiratory particles, in particular the larger particles (> 0.5 μm). We have not directly measured virus emission; nonetheless, our results strongly imply that mask wearing will reduce emission of virus-laden aerosols and droplets associated with expiratory activities, unless appreciable shedding of viable viruses on mask fibers occurs. The majority of the particles emitted were in the aerosol range (< 5 μm). As inertial impaction should increase as particle size increases, it seems likely that the emission reductions observed here provide a lower bound for the reduction of particles in the droplet range (> 5 μm). Our observations are consistent with suggestions that mask wearing can help in mitigating pandemics associated with respiratory disease. Our results highlight the importance of regular changing of disposable masks and washing of homemade masks, and suggest that special care must be taken when removing and cleaning the masks. Data availability The datasets generated and/or analyzed during the current study are available in the Dryad Digital Repository.
Laboratory tests of surgical and N95 masks by researchers at the University of California, Davis, show that they do cut down the amount of aerosolized particles emitted during breathing, talking and coughing. Tests of homemade cloth face coverings, however, show that the fabric itself releases a large amount of fibers into the air, underscoring the importance of washing them. The work is published today (Sept. 24) in Scientific Reports. As the COVID-19 pandemic continues, the use of masks and other face coverings has emerged as an important tool alongside contact tracing and isolation, hand-washing and social distancing to reduce the spread of coronavirus. The Centers for Disease Control and Prevention, or CDC, and the World Health Organization endorse the use of face coverings, and masks or face coverings are required by many state and local governments, including the state of California. The goal of wearing face coverings is to prevent people who are infected with COVID-19 but asymptomatic from transmitting the virus to others. But while evidence shows that face coverings generally reduce the spread of airborne particles, there is limited information on how well they compare with each other. Sima Asadi, a graduate student working with Professor William Ristenpart in the UC Davis Department of Chemical Engineering, and colleagues at UC Davis and Icahn School of Medicine at Mount Sinai, New York, set up experiments to measure the flow of particles from volunteers wearing masks while they performed "expiratory activities" including breathing, talking, coughing and moving their jaw as if chewing gum. Asadi and Ristenpart have previously studied how people emit small particles, or aerosols, during speech. These particles are small enough to float through the air over a considerable distance, but large enough to carry viruses such as influenza or coronavirus. They have found that a fraction of people are "superemitters" who give off many more particles than average. The 10 volunteers sat in front of a funnel in a laminar flow cabinet. The funnel drew air from in front of their faces into a device that measured the size and number of particles exhaled. They wore either no mask, a medical-grade surgical mask, two types of N95 mask (vented or not), a homemade paper mask or homemade one- or two-layer cloth mask made from a cotton T-shirt according to CDC directions. Up to 90 percent of particles blocked The tests only measured outward transmission: whether the masks could block an infected person from giving off particles that might carry viruses. Without a mask, talking (reading a passage of text) gave off about 10 times more particles than simple breathing. Forced coughing produced a variable amount of particles. One of the volunteers in the study was a superemitter who consistently produced nearly 100 times as many particles as the others when coughing. In all the test scenarios, surgical and N95 masks blocked as much as 90 percent of particles, compared to not wearing a mask. Face coverings also reduced airborne particles from the superemitter. Homemade cotton masks actually produced more particles than not wearing a mask. These appeared to be tiny fibers released from the fabric. Because the cotton masks produced particles themselves, it's difficult to tell if they also blocked exhaled particles. They did seem to at least reduce the number of larger particles.
The results confirm that masks and face coverings are effective in reducing the spread of airborne particles, Ristenpart said, and also underscore the importance of regularly washing cloth masks.
10.1038/s41598-020-72798-7
Nano
Varying the sliding properties of atoms on a surface
"Frictional transition from superlubric islands to pinned monolayers." Nature Nanotechnology (2015) DOI: 10.1038/nnano.2015.106 Journal information: Nature Nanotechnology
http://dx.doi.org/10.1038/nnano.2015.106
https://phys.org/news/2015-06-varying-properties-atoms-surface.html
Abstract The inertial sliding of physisorbed submonolayer islands on crystal surfaces contains unexpected information on the exceptionally smooth sliding state associated with incommensurate superlubricity and on the mechanisms of its disappearance. Here, in a joint quartz crystal microbalance and molecular dynamics simulation case study of Xe on Cu(111), we show how superlubricity emerges in the large-size limit of naturally incommensurate Xe islands. As coverage approaches a full monolayer, theory also predicts an abrupt adhesion-driven two-dimensional density compression on the order of several per cent, implying a hysteretic jump from superlubric free islands to a pressurized commensurate immobile monolayer. This scenario is fully supported by the quartz crystal microbalance data, which show remarkably large slip times with increasing submonolayer coverage, signalling superlubricity, followed by a dramatic drop to zero for the dense commensurate monolayer. Careful analysis of this variety of island sliding phenomena will be essential in future applications of friction at crystal/adsorbate interfaces. Main Systems achieving low values of dry sliding friction are of great physical and, potentially, technological interest 1 , 2 , 3 , 4 . Superlubricity (the vanishing of static friction) and the consequent ultra-low dynamic friction between crystal faces that are sufficiently hard and mutually incommensurate 5 , 6 are experimentally rare, and have been demonstrated or implied in only a relatively small number of cases, including telescopic sliding among carbon nanotubes 7 , 8 , sliding graphite flakes on a graphite substrate 9 , 10 , 11 , cluster nanomanipulation 12 , 13 and sliding colloidal layers 14 , 15 . It is essential that we increase the understanding of this phenomenon and, in view of potential nanotechnology applications, examine new and more generic systems beyond these. Submonolayer islands of rare-gas atoms adsorbed on crystal surfaces offer an excellent platform to address friction at crystalline interfaces. Despite much experimental 16 , 17 , 18 , 19 , 20 , 21 , 22 and theoretical 23 , 24 , 25 , 26 , 27 work, superlubricity is a phenomenon that remains poorly explored in such systems. In the submonolayer range (0 < θ < 1, where θ is the coverage) and at low temperatures, adsorbate phase diagrams versus coverage θ are well known to display phase-separated two-dimensional (2D) solid islands, usually incommensurate with the surface lattice, coexisting with the 2D adatom vapour 28 , 29 . Using a quartz crystal microbalance (QCM), the inertial sliding friction of these islands is measured through the inverse of the slip time τ_s = (1/4π)[δ(Q^{-1})/δf], the ratio of the adsorbate-induced change in inverse quality factor to the respective change in the substrate oscillation frequency 30 . The peak of the inertial force acting on an island deposited on the QCM is expressed as F_in = ρ_isl S A (2πf)^2 (where ρ_isl is the 2D density at the centre of an adsorbed island of area S, and A and f are the oscillation amplitude and frequency, respectively), and equals the viscous frictional force F_visc = Mv/τ_s (where M is the mass of the island and v is its speed). This means that superlubricity should indirectly show up as an unusually large slip time. For over two decades, QCM work has shown that physisorbed atoms or molecules condense and generally slide above a submonolayer coverage θ_sf, and τ_s may typically reach values from hundreds of picoseconds to a nanosecond.
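For concreteness, the two QCM relations just quoted can be sketched numerically in Python as follows. Reading ρ_isl as an areal mass density (number density times the Xe atomic mass) is our interpretation of the formula, and the island area and input shifts are arbitrary illustrative values.

```python
import numpy as np

M_XE = 131.29 * 1.66054e-27     # kg, mass of a Xe atom

def slip_time(d_inv_Q, d_f):
    """tau_s = (1/4pi) * delta(1/Q) / delta(f), with delta(f) in Hz."""
    return d_inv_Q / (4.0 * np.pi * d_f)

def peak_inertial_force(n_nm2, S_m2, A_m, f_hz):
    """F_in = rho_isl * S * A * (2 pi f)^2, with rho_isl taken as mass/area."""
    rho = n_nm2 * 1e18 * M_XE                 # kg per m^2
    return rho * S_m2 * A_m * (2 * np.pi * f_hz) ** 2

f, A = 5e6, 7.4e-9                # Hz and m, in the range of the experiment
S = (60e-9) ** 2                  # ~a 60 nm island footprint, illustrative
print(f"F_in ~ {peak_inertial_force(5.93, S, A, f):.1e} N")
print(f"tau_s example: {slip_time(2e-9, 0.025) * 1e9:.1f} ns")
```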
These results 17 , 19 , 30 and the corresponding pioneering atomistic simulations 23 , 31 have provided much valuable initial information about the temperature and system dependence of inertial friction. So far, however, crucial aspects that specifically address the island structure of the adsorbate, the edge-originated pinning and, in particular, the change in commensurability and superlubricity with coverage (issues that, in our view, are important to nanofriction) have not yet come under scrutiny. Here, we present a joint experimental and theoretical study of the sliding of adsorbate islands on a crystalline substrate, and reveal surprising information about the exceptionally easy sliding suggestive of superlubricity, about its limiting factors (caused by edges and defects), and about its eventual spontaneous demise at full coverage. Our chosen example is physisorbed Xe on Cu(111), a system for which the phase diagram is well studied (as is the case for other rare-gas adsorbates on graphite and metal surfaces) 28 . Between ∼50 and 90 K, Xe monolayers condense on Cu(111) as a commensurate 2D solid. We conventionally designate this as unit coverage θ = 1, characterized by a density ρ_0 = 2/(√3 a_0^2), where a_0 is the commensurate adatom spacing. Low-energy electron diffraction at 50 K locates the Xe atoms on top of surface Cu atoms 32 , the planar distance (a_0) of which is close to the Xe–Xe spacing in bulk Xe (a_Xe = 0.439 nm). At lower temperatures, the full Xe monolayer is known from surface extended X-ray measurements to shrink into an ‘overdense’ (ρ > ρ_0) incommensurate structure, reaching commensurability only at 50 K following thermal expansion 33 . Conversely, the 2D atom density in Xe monatomic islands, which coexist with the adatom 2D vapour at submonolayer coverage, is not specifically known, but is often assumed to be equal to ρ_0. Our results in fact show that the 2D crystalline Xe islands (θ < 1) are slightly ‘underdense’ (ρ_isl < ρ_0) and increasingly incommensurate with thermal expansion, reaching a 2D density 4% below ρ_0 near 50 K. In this incommensurate state, the 2D lattice inside the Xe islands should slide superlubrically over the Cu(111) substrate, as expected for a ‘hard’ slider. Indeed, even though the Xe–Xe attraction V_Xe−Xe ≈ 20 meV is an order of magnitude smaller than the Xe–Cu(111) adhesion energy E_a ≈ 190 meV (ref. 28 ), it is an order of magnitude larger, and thus harder, than the weak Cu(111) surface corrugation, E_c ≈ 1–2 meV (ref. 34 ). We also find that the large Xe/Cu adhesion leads to another important consequence, the tribological impact of which has not generally been described. At monolayer completion (where the 2D adatom gas disappears), a positive 2D (spreading) pressure suddenly builds up as extra adatoms strive to enter the first layer and benefit from the substrate attraction, rather than forming a second layer where the attraction is much smaller. This process is clearly revealed in the QCM data for Xe/Ag(111) obtained by Krim's group 17 , which indicate a Xe density increase of ∼5% following monolayer formation. Ideally, as the submonolayer coverage grows, this spontaneous density-increase process should start at θ_c ∼ ρ_isl/ρ_0 < 1 and continue until limited either by the build-up of 2D pressure, or by a strong accidental commensurability with the substrate, whichever comes first as θ grows beyond θ_c.
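As a quick arithmetic cross-check of the commensurate density, the sketch below uses standard triangular-lattice geometry and the textbook Cu nearest-neighbour distance (values we supply, not taken from this paper).

```python
import numpy as np

# Cross-check of the commensurate density. For a 2D triangular lattice with
# spacing a0, the areal density is 2 / (sqrt(3) * a0**2); the sqrt(3) x
# sqrt(3) R30 spacing on Cu(111) follows from the Cu nearest-neighbour
# distance (0.2556 nm) -- standard crystallographic values.

a_cu = 0.2556                        # nm, Cu(111) surface nearest-neighbour distance
a0 = np.sqrt(3) * a_cu               # nm, commensurate Xe-Xe spacing
rho0 = 2.0 / (np.sqrt(3) * a0**2)    # atoms per nm^2

print(f"a0 = {a0:.4f} nm (cf. bulk Xe spacing 0.439 nm)")
print(f"rho0 = {rho0:.2f} atoms/nm^2 (the text quotes 5.93)")
```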
If corrugation, commensurability and entropic effects are ignored, and assuming for simplicity a first-neighbour Xe–Xe attraction −V, the potential energy density change following a monolayer density increase from ρ to ρ + δρ can be estimated roughly as the balance between the adhesive energy gained by the extra atoms and the elastic compression cost; the latter is set by the Xe monolayer Lamé coefficients λ and μ, with m the mass of a Xe atom and v_L the monolayer longitudinal sound velocity entering through the elastic constants. With parameters appropriate for the Xe monolayer (v_L = 1.3 km s^−1), this yields (δρ/ρ) ≈ 6%, close to the experimental compression of Xe/Ag(111). We note that a compression of this magnitude would amount to several kbar in bulk Xe. In the present case of Xe/Cu(111), and unlike Xe/Ag(111), the upward jump in 2D density from ρ_isl is arrested at ρ_0 by commensurability, and we estimate its magnitude at ∼4%. The gist of these preliminary theoretical considerations is that, near 50 K, submonolayer Xe islands must be incommensurate and most probably superlubric, whereas the full monolayer is commensurate and probably pinned. QCM measurements The friction of the Xe monolayers was measured with a QCM. The microbalance consisted of an AT-cut quartz disk whose principal faces were optically polished and covered by a gold keyhole electrode commercially evaporated on one face and a copper electrode on the other face ( Fig. 1 ). The QCM was driven at its fundamental mode with resonance frequency f_res ≈ 5 MHz using a frequency modulation (FM) technique. An a.c. voltage V_D was applied across the two electrodes at a frequency equal to that of the mechanical resonance, and drove the two parallel faces of the quartz plate in an oscillating, transverse shear motion. Varying V_D changed the power dissipated in the quartz and the amplitude A of the lateral oscillations of the electrodes. The latter quantity was calculated from the formula A = 1.4 Q_S V_D, where Q_S is the quality factor of the series resonance, A is measured in picometres and V_D is the peak driving voltage in volts (ref. 35 ). Figure 1a presents a family of resonance curves measured in vacuum and at T = 48 K for different values of A. On the horizontal axis the frequency of the generator f is normalized to f_res, and the vertical axis shows the corresponding amplified voltage V_QCM normalized to the peak value V_res. No variation in the resonance curve is detected within the amplitude range investigated. The continuous line is a nonlinear least-squares fit to the data, which yields a quality factor of the quartz of Q = 220,000 (ref. 20 ). The condensation of a film on the electrodes is signalled by a decrease in the resonance frequency f_res. Any dissipation taking place at the solid/film interface is detected by a decrease in the corresponding resonance amplitude V_res (ref. 36 ). Figure 1: The quartz crystal microbalance and characterization of the Cu(111) electrode. a , Normalized resonance curves of the QCM measured in vacuum and at T = 48 K for different oscillating amplitudes A. The red line is a fit to the data for A = 7 nm, and f_res ≈ 5 MHz represents the series resonance of the quartz crystal. Top left inset: sketch of the QCM with Cu and Au electrodes. Bottom left inset: the shear motion of the QCM at resonance. Arrows represent lateral displacements. Top right inset: Xe gas dosing on the QCM. b , Pole figure (stereographic plot) of the Cu [111] peak. The units of the contour plot labels are counts per second (c.p.s.). ψ (sample tilt angle) is varied between 0 and 50° and ϕ (sample rotation angle) between 0 and 180°.
c , Continuous line: ϕ-averaged Cu [111] intensity as a function of ψ. Dashed line: cos(ψ) law normalized to the Cu [111] intensity at ψ = 0°. d , STM derivative image of the Cu film. Image size: 150 × 150 nm. The Cu(111) electrode was prepared by depositing on the other, bare quartz face a Cu (30 nm)/Cr (10 nm) bilayer at room temperature under ultrahigh-vacuum conditions using Knudsen effusion sources. The Cr buffer layer was used to promote adhesion between the quartz substrate and the Cu film. The deposition rates were 2.2 Å s^−1 and 4.7 Å s^−1 for Cr and Cu, respectively. Before deposition, the QCM was heated for 30 min at 250 °C to remove condensed impurities. After deposition, the surface cleanliness of the Cu film was checked by X-ray photoelectron spectroscopy (XPS) ( Supplementary Section ‘XPS characterization of Cu(111) surface’ ). Figure 1b shows the XRD intensity of the Cu [111] reflection as a function of sample orientation (a pole figure). A well-defined peak close to a tilt angle of 0° indicates that most of the Cu grains are oriented with the [111] crystal axis perpendicular to the sample surface. No other preferred orientations are detected. Geometrical effects due to the limited size of the sample with respect to the X-ray beam give rise to a broad peak at ψ = 0°, with a cos(ψ) dependence. Figure 1c shows the [111] reflection intensity, averaged over the sample rotation angle ϕ, as a function of the sample tilt angle ψ. To discriminate between preferential orientation and finite-sample-size effects, a cos(ψ) law normalized to the maximum intensity of the peak is also shown in Fig. 1c . The measured intensity peak is clearly sharper than cos(ψ), indicating that the ψ dependence of the [111] intensity is mostly due to the preferential orientation of the Cu grains. Figure 1d presents a scanning tunnelling microscope (STM) derivative image of the sample surface taken in situ immediately after deposition. Large flat grains 40–50 nm in lateral size are clearly visible. The typical area of the (111) terraces is A_0 ≈ 2.5 × 10^3 nm^2. Xe was condensed directly onto the Cu(111) electrode of the QCM at temperatures between 47 and 49 K. Lower temperatures could not be reached due to the poor thermal coupling to the cold head of the cryocooler, and higher values were limited by evaporation of the Xe monolayer 22 . Within this very narrow temperature interval, no systematic and reproducible variations attributable to T were observed. Between consecutive deposition scans, the QCM was heated to ∼60 K to guarantee full evaporation of the Xe and thermal annealing of the microbalance 37 . Figure 2 presents the measured slip time τ_s of Xe at T = 47 K with a moderate oscillating amplitude of A = 7.4 nm for the Cu electrode. The coverage was deduced from the frequency shift, assuming for the monolayer an areal density of ρ_0 = 5.93 atoms nm^−2, corresponding to completion of the commensurate solid phase and equivalent to a frequency shift of 7.3 Hz. Besides some initial pinning (τ_s = 0) at the lowest coverages (θ < 0.05), where Xe is known to condense at steps and defects 38 , the data show depinning with a rapid increase of τ_s, reaching peak values of up to 4 ns, nearly an order of magnitude larger than the slip times measured with Xe on gold and on graphene at the same temperature 22 .
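The drive-amplitude relation quoted earlier can be checked numerically; the sketch below reads "Q_S" as the quality factor of the series resonance (our interpretation), and the drive voltage is an illustrative value chosen to land near the experimental amplitude.

```python
import numpy as np

# Sketch of A = 1.4 * Q_S * V_D (A in picometres, V_D the peak drive voltage
# in volts), with Q_S interpreted as the series-resonance quality factor.
# The peak lateral speed then follows as v = 2*pi*f*A.

def amplitude_pm(Q_s, V_D):
    """Lateral oscillation amplitude of the QCM electrodes, in picometres."""
    return 1.4 * Q_s * V_D

Q_s = 220_000            # quality factor measured in vacuum at 48 K
f = 5e6                  # Hz, fundamental resonance
V_D = 0.024              # V, illustrative peak drive voltage (assumed)

A = amplitude_pm(Q_s, V_D) * 1e-12   # convert pm to m
print(f"A ~ {A * 1e9:.1f} nm, peak speed v = 2*pi*f*A ~ {2 * np.pi * f * A:.2f} m/s")
```

With these inputs the amplitude comes out near the 7.4 nm used in the scans, and the peak speed near the 0.23 m/s quoted below.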
This slipperiness of Xe islands on Cu(111) is all the more puzzling because it runs contrary to the commensurate pinning that, according to the literature 33 , is supposed to be in place at 47 K. The large and rapidly rising submonolayer slip time is a dominant feature in Fig. 2 , which we will now qualify as evidence of incommensurability and superlubricity. The second and even more unusual feature is the sudden slip-time collapse near θ ≈ 1, which also exhibits a mysterious variability between experiments. Both will be shown to signify an abrupt increase in density leading to a compressed monolayer. Figure 2: Slip time of Xe on Cu(111) as a function of film coverage. The scan was taken at T = 47 K with f_res ≈ 5 MHz and at an oscillating amplitude of the Cu electrode of A = 7.4 nm. Inset: scans of Xe on Cu(111) taken for different Xe depositions on the same substrate at the same A and temperatures between 47 and 49 K. Note the sharp drop occurring at coverage near a full monolayer, albeit with large fluctuations. Theory and MD simulations of sliding islands The physics behind these experimental results can be directly addressed by frictional molecular dynamics (MD) simulations. The power P dissipated by a hard crystalline island of area S sliding on a crystal surface under a uniform force F is given by the sum P = P_b + P_e of an intrinsic ‘bulk’ term P_b (the friction of an equal portion of infinite adsorbate with the same 2D density) and extrinsic or defect terms P_e, representing corrections for the island finite size and substrate defects. The contribution to P_e that accounts for substrate imperfections and defects depends on the oscillation amplitude, and is reduced for smaller oscillation amplitudes, when fewer defects are overlapped by the moving island ( Supplementary Section ‘Slip times measured at different oscillating amplitudes’ ). The residual contribution to P_e, present even on defect-free terraces and for all amplitudes, is the friction caused by the island finite size, and can be conventionally designated as an ‘edge’ contribution 27 . Quite generally, P_e presents an area dependence different from that of P_b, namely P_b ∼ S versus P_e ∼ S^γ, where γ < ½ depending on the nature of the pinning centres 27 . The force dependencies of the two terms also differ. Extrinsic defects and/or island edges imply a small but non-zero static friction force F_se, so P_e will vanish either for F < F_se (due to pinning) or for F ≫ F_se (where the sliding becomes asymptotically free). The bulk, intrinsic frictional power P_b depends strongly on the commensurability of the island 2D lattice structure with the crystal surface lattice. Hard incommensurate islands have zero bulk static friction, and will therefore slide superlubrically, that is, with a relatively small kinetic friction force growing viscous-like with speed, F ∼ v^ν, where ν ≈ 1. This bulk friction of a superlubric slider, negligible at low-speed sliding such as at the μm s^−1 speeds typical of atomic force microscopy (AFM) 13 , becomes accessible in QCM, where peak speeds are many orders of magnitude higher (here v ∼ ωA ≈ 0.23 m s^−1). Commensurate systems are, on the other hand, pinned by static friction F_sb, so P_b is zero until the force reaches a large depinning force F ∼ F_sb, at which point, as shown for example by colloidal simulations 15 , 39 , frictional dissipation has its peak.
In our model, the Cu(111) substrate is treated as a fixed and rigid triangular lattice, exerting on the mobile Xe adatoms an average attractive potential of −E_a = −190 meV, with a corrugation of 1 meV between the Cu on-top site (the energy minimum for a Xe adatom) and the Cu hollow site (the energy maximum). Each Xe adatom is thus subject to the overall potential V_Xe−Xe + V_Xe−Cu. The Xe–Xe interaction is modelled by a regular Lennard–Jones 12-6 potential, with parameters ε = 20 meV and σ = 3.98 Å. Smaller corrections due to three-body forces, as well as substrate-induced modifications of this two-body force, are ignored. The Xe–Cu interaction is modelled by the modified Morse potential V_Xe−Cu(x, y, z) = α(x, y)[2e^{−β(z−z_0)} − e^{−2β(z−z_0)}], where z_0 = 3.6 Å (refs 40 , 41 ). We define the modulating function M(x, y), normalized to span the interval from 0 (top sites) to 1 (hollow sites), as the standard triangular-lattice combination of cosines of the three shortest reciprocal lattice vectors of the Cu(111) surface, where the constant entering those vectors is the nearest-neighbour distance of surface Cu atoms (≈2.56 Å). The Morse potential energy parameter is given by α(x, y) = −E_a + M(x, y)E_c. The inverse length β in the Morse potential is obtained by equating the second derivative of the potential at its minimum to the experimental spring constant, mω^2 = 2E_aβ^2, where m is the atomic mass of xenon. With a perpendicular vibration energy of ħω ≃ 2.8 meV (ref. 40 ), we obtain β = 0.8 Å^−1. The equations of motion are integrated using a velocity–Verlet algorithm, coupled to a Langevin thermostat with a damping coefficient γ = 0.1 ps^−1, a damping whose value is not critical and which is not applied to the translational degrees of freedom of the Xe island centre of mass. Islands are obtained by a circular cutting of a Xe monolayer, the radius of which determines the island size. The so-formed islands are deposited on the Cu substrate with a random orientation angle, as this is expected to occur experimentally and is not critical to the results. The simulation protocol for the slip-time calculation starts with the system being equilibrated at 48 K for 100 ps. The Xe centre-of-mass velocity along the x-axis is then set at v_i = 100 m s^−1 and the simulation evolves until the motion stops. The slowdown is fit very well by an exponential, indicating a purely viscous friction. The slip time is extracted, after skipping the initial transient, by an exponential fit of the form v(t) = v_i e^{−t/τ_s}, as shown in Fig. 3 for an island of diameter ∼60 nm. Figure 3: Spontaneous frictional slowdown of a 60 nm circular island. An island of density ρ_isl/ρ_0 = 0.96 was initially kicked at large speed and T = 48 K and then set free to move without thermostatting. The wide green line is obtained by superposition of five simulations. The excellent exponential fit (orange curve) confirms the viscous sliding of the island. The excellent exponential fit confirms that the island sliding is indeed viscous, with a slip time τ_s as large as 5 ns, directly comparable with the experimental values of Fig. 2 . Moreover, even when no extrinsic defects were included, the slip time obtained still varied with the island area S, well fit by τ_s^−1 = a + bS^{γ−1}, with γ ≈ ¼ (as shown in Fig. 4 ). This sublinear exponent is the same as that found for sliding clusters 13 and is similar to that found in a recent study of adsorbate static friction, where it was due to the finite size of the island 27 .
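As a schematic of how the two fits above are performed, a short Python sketch follows: synthetic data stand in for the MD trajectories, and only the ~5 ns slip time, the τ_s,bulk ≈ 5.75 ns asymptote and γ ≈ ¼ are taken from the text; all other numbers are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# (1) Slip time from a single exponential slowdown, v(t) = v_i * exp(-t/tau_s)
def v_model(t, v_i, tau_s):
    return v_i * np.exp(-t / tau_s)

t = np.linspace(0, 20e-9, 400)                              # s
v = v_model(t, 100.0, 5e-9) + rng.normal(0, 0.5, t.size)    # synthetic trace
(v_i, tau_s), _ = curve_fit(v_model, t, v, p0=(100.0, 1e-9))
print(f"tau_s = {tau_s * 1e9:.2f} ns from the slowdown fit")

# (2) Finite-size scaling of the inverse slip time, 1/tau_s = a + b*S**(gamma-1)
def inv_tau(S, a, b, gamma):
    return a + b * S ** (gamma - 1.0)

S = np.array([200., 500., 1000., 2000., 4000., 8000.])      # island areas, nm^2
tau = 1.0 / inv_tau(S, 1 / 5.75, 2.0, 0.25)                 # ns, synthetic
tau *= 1 + rng.normal(0, 0.02, S.size)
(a, b, gamma), _ = curve_fit(inv_tau, S, 1.0 / tau, p0=(0.2, 1.0, 0.3))
print(f"gamma ~ {gamma:.2f}; bulk slip time 1/a ~ {1 / a:.2f} ns")
```

The bulk (infinite-size) slip time is read off as 1/a, the asymptote approached as the edge contribution S^(γ−1) dies away.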
Under the reasonable assumption that increasing coverage corresponds to an increase in average island size, eventually reaching at full coverage (θ ≈ 1) the size of the largest Cu(111) terraces (∼50 nm), the increase of the experimental slip time with coverage shown in Fig. 2 can be attributed to the progressively decreasing role of edges 27. The intrinsic, defect-free slip time asymptotically reached in the large-size limit is that dictated by the ideal Xe lattice, which is incommensurate, hard, and superlubrically sliding with a friction growing linearly with speed. We conclude that the unusually large slip times at large submonolayer coverages signify precisely that Xe islands sliding on Cu(111) are asymptotically superlubric. Figure 4: Theoretical slip time from sliding simulations for incommensurate Xe islands of growing size, on a perfectly periodic potential representing Cu(111). The fit (dashed green line) shows that the size-controlled defect friction (here due to the island edge) gives way to very long slip times arising from bulk superlubricity in the large-size limit (τ_s,bulk ≈ 5.75 ns). The vertical line marks full monolayer coverage for an estimated experimental Cu(111) terrace size of 60 nm. When islands reach this size, we expect a spontaneous density increase with a sudden slip-time collapse due to commensurability. The second striking experimental feature is the sudden drop of slip time near monolayer coverage. The physics behind this also emerges from simulation, where additional adatoms added near full coverage θ ≈ 1 are spontaneously incorporated into the first monolayer rather than forming a second layer. As trial simulations also confirm, the extra compressional energy cost implied by this incorporation is overcompensated by the adhesive energy gain, in agreement with equation (4). The resulting 4% growth in 2D density at monolayer completion, not far from the full theoretical 6%, is arrested by the intervening, and accidental, exact commensurability, well established experimentally 32, 34. As a result of this, the slip time of Xe/Cu(111) falls (unlike that of, for example, Xe/Ag(111), which remains incommensurate after densification 17), as seen in the experimental (Fig. 2) and indicated in the theoretical (Fig. 4) results. The 2D density jump, which destroys superlubricity with increasing adsorbate coverage near one monolayer, is a sudden, first-order event. As such, it is expected to occur with hysteresis, which implies a difference between atom addition and atom removal, as well as occasional differences between one compressional event and another. As shown in the inset of Fig. 2, the Xe coverage at which the sudden slip-time drop occurs is experimentally rather erratic, in agreement with this expectation. Experimental verification of this hysteresis is difficult because of the negligible pressure of the bulk vapour in equilibrium with the film, which makes it impossible to decrease the Xe coverage by pumping gas out at the temperature of the scan. Simulated insertion/extraction of a Xe atom into/out of a full underdense monolayer (ρ/ρ_0 = 0.96) yields very asymmetric energy evolutions, actually ending with highly defected, poorly reproducible states. That result supports hysteresis and suggests, moreover, that the randomness of the slip-time drop observed in the real process of Fig. 2 is probably due to the relatively long, statistically distributed time needed for the intra-monolayer defects to heal away during the spontaneous compression process.
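As an aside on the size dependence of Fig. 4, the fit τ_s⁻¹ = a + b·S^(γ−1) is straightforward to reproduce; a sketch with synthetic slip-time data standing in for the simulation output (the scatter level and area range are illustrative assumptions), recovering the bulk slip time as 1/a:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in for the simulated slip times of Fig. 4:
# tau_s^-1 = a + b * S^(gamma - 1), with a = 1/tau_bulk and gamma ~ 1/4.
tau_bulk = 5.75                                 # ns, bulk limit quoted in the caption
a_true, b_true, gamma_true = 1.0 / tau_bulk, 2.0, 0.25
S = np.geomspace(10.0, 3000.0, 25)              # island areas (nm^2), illustrative range
tau = 1.0 / (a_true + b_true * S ** (gamma_true - 1.0))
tau *= 1.0 + np.random.default_rng(1).normal(0.0, 0.03, S.size)  # scatter

def inv_tau(S, a, b, gamma):
    # Edge/defect friction b * S^(gamma-1) on top of the bulk term a
    return a + b * S ** (gamma - 1.0)

popt, _ = curve_fit(inv_tau, S, 1.0 / tau, p0=(0.1, 1.0, 0.5))
a_fit, b_fit, gamma_fit = popt
print(f"gamma = {gamma_fit:.2f}, bulk slip time = {1.0 / a_fit:.2f} ns")
```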
Finally, temperature is one important parameter that we could not vary in our experiment, but its expected effect is worth a comment. The compressed Xe/Cu(111) monolayer has a much stronger thermal expansion than bulk Xe, which causes it to evolve from incommensurate and slightly overdense between 20 and 45 K (ref. 32) to commensurate at 50 K (refs 31, 32) and 60 K (ref. 33), and incommensurate again, now slightly underdense, at higher temperatures (a fine feature hard to pick up in pioneering low-energy electron diffraction (LEED) studies 42). Incommensurability and superlubricity at 77 K are strongly suggested by the exceedingly long slip times, of the order of 20 ns, observed by Coffey and Krim 18. From superlubric islands to pinned monolayers The inertial sliding of submonolayer Xe islands on the Cu(111) surface investigated here offers an ideal playground in which to delve deeper into some important frictional phenomena. The long slip times and the theoretically demonstrated incommensurability between the adsorbate and substrate lattices characterize the sliding as asymptotically superlubric for large island areas, limited only by defect- and edge-related friction. A sudden spontaneous compression at monolayer completion, caused by strong adhesion to the substrate, is predicted and observed. Specific to Xe/Cu(111) is the ensuing commensuration accidentally reached during compression, causing a peculiar transition from superlubric sliding with a large slip time to its sudden vanishing in the dense pinned state. Both the island superlubricity and the compressional transition, the latter generally leading from one incommensurate state to another, are general phenomena characterizing sliding friction for adsorbates on crystalline surfaces, and are not specific to the system under study. Technological nanodesign addressing the control of crystal friction properties at the most intimate level will have to take these elementary mechanisms into account.
It is possible to vary (even dramatically) the sliding properties of atoms on a surface by changing the size and "compression" of their aggregates: an experimental and theoretical study conducted with the collaboration of SISSA, the Istituto Officina dei Materiali of the CNR (Iom-Cnr-Democritos), ICTP in Trieste, the University of Padua, the University of Modena and Reggio Emilia, and the Istituto Nanoscienze of the CNR (Nano-Cnr) in Modena has just been published in Nature Nanotechnology. (Nano)islands that slide freely on a sea of copper but end up getting stuck when they become too large (and too dense): that nicely sums up the system investigated in the study. "We can suddenly switch from a state of superlubricity to one of extremely high friction by varying some parameters of the system being investigated. In this study, we used atoms of the noble gas xenon bound to one another to form two-dimensional islands, deposited on a copper surface, Cu(111). At low temperatures these aggregates slide with virtually no friction," explains Giampaolo Mistura of the University of Padua. "We increased the size of the islands by adding xenon atoms; until the whole available surface was covered, the friction decreased gradually. Instead, when the available space ran out and the addition of atoms caused the islands to compress, we saw an exceptional increase in friction." The study was divided into an experimental part (mainly carried out by the University of Padua and Nano-Cnr/University of Modena and Reggio Emilia) and a theoretical part (based on computer models and simulations) conducted by SISSA/Iom-Cnr-Democritos/ICTP. "To understand what happens when the islands are compressed, we need to appreciate the concept of 'interface commensurability'," explains Roberto Guerra, researcher at the International School for Advanced Studies (SISSA) in Trieste and among the authors of the study. "We can think of the system we studied as one made up of Lego bricks. The copper substrate is like a horizontal assembly of bricks and the xenon islands like single loose bricks," comments Guido Paolicelli of the CNR Nanoscience Institute. "If the substrate and the islands consist of different bricks (in terms of width and distance between the studs), the islands will never get stuck on the substrate. This situation reproduces our system at temperatures slightly above absolute zero, where we observe a state of superlubricity with virtually no friction. However, the growth of the islands' surface area and the resulting compression of the material cause the islands to become commensurate with the substrate – like Lego bricks having the same pitch – and when that happens they suddenly get stuck." Sample of crystalline copper used as a 'sliding' substrate. Credit: Nano-Cnr, Modena The study is the first to demonstrate that it is possible to dramatically vary the sliding properties of nano-objects. "We can imagine a number of applications for this," concludes Guerra. "For example, nanobearings could be developed that, under certain conditions, are capable of blocking their motion, in a completely reversible manner."
10.1038/nnano.2015.106
Medicine
Trial suggests inducing labor over 'wait and see' approach for late term pregnancies
Induction of labour at 41 weeks versus expectant management until 42 weeks (Swedish post-term induction study, SWEPIS): multicentre, open label, randomised superiority trial, BMJ (2019). DOI: 10.1136/bmj.l6131 , www.bmj.com/content/367/bmj.l6131 Journal information: British Medical Journal (BMJ)
http://dx.doi.org/10.1136/bmj.l6131
https://medicalxpress.com/news/2019-11-trial-labor-approach-late-term.html
Abstract Objective To evaluate if induction of labour at 41 weeks improves perinatal and maternal outcomes in women with a low risk pregnancy compared with expectant management and induction of labour at 42 weeks. Design Multicentre, open label, randomised controlled superiority trial. Setting 14 hospitals in Sweden, 2016-18. Participants 2760 women with a low risk uncomplicated singleton pregnancy randomised (1:1) by the Swedish Pregnancy Register. 1381 women were assigned to the induction group and 1379 were assigned to the expectant management group. Interventions Induction of labour at 41 weeks and expectant management and induction of labour at 42 weeks. Main outcome measures The primary outcome was a composite perinatal outcome including one or more of stillbirth, neonatal mortality, Apgar score less than 7 at five minutes, pH less than 7.00 or metabolic acidosis (pH <7.05 and base deficit >12 mmol/L) in the umbilical artery, hypoxic ischaemic encephalopathy, intracranial haemorrhage, convulsions, meconium aspiration syndrome, mechanical ventilation within 72 hours, or obstetric brachial plexus injury. Primary analysis was by intention to treat. Results The study was stopped early owing to a significantly higher rate of perinatal mortality in the expectant management group. The composite primary perinatal outcome did not differ between the groups: 2.4% (33/1381) in the induction group and 2.2% (31/1379) in the expectant management group (relative risk 1.06, 95% confidence interval 0.65 to 1.73; P=0.90). No perinatal deaths occurred in the induction group but six (five stillbirths and one early neonatal death) occurred in the expectant management group (P=0.03). The proportion of caesarean delivery, instrumental vaginal delivery, or any major maternal morbidity did not differ between the groups. Conclusions This study comparing induction of labour at 41 weeks with expectant management and induction at 42 weeks does not show any significant difference in the primary composite adverse perinatal outcome. However, a reduction of the secondary outcome perinatal mortality is observed without increasing adverse maternal outcomes. Although these results should be interpreted cautiously, induction of labour ought to be offered to women no later than at 41 weeks and could be one (of few) interventions that reduces the rate of stillbirths. Trial registration Current Controlled Trials ISRCTN26113652 . Introduction Adverse perinatal outcomes gradually increase after 40 gestational weeks and are substantially increased post-term (≥42 weeks (≥294 days)). 1 2 The risk of stillbirth has been shown to increase after term, 1 2 3 4 5 and worldwide as much as 14% of stillbirths are associated with prolonged pregnancy. 2 Furthermore, maternal complications also increase with duration of pregnancy after 40 weeks. 1 To date, no agreement exists on how to manage late term (41 weeks+0 days to 42 weeks+0 days) pregnancies. The World Health Organization recommends induction of labour at 41 weeks, 6 and many countries offer induction of labour between 41 and 42 weeks to avoid prolonged pregnancy. 7 8 Randomised controlled trials have compared induction of labour with expectant management in prolonged pregnancies, most with inconclusive results for perinatal mortality and major morbidity. 
9 The results from the latest Cochrane review (2018) showed lower rates of caesarean delivery and perinatal death but a higher rate of operative vaginal delivery in the induction group compared with the expectant management group. 9 After the latest Cochrane review and after the initiation of the present study, 10 two large randomised controlled trials examining low risk pregnancies have been published. A large trial from the United States, ARRIVE (A Randomized Trial of Induction Versus Expectant Management), compared induction of labour in nulliparous women at 39 weeks+0 days to 39 weeks+4 days with expectant management until 41 weeks+0 days. 11 No significant difference was found in perinatal outcome between groups, whereas the frequency of caesarean delivery was significantly lower in the early induction group. Another large recent trial from the Netherlands, INDEX (INDuction of labour at 41 weeks with a policy of EXpectant management until 42 weeks), compared induction of labour at 41 weeks+0 days to 41 weeks+1 day with expectant management until 42 weeks+0 days. 12 The results could not confirm non-inferiority for adverse perinatal outcome of expectant management; instead, a significantly higher risk of adverse perinatal outcome was found in the expectant management group. No significant difference in the rate of caesarean delivery was found. The current practice in many centres in the United Kingdom and Scandinavia is to induce delivery no later than 42 weeks, but several studies suggest that the risk of perinatal mortality and morbidity is already significantly increased at 41 weeks. 3 4 5 The risk of stillbirth increases gradually from 39 weeks of gestation 13 and increases exponentially as the pregnancy approaches 42 weeks, 3-5 13 whereas the risk of neonatal mortality is not increased until 42 weeks according to most studies. 3-5 13 We therefore found it clinically justified to compare induction of labour at 41 weeks with expectant management and induction at 42 weeks for maternal and perinatal outcomes. At the start of the present trial, only two studies (one of them an abstract) out of 30 included in the Cochrane review specifically compared induction of labour at 41 weeks with expectant management until 42 weeks. 14 15 We evaluated whether induction of labour at 41 weeks+0-2 days compared with expectant management and induction of labour at 42 weeks+0-1 days was superior in terms of perinatal outcome in healthy women with a low risk pregnancy. Methods Study design SWEPIS (SWEdish Post-term Induction Study) was a multicentre, open label, randomised controlled superiority trial conducted in Sweden from May 2016 to October 2018. The trial was register based, with randomisation and most data collection done by using the Swedish Pregnancy Register. 16 Fourteen hospitals with antenatal clinics linked to the register were involved in the trial. Five of the hospitals were university clinics and nine were county hospitals, together accounting for about 60 000 deliveries per year of the around 115 000 to 120 000 annual deliveries in Sweden. The trial was conducted according to the CONSORT guidelines. The protocol is available online ( ) and as a publication. 10 The trial was undertaken within the Swedish Network for National Clinical Studies within Obstetrics and Gynaecology (SNAKS).
Participants Pregnant women were eligible for participation if they were aged 18 or more, understood oral and written information, and had a singleton pregnancy with a fetus in cephalic presentation at 40 weeks+6 days to 41 weeks+1 day according to ultrasound based dating in the first or early second trimester or for pregnancies after assisted reproduction according to the day of oocyte retrieval. Exclusion criteria were previous caesarean delivery or other uterine surgery, pregestational and insulin dependent gestational diabetes, hypertensive disorder of pregnancy, known oligohydramnios (amniotic fluid index <50 mm or deepest vertical pocket <20 mm) or small for gestational age fetus (estimated fetal weight ≤2 standard deviations according to the sex and gestational age specific Swedish reference), 17 diagnosed fetal malformation, contraindication to vaginal delivery, and any other maternal condition affecting the progress of the pregnancy to 42 weeks. Study logistics General information about the study was provided in the form of posters or videos in the waiting rooms at the antenatal clinics and by advertising in local newspapers. More detailed information was provided on the study website. When the pregnancies were at around 40 weeks, the midwives provided women with an oral account of the study in Swedish or written information in any of 17 other languages applicable to women who were non-Swedish. In the Stockholm region (five clinics), women were enrolled in association with a 41 week ultrasound scan, which is offered to all pregnant women in the region. This is a voluntary procedure, with almost 100% coverage, aiming to confirm a normal pregnancy (defined as mean fetal abdominal diameter >110 mm and normal amniotic fluid) before proceeding to 42 weeks. The midwife performing the ultrasonography answered questions about the study and handled the randomisation after written informed consent was obtained. In all other centres, women interested in taking part were invited to visit a research midwife who managed patient consent and randomisation. Outside the Stockholm region, 41 week scans were not routinely offered. Randomisation and masking Randomisation was done between 40 weeks+6 days and 41 weeks+1 day. Enrolled women were allocated to the induction group or expectant management group (controls). In the induction group, labour was induced within 24 hours of randomisation (ie, same or next day) but not earlier than 41 weeks+0 days. In the expectant management group, labour was induced at 42 weeks+0 days to 42 weeks+1 day. Allocation to a trial group, 1:1, was done with central online randomisation by dynamic allocation, a method that actively minimises the imbalance between the groups for each new patient that is randomised. Centre and parity (primiparity versus multiparity) were used as minimisation variables. The Swedish Pregnancy Register 16 set up the randomisation module, which was incorporated in the register but separate from the register data. Access to the randomisation module used a separate log-in system. The module also included an electronic case report form. After delivery and the neonatal period, we used the women’s unique personal identification number to retrieve data on antenatal, delivery, and neonatal characteristics from the Swedish Pregnancy Register and Swedish Neonatal Quality Register. 18 Because most variables in the study were included in the quality registers, the study could be performed relatively fast and at low cost. 
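The register's dynamic-allocation scheme is not spelled out here, but minimisation with centre and parity as balancing variables can be sketched as follows (a minimal Pocock–Simon-style sketch in Python; the biased-coin probability and imbalance measure are illustrative assumptions, not the register's actual settings):

```python
import random

rng = random.Random(0)
ARMS = ("induction", "expectant")

def minimise_assign(new_pt, history, p_pref=0.8):
    """Assign the next woman to the arm that minimises imbalance in the
    minimisation variables (here centre and parity), using a biased coin.
    new_pt  -- dict, e.g. {"centre": "A", "parity": "primiparous"}
    history -- list of (patient_dict, arm) pairs already randomised
    """
    imbalance = {}
    for arm in ARMS:
        total = 0
        for var, level in new_pt.items():
            counts = {a: sum(1 for pt, assigned in history
                             if assigned == a and pt[var] == level)
                      for a in ARMS}
            counts[arm] += 1  # hypothetically add the new patient to `arm`
            total += max(counts.values()) - min(counts.values())
        imbalance[arm] = total
    best, other = sorted(ARMS, key=imbalance.get)
    if imbalance[best] == imbalance[other]:
        return rng.choice(ARMS)                      # tie: plain 1:1 randomisation
    return best if rng.random() < p_pref else other  # biased coin towards balance

history = []
for _ in range(12):
    pt = {"centre": rng.choice("AB"),
          "parity": rng.choice(["primiparous", "multiparous"])}
    history.append((pt, minimise_assign(pt, history)))
print([arm for _, arm in history])
```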
Owing to the nature of the intervention it was not possible to blind participants or care givers. Strategies Induction of labour was carried out in the same way in both groups. At admission, the women were examined for blood pressure, proteinuria, fetal presentation by abdominal palpation, cervical status, and fetal wellbeing by cardiotocography. Amniotomy was performed if the fetal head was well engaged and the cervix was ripe (Bishop score ≥6 for primiparous women and ≥5 for multiparous women), followed by oxytocin infusion after 1-2 hours without spontaneous regular contractions. If the fetal head was not engaged or the cervix was less ripe, any of the following methods was used according to local routines: mechanical dilation with a Foley-like catheter, prostaglandin E1 (misoprostol, oral or vaginal), or prostaglandin E2 (dinoprostone, vaginal). After randomisation, no monitoring was offered within the framework of the trial. In Sweden, most antenatal clinics offer one follow-up visit after term, usually around 41 weeks, including measurement of blood pressure, fundal height, and fetal heart rate by doptone. Further examinations, induction of labour, or caesarean delivery are initiated for usual obstetric indications, such as decreased fetal movements, suspected fetal growth restriction, or pre-eclampsia. After 41 weeks, the threshold for interventions is low. Indication for a scheduled caesarean section included undiagnosed breech or transverse presentation with failed external version. Fetal scalp blood sampling (pH or lactate) was performed during labour when indicated. Outcomes The primary outcome was a composite perinatal outcome of mortality and morbidity. Perinatal mortality was defined as stillbirth and neonatal death (days 0-27). Neonatal morbidity was defined as one or more of several outcomes: Apgar score less than 7 at five minutes, pH less than 7.00 or metabolic acidosis (pH <7.05 and base deficit >12 mmol/L) in the umbilical artery, hypoxic ischaemic encephalopathy grades 1-3, intracranial haemorrhage, convulsions, meconium aspiration syndrome, mechanical ventilation within 72 hours, or obstetric brachial plexus injury. Secondary neonatal outcomes were the individual components of the primary perinatal outcome, admission to a neonatal intensive care unit, Apgar score less than 4 at five minutes, birth weight, macrosomia (≥4500 g), neonatal jaundice, therapeutic hypothermia, pneumonia, or sepsis. Secondary maternal outcomes were use of epidural anaesthesia, caesarean delivery, operative vaginal delivery, duration of labour (from onset of regular contractions to delivery), chorioamnionitis, shoulder dystocia, third or fourth degree perineal tear, postpartum haemorrhage (>1000 mL), wound infection, urinary tract infection, endometritis, sepsis, and breastfeeding at discharge from hospital and at four weeks post partum. Exploratory neonatal outcomes were neonatal hypoglycaemia, birth trauma (fracture of long bone, clavicle, or skull, other neurological injury, retinal haemorrhage, or facial nerve palsy), small for gestational age, 17 and large for gestational age. 17 Exploratory maternal outcomes were cervical tear, uterine rupture, hypertensive disorders of pregnancy (pre-eclampsia, gestational hypertension, eclampsia), venous thromboembolism, duration of stay in hospital, admission to intensive care unit, and mortality within 42 days. 
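The induction algorithm described above reduces to a simple rule; a schematic encoding (an illustrative sketch of the stated thresholds only, not clinical software):

```python
def induction_method(bishop_score, primiparous, head_engaged):
    """Schematic encoding of the trial's induction algorithm, using the
    Bishop score thresholds described in the text."""
    ripe = bishop_score >= (6 if primiparous else 5)
    if head_engaged and ripe:
        # amniotomy, then oxytocin after 1-2 h without spontaneous regular contractions
        return "amniotomy, then oxytocin infusion if needed"
    # otherwise cervical ripening, by local routine
    return "cervical ripening: Foley-like catheter, misoprostol, or dinoprostone"

print(induction_method(bishop_score=6, primiparous=True, head_engaged=True))
print(induction_method(bishop_score=4, primiparous=False, head_engaged=True))
```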
Data collection We retrieved data on maternal background, pregnancy and delivery characteristics, and perinatal outcomes from the Swedish Pregnancy Register and the Swedish Neonatal Quality Register. 16 18 Both are certified national quality registers initiated by Swedish healthcare professionals. Data prospectively entered in standardised electronic medical records by midwives and clinicians during pregnancy, delivery, and post partum are forwarded to the Swedish Pregnancy Register from all antenatal clinics and most delivery clinics. In the same way, the Swedish Neonatal Quality Register collects data on all newborns admitted to neonatal intensive care units at birth or within 28 days of life. We obtained vital statistics on maternal and neonatal mortality from Statistics Sweden. Study data were linked with data from the Swedish Pregnancy Register, Swedish Neonatal Quality Register, and Statistics Sweden using the unique personal identification number allocated to each person in Sweden at birth or after immigration. 19 For all newborns with a primary outcome we collected and scrutinised the medical records. The same process was undertaken for the women with a diagnosis of endometritis to rule out misclassification of sepsis. To estimate selection bias we compared the baseline characteristics and pregnancy outcomes of our study population with those of the Swedish background population. Monitoring Before the trial started, an independent Data and Safety Monitoring Board comprising a statistician, senior obstetrician, and senior midwife was formed to supervise the trial through regular reviews. The principal investigators reported serious adverse events immediately to the Data and Safety Monitoring Board, defined as any of perinatal or maternal death; need for neonatal intensive care because of meconium aspiration syndrome, asphyxia, intracranial haemorrhage, or other severe condition; severe maternal morbidity with admission to intensive care unit; and complication associated with induction of labour, such as placental abruption at insertion of Foley catheter, or uterine rupture. An interim analysis was planned when 50% of the women had been recruited and had delivered. Sample size and statistical analyses To reduce the primary outcome by one third, from 2.7% to 1.8% (superiority testing, level of significance 0.05, power 80%) by induction of labour at 41 weeks compared with expectant management until induction at 42 weeks, we needed a sample size of 10 038 women, 5019 in each randomisation group. This calculation assumed that for 10% of the women, management would not be consistent with the assigned strategy, thus also covering the same power for the per protocol analysis as for the intention to treat analysis. The composite primary outcome rate of 2.7% was based on data on perinatal outcomes included in our primary outcome in one Swedish region (Region Skåne) between 2000 and 2010. The statistical analyses were carried out according to a prespecified analysis plan. Main analyses were performed on the intention to treat population. The primary statistical analysis was the comparison between the induction group and the expectant management group for the primary perinatal composite outcome, with Fisher’s exact test (lowest one sided P value multiplied by 2) at a significance level of 0.05.
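As a rough cross-check of the sample size quoted above, a standard normal-approximation calculation for two proportions lands in the same range as the roughly 5000 women per group of the protocol (a sketch; the protocol's exact method and its 10% non-adherence adjustment are not specified here, so the attenuated-effect step below is an assumption):

```python
from math import sqrt
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for comparing two proportions."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

p1, p2 = 0.027, 0.018                       # primary outcome: 2.7% reduced by one third
print(round(n_per_group(p1, p2)))           # ~4300 per group, unadjusted

# Assumed handling of 10% non-adherence: the observable effect is diluted,
# which inflates the required size towards the ~5000/group of the protocol.
p2_diluted = p1 - 0.9 * (p1 - p2)
print(round(n_per_group(p1, p2_diluted)))   # ~5400 per group
```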
To compare secondary outcomes, we used Fisher’s exact test for dichotomous variables, Fisher’s non-parametric permutation test for continuous variables, the Mantel–Haenszel χ2 test for ordered categorical variables, and Pearson’s χ2 test for non-ordered categorical variables. For the primary efficacy variable (the perinatal composite outcome) and dichotomous secondary variables we calculated relative risks with corresponding 95% confidence intervals between the groups. For continuous secondary variables we calculated mean differences with 95% confidence intervals between the groups. Data are presented as means with standard deviations, medians with interquartile ranges, and numbers with percentages, as appropriate. The intention to treat population included all randomised women except those who withdrew consent or were lost to follow-up. In the intention to treat group we included women with spontaneous labour or prelabour rupture of membranes after randomisation but before induction, or with pregnancy complications necessitating interventions for medical reasons. A post hoc sensitivity analysis for the primary efficacy analysis was performed adjusted for the minimisation variables centre and primiparity or multiparity using multivariable logistic regression analysis with centre as fixed effect. Complementary analyses were performed for comparison of the primary perinatal composite outcome and secondary efficacy outcomes on the per protocol population. This population comprised all randomised women who completed the study without important deviations from the protocol. We defined the criteria for protocol deviation before data were analysed. For the induction group, protocol deviation was defined as induction at less than 41 weeks+0 days; labour induction, spontaneous labour, or caesarean delivery at more than 41 weeks+2 days because of scheduling error or delivery room unavailability; patient or provider preference; and non-medically indicated elective caesarean delivery. For the expectant management group, protocol deviation was defined as induction at more than 42 weeks+1 day, induction of labour at less than 42 weeks owing to scheduling error or patient or provider preference, and non-medically indicated elective caesarean delivery. Prespecified subgroup variables were maternal age (≥35 years), nulliparity, and body mass index (≥30). Logistic regression with treatment, the subgroup variable, and the interaction term treatment×subgroup variable was used to test whether the effect of treatment differed between subgroups. All significance tests were two sided at the 0.05 level. Statistical analyses were performed with SAS System Version 9 for Windows (SAS, Cary, NC). Patient and public involvement Pregnant women were not involved in the design, outcome measures, or recruiting plans of the study, and they were not asked to give advice on interpretation of results. The results of the research will be disseminated to the participants and public through broadcasts, popular science articles, and newspapers. Results On 2 October 2018 the Data and Safety Monitoring Board strongly recommended that the SWEPIS steering committee stop the study owing to a statistically significantly higher perinatal mortality in the expectant management group. Although perinatal mortality was a secondary outcome, it was not considered ethical to continue the study.
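That decision rested on the mortality comparison reported just below; the testing convention stated above (lowest one-sided Fisher P value doubled) can be checked directly with SciPy, using the published counts of 0/1381 versus 6/1379 deaths (a sketch):

```python
from scipy.stats import fisher_exact

# Perinatal deaths: 0/1381 (induction) versus 6/1379 (expectant management)
table = [[0, 1381 - 0], [6, 1379 - 6]]

p_less = fisher_exact(table, alternative="less")[1]      # lowest one-sided P
p_two = fisher_exact(table, alternative="two-sided")[1]

print(f"lowest one-sided P x 2 = {2 * p_less:.3f}")  # ~0.03, as reported
print(f"two-sided Fisher P     = {p_two:.3f}")       # essentially the same here
```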
No perinatal deaths occurred in the induction group but six occurred in the expectant management group (five stillbirths and one early neonatal death; P=0.03). Recruitment took place from 20 May 2016 to 13 October 2018. Oral and written informed consent was obtained from 2762 women, who underwent randomisation. Overall, 1383 women were assigned to induction at 41 weeks and 1379 were assigned to expectant management until induction at 42 weeks ( fig 1 ). Supplementary table A shows recruitment according to trial centre. After randomisation but before intervention, two women in the induction group withdrew their consent to participate and for their data to be used; thus 1381 women in the induction group and 1379 women in the expectant management group were included in the intention to treat analysis. The two groups were similar at baseline ( table 1 ). Fig 1 Flowchart of eligibility, randomisation, delivery, and assessment. Table 1 Baseline characteristics of intention to treat population; values are numbers (percentages) unless stated otherwise. Compared with the Swedish background population, women in the study groups had a higher level of education and were more often born in Sweden (see supplementary table B). In the induction group, 14.1% (195/1381) of the women had spontaneous onset of labour, 85.5% (1181/1381) underwent induction, of whom 76.6% (905/1181) had cervical ripening, and 0.4% (5/1381) had a scheduled caesarean delivery ( table 2 ). Table 2 Delivery outcomes in intention to treat population; values are numbers (percentages) unless stated otherwise. In the expectant management group, 66.7% (920/1379) of the women had spontaneous onset of labour and 33.1% (457/1379) were induced, of whom 74.4% (340/457) had cervical ripening, and 0.1% (2/1379) had a scheduled caesarean delivery. Management was not consistent with the assigned strategy in 3.5% (48/1381) of women in the induction group and 2.0% (28/1379) in the expectant management group ( fig 1 ). Median time from randomisation to delivery was 2 days (interquartile range 1-2 days) in the induction group and 4 (2-7) days in the expectant management group ( table 2 , fig 2 ). Median gestational age at delivery was 289 (288-289) days in the induction group and 292 (289-294) days in the expectant management group. Fig 2 Gestational age at delivery in intention to treat groups; the induction group included 1380 women because one woman was incorrectly randomised before 40 weeks+6 days and delivered before 40 weeks+6 days. Primary outcome The primary outcome occurred in 2.4% (33/1381) of women in the induction group and 2.2% (31/1379) of women in the expectant management group (relative risk 1.06, 95% confidence interval 0.65 to 1.73; P=0.90) ( table 3 ). Table 3 Perinatal outcome in intention to treat groups; values are numbers (percentages) unless stated otherwise. No stillbirths or neonatal deaths (0-27 days) occurred in the induction group (mortality rate 0.0%), whereas there were five stillbirths and one neonatal death (mortality rate 0.4%) in the expectant management group (P=0.03); the deaths occurred between 41 weeks+2 days and 41 weeks+6 days. One stillbirth occurred on the labour ward soon after admittance.
The postmortem examination showed a cardiovascular malformation, which according to specialists in paediatric cardiology could not be considered lethal. For the other four stillbirths no explanation was found. One stillborn neonate was small for gestational age and the other stillborn neonates had birth weights within the normal range. The neonatal death was due to hypoxic ischaemic encephalopathy in a large for gestational age neonate. The number needed to treat with induction of labour at 41 weeks to prevent one perinatal death was 230. A low Apgar score (<7 at five minutes) was the main contributor to the primary outcome: 1.3% (18/1381) in the induction group compared with 1.2% (16/1374) in the expectant management group (relative risk 1.12, 95% confidence interval 0.57 to 2.19; P=0.88). The post hoc sensitivity analysis for the primary outcome with adjustment for the minimisation variables centre and parity showed similar results (1.05, 0.65 to 1.59; P=0.85). Secondary neonatal outcomes Table 3 shows the secondary neonatal outcomes. An Apgar score of less than 4 at five minutes occurred in 0.2% (3/1381) in the induction group and 0.1% (1/1374) in the expectant management group (relative risk 2.98, 0.31 to 28.66; P=0.63). Fewer newborns in the induction group were admitted to a neonatal intensive care unit: 4.0% (55/1381) in the induction group versus 6.0% (82/1374) in the expectant management group (0.67, 0.48 to 0.93; P=0.02). If neonates with a major birth defect (n=10) were excluded (an antenatally detected major birth defect was an exclusion criterion at study entry) there was no significant difference in admission to a neonatal intensive care unit. Fewer neonates in the induction group had jaundice treated with phototherapy or exchange transfusion: 1.2% (16/1381) in the induction group versus 2.3% (32/1374) in the expectant management group (relative risk 0.50, 95% confidence interval 0.27 to 0.90; P=0.03). Fewer neonates in the induction group had macrosomia: 4.9% (68/1381) in the induction group versus 8.3% (114/1379) in the expectant management group (0.60, 0.45 to 0.80; P<0.001). Other secondary outcomes did not differ. Maternal outcomes Tables 2 and 4 present the secondary maternal outcomes. Use of epidural anaesthesia was higher in the induction group: 52.8% (729/1381) in the induction group versus 48.5% (669/1379) in the expectant management group (relative risk 1.09, 95% confidence interval 1.01 to 1.17; P=0.03). The median duration of labour was shorter in the induction group (5.7 hours (interquartile range 2.9-10.3 hours) v 6.9 (3.8-11.5) hours in the expectant management group; P<0.001). Mode of delivery was similar in both groups: the rate of caesarean delivery was 10.4% (143/1381) in the induction group and 10.7% (148/1379) in the expectant management group (relative risk 0.96, 95% confidence interval 0.78 to 1.20; P=0.79). Indications for caesarean delivery did not differ between the groups. Table 4 Maternal adverse outcomes in intention to treat population; values are numbers (percentages) unless stated otherwise. Endometritis occurred in 1.3% (18/1381) of women in the induction group and 0.4% (6/1379) in the expectant management group (relative risk 3.00, 95% confidence interval 1.19 to 7.52; P=0.02). Other secondary adverse maternal outcomes, including postpartum haemorrhage and perineal tears grades 3 and 4, were similar between the groups ( table 4 ).
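The relative risks with 95% confidence intervals reported throughout these tables follow the standard log (Katz) method, and the number needed to treat quoted above is the reciprocal of the risk difference; a small sketch using the published counts:

```python
from math import exp, log, sqrt

def rr_ci(a, n1, b, n2, z=1.96):
    """Relative risk with Katz log-method 95% CI for a/n1 versus b/n2."""
    rr = (a / n1) / (b / n2)
    se = sqrt(1/a - 1/n1 + 1/b - 1/n2)
    lo, hi = exp(log(rr) - z * se), exp(log(rr) + z * se)
    return rr, lo, hi

# Admission to neonatal intensive care: 55/1381 versus 82/1374
print("RR = %.2f (%.2f to %.2f)" % rr_ci(55, 1381, 82, 1374))  # 0.67 (0.48 to 0.93)

# Number needed to treat to prevent one perinatal death: the risk
# difference is 6/1379 - 0/1381, so NNT = 1 / (6/1379)
print("NNT =", round(1 / (6 / 1379)))  # 230
```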
Hypertensive disorders of pregnancy after randomisation (exploratory outcome) occurred in 1.4% (19/1381) of women in the induction group compared with 3.0% (42/1379) of women in the expectant management group (relative risk 0.45, 95% confidence interval 0.26 to 0.77; P=0.004). Per protocol analysis The prespecified analysis of the per protocol population included 1333 women in the induction group and 1351 women in the expectant management group. Figure 1 shows the reasons for violation of the protocol. Baseline characteristics were similar between the groups (supplementary table C). The primary perinatal adverse outcome occurred in 31 pregnancies in the induction group and 31 in the expectant management group (relative risk 1.01, 95% confidence interval 0.62 to 1.66; P=1.0) (supplementary table E). No stillbirths or neonatal deaths (0-27 days) occurred in the induction group (mortality rate 0.0%), whereas there were five stillbirths and one neonatal death (mortality rate 0.4%) in the expectant management group (P=0.03). Supplementary tables D to F show the secondary neonatal and maternal outcomes. Subgroup analyses Prespecified subgroup analyses on the primary outcome and selected secondary outcomes according to parity (parity 1 v parity >1), maternal age (<35 years v ≥35 years), and body mass index (BMI) (<30 v ≥30) were performed on the intention to treat population. In the intention to treat population, analyses of the primary outcome showed no significant difference in the treatment effect according to parity, age, or BMI (P=0.29, P=0.70, and P=0.51, respectively, for the interaction). In total, five stillbirths and one early neonatal death occurred, all in the expectant management group: in 0.8% (6/753) of the nulliparous women versus 0% (0/626) of the parous women, 1.1% (3/279) of women aged 35 or older versus 0.3% (3/1100) of women younger than 35, and 1.1% (2/184) of women with a BMI of 30 or higher versus 0.4% (4/1081) of women with a BMI less than 30. Because of the low mortality rate (n=6) no interaction analysis on mortality could be performed. Among nulliparous women, the rate of caesarean delivery was 16.7% (127/762) in the induction group and 17.3% (130/753) in the expectant management group (P=0.81). When testing whether the effect of induction versus expectant management was similar across centres (Stockholm centres versus other centres—that is, those offering or not offering a routine ultrasound scan at 41 weeks) no significant interaction effect was found for the primary outcome (P=0.19) in the intention to treat population. Perinatal mortality in the expectant management group was 0.0% (0/557) in the Stockholm centres versus 0.7% (6/822) in the other centres. Discussion In this large randomised trial comparing induction of labour at 41 weeks with expectant management and induction at 42 weeks, we found no significant difference in the primary composite adverse perinatal outcome: 2.4% in the induction group and 2.2% in the expectant management group (relative risk 1.06, 95% confidence interval 0.65 to 1.73; P=0.90). Perinatal mortality was, however, significantly lower in the induction group (no deaths) than in the expectant management group (five intrauterine deaths, one neonatal death; P=0.03). Furthermore, the induction group had fewer admissions to a neonatal intensive care unit, fewer infants with neonatal jaundice requiring therapy, and fewer macrosomic infants. We found no significant difference in caesarean delivery rates between groups.
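The subgroup interaction tests reported above can be reproduced in outline with statsmodels (a sketch on a synthetic dataset; the variable names and simulated data are illustrative, not the trial data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2760
df = pd.DataFrame({
    "induction": rng.integers(0, 2, n),    # 1 = induction at 41 weeks
    "nulliparous": rng.integers(0, 2, n),  # subgroup variable
})
# Simulate a rare adverse outcome (~2%) with no true interaction
df["adverse"] = (rng.random(n) < 0.022).astype(int)

model = smf.logit("adverse ~ induction * nulliparous", data=df).fit(disp=0)
# The P value of the interaction term tests whether the treatment effect
# differs between subgroups, as in the prespecified analysis
print(model.pvalues["induction:nulliparous"])
```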
Comparison with previous studies Post-term pregnancy (≥42 weeks) is associated with an increased risk of adverse perinatal morbidity and mortality. 3 4 5 The risk appears to increase gradually after 40 weeks. 3 4 13 Results from most meta-analyses indicate that a policy of induction before 42 full weeks is associated with decreased perinatal mortality. 9 22 23 24 In our study all perinatal deaths occurred in nulliparous women. Nulliparity is not always recognised as a factor conferring increased risk of perinatal mortality, 2 25 26 but our results agree with a Swedish register study in which stillbirths were significantly more common in nulliparous than in multiparous women and the increase in neonatal mortality was seen at 41 full weeks in nulliparous women but not until 42 weeks in multiparous women. 3 If this finding can be replicated in future studies, it could mean that nulliparous women may require particular attention, and interventions such as labour induction might be even more important in this group. The benefit of early induction is supported by a recently published open label multicentre randomised trial (INDEX) from the Netherlands including 1801 women, in which induction at 41 weeks was associated with a lower composite adverse perinatal outcome (1.7%) compared with expectant management until 42 weeks (3.1%; P=0.045). 12 The perinatal mortality rate did not, however, differ significantly between the groups, with one death in the 41 weeks group and two in the 42 weeks group. It could be argued that the higher mortality in the expectant management group in our study is partly due to the lack of routine fetal surveillance with cardiotocography or ultrasonography between 41 and 42 weeks unless there were clinical signs of complications. In general, however, adverse perinatal outcomes were not more common in the expectant management group in our trial than in the INDEX trial, and the median gestational age at delivery was higher in the expectant management group in our trial (292 days) than in the INDEX trial (289 days), which could augment mortality rates. No perinatal deaths occurred among women recruited in the Stockholm region, where all women are offered a routine ultrasound scan at 41 weeks (before randomisation) with the aim of identifying women at increased risk of adverse outcomes. However, the rarity of perinatal death limits the power of a subanalysis by centre. Furthermore, two of the five cases of hypoxic ischaemic encephalopathy occurred in Stockholm, and the composite neonatal morbidity was similar between Stockholm (24/1122=2.1%) and the other centres (35/1633=2.1%), which does not support the notion that the 41 week ultrasound scan was critical. It is also uncertain to what extent ultrasonography or cardiotocography, usually performed at two or three day intervals, can prevent intrauterine or neonatal deaths, 7 26 27 and the evidence that fetal monitoring prevents complications of post-maturity is considered weak. 7 The occurrence of endometritis was significantly higher in the induction group than in the expectant management group, which was unexpected but might well be a chance finding. Recent studies indicate that infectious morbidity is not higher for mechanical methods than for drugs for cervical dilation, 28 and the occurrence of endometritis in our trial is similar to or lower than that reported in most studies of labour induction.
28 29 30 Furthermore, the frequency of other maternal infections (chorioamnionitis, wound infections, urinary tract infections) and neonatal infections (sepsis, pneumonia) was not higher in the induction group. Strengths and weaknesses of this study We carried out a large national multicentre randomised controlled trial comparing induction at 41 weeks with expectant management and induction at 42 weeks, the latter being the standard of care in Sweden at present. Even though only a minority of eligible women were informed about the trial or accepted participation ( fig 1 ), the study population was representative of a Swedish low risk population according to most baseline characteristics (supplementary table B). Another strength is that the participants were managed at the same level of care and the same methods of induction were applied irrespective of allocation arm, which was not always the case in previous randomised trials on post-term pregnancies. 12 31 Our trial does have some limitations. It could seem contradictory that a significant difference was found between the groups in perinatal mortality although no difference was found in the composite adverse perinatal outcome. However, five of the six deaths in our trial were stillbirths, which have a quite different cause and array of risk factors 32 compared with neonatal mortality and morbidity. 33 Placental abnormality or dysfunction, umbilical cord complications, and growth restriction are considered causes of stillbirth 2 32 that could well be of increasing importance in late term and post-term pregnancies. Another problem is that the composite primary outcome was defined somewhat broadly and was predominated by an Apgar score of less than 7 at five minutes, which according to recent data might be a relatively weak predictor of more serious outcomes such as neurological morbidity and mortality; an Apgar score of less than 4 at five minutes is therefore probably preferable. 34 The advantage of composite outcomes, however, is that the number of cases needed in each arm can be reduced, making the study more feasible to carry out. Pregnant women were not involved in the design of our trial, which is a limitation 35 despite our impression that management of late term and post-term pregnancies is a prioritised area of research for many women. In a separate survey, to be published, we will address pregnant women’s experiences in the 41 and 42 week groups. The fact that half of the women (those recruited in the Stockholm region) underwent ultrasound measurement of amniotic fluid volume and abdominal diameter at 41 weeks, whereas such examinations were not performed systematically at the other centres, might be regarded as both a limitation and a strength. It is difficult to determine whether outcomes were affected by this difference in policy, whereas such management increases generalisability and reflects current obstetric practice in Sweden. 36 It is not clear whether the results are broadly generalisable. The study did include university, regional, and local hospitals, and women from 17 countries were eligible for inclusion. Different methods for labour induction, according to local practice, were allowed, and one large region used an extra ultrasound scan in gestational week 41 before inclusion. All these strategies increase the generalisability of the results.
We performed several significance tests, including for secondary and exploratory outcomes, but we did not correct for multiple comparisons, owing to the risk of missing differences of high clinical importance for women. Conclusions and policy implications Our study found that induction of labour at 41 weeks compared with expectant management and induction at 42 weeks does not alter the composite perinatal outcome, the primary outcome of this study. However, a reduction of the secondary outcome perinatal mortality is observed without increasing adverse maternal outcomes. The number needed to treat with induction of labour at 41 weeks to prevent one perinatal death was 230, which is lower than previous estimates. 9 22 23 Although these results should be interpreted cautiously, based on previous reports and the results of the present trial we suggest that labour induction should be offered to women at 41 weeks+0 days 12 or earlier 11 37 and could be one (of few) interventions that reduces the rate of stillbirths. What is already known on this topic Meta-analyses comparing induction of labour at or beyond term with expectant management have shown a generally improved perinatal outcome with induction. It is not known whether induction at 41 weeks results in a better outcome than expectant management and induction at 42 weeks. What this study adds Induction of labour at 41 full weeks in low risk pregnancies is associated with a decreased risk of perinatal mortality compared with expectant management and induction of labour at 42 full weeks. Other neonatal outcomes and caesarean delivery did not differ between groups. Women with low risk pregnancies should be informed of the risk profile of induction of labour versus expectant management and offered induction of labour no later than 41 full weeks. Acknowledgments Jonas Eriksson Söderling provided data from the Swedish Pregnancy Register and performed the statistical analysis for the Data Safety Monitoring Board reports, Stellan Håkansson provided data from the Swedish Neonatal Quality Register, Jesper Brodin provided data from Statistics Sweden, and Agneta Cedefors-Blom helped with secretarial assistance. Mattias Molin and Per Ekman, the Statistical Consulting Group, Gothenburg, performed the statistical analyses. Therese Svanberg at the Medical Library at Sahlgrenska University Hospital performed the literature search. Thanks to the members of the Data Safety Monitoring Board, Hans Wedel (chairman), Lars-Åke Mattson, and Elisabeth Jangsten, for their assistance, and to the women who participated in the trial. The SWEPIS study group: the midwives and doctors responsible at the local centres were: Uppsala University Hospital: Irina Sylwe; South Älvsborg Hospital: Lena Loubelo, Carolina Bergerum, and Serney Bööj; Department of Gynaecology Närhälsan, Mölndal: Maria Bullarbo; Sahlgrenska University Hospital, Göteborg: PhD candidates Anna Wessberg and Helena Nilver, and Pia Hempel, Martina Söderlund, Erica Ginström Ernstad, and Monica Eriksson Orrskog; Stockholm: Karolinska University Hospital Huddinge and Solna, South Hospital, Danderyd Hospital, South BB, Södertälje Hospital: Helen Fagraeus, Annelie Sjölund, and Eva Itzel Wiberg; Halland Hospital: Elisabeth Johansson, Sandra Holmström, Åsa Ponten, and Maud Ankardal; Örebro Hospital: Inger Nydahl, Sofia Saarväli, and Camilla Hartin; Falun Hospital: Elisabeth Nordström and Kerstin Fransson; Visby Hospital: Madelen Jacobsson; and North Älvsborg Hospital: Maria Olsson and Anna Hagman.
Footnotes Contributors: UBW and SS are joint first authors and contributed equally to the study. UBW, HH, VS, and HE conceived and designed the study. UBW, HH, AW, SS, AKW, MJ, HF, and JW oversaw recruitment of study participants and collection of data at the local centres. UBW, HH, CB, HE, OS, and SS wrote the statistical analysis plan together with two statisticians (Mattias Molin and Nils-Gunnar Pehrsson, the Statistical Consulting Group, Gothenburg). UBW and MA did the data cleaning together with the statisticians Mattias Molin and Per Ekman. UBW, HH, CB, SS, MA, LL, VS, SBW, OS, GW, HE, and AW interpreted the data. UBW, MA, AW, SS, and HH wrote the first draft of the manuscript, which was then critically reviewed and revised by the other coauthors. HE, OS, and HH are joint senior authors. All authors approved the final version of the manuscript for submission. UBW, SS, HE, OS, and HH are guarantors. All authors had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted. Funding: This study was supported by the Swedish state under the agreement between the Swedish government and the county councils, the ALF-agreement (ALFGBG-440301, ALFGBG-718721, ALFGBG-70940, ALFGBG-426401), the Health Technology Centre at Sahlgrenska University Hospital, the Foundation of the Health and Medical care committee of the Region of Vastra Gotaland, Sweden (VGFOUREG387351, VGFOUREG640891, VGFOUREG854081), the Hjalmar Svensson Foundation, the foundation Mary von Sydow, born Wijk, donation fund, the Uppsala-Örebro regional research council (RFR-556711, RFR-736891), the Region Örebro County research committee (OLL-715501), the ALF-agreement in Stockholm (ALF-561222, ALF-562222, ALF-563222), and the Centre for Clinical Research Dalarna-Uppsala University, Sweden (CKFUU-417011). The funders had no role in study design, data collection, data analysis, data interpretation, or writing of the report. The researchers were independent of the funders. Competing interests: All authors have completed the ICMJE uniform disclosure form at and declare: no support from any organisation for the submitted work; no financial relationship with any organisation that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work. AKW has received free reagents (PlGF) from Roche for a prediction study of pre-eclampsia. Ethical approval: This study was approved by the regional ethics board in Gothenburg in May 2014 (Dnr: 285-14) and later its complementary applications (T 905-15, T 291-16, T 1180-16, T 330-17, T 1066-17, T 087-18, T 347-18, T 961-18, T 1110-18). All participants gave informed written consent before taking part in the study. Data sharing: The full dataset is available from the corresponding author on reasonable request. The corresponding author (UBW) affirms that this manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned have been explained. The corresponding author (UBW) had the final responsibility for the decision to submit for publication.
This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: .
Inducing labour at 41 weeks in low risk pregnancies is associated with a lower risk of newborn death compared with expectant management (a "wait and see" approach) until 42 weeks, suggests a trial published by The BMJ. Although the overall risk of death at 42 weeks is low, the researchers say induction of labour should be offered to women no later than 41 full weeks. It is generally accepted that there is an increased risk of problems ("adverse perinatal outcomes") for both mother and baby at or beyond 42 weeks of pregnancy. Some studies have suggested that inducing labour from 41 weeks onwards improves these outcomes, but there is no international consensus on how to manage healthy pregnancies lasting more than 41 weeks. Current practice in the UK and Scandinavia is to induce delivery for all women who have not gone into labour by 42 weeks. So researchers in Sweden set out to compare induction of labour at 41 weeks with expectant management until 42 weeks in low risk pregnancies. The trial involved 2,760 women (average age 31 years) with an uncomplicated, single pregnancy recruited from 14 Swedish hospitals between 2016 and 2018. Women were randomly assigned to induction of labour at 41 weeks (1,381) or expectant management (1,379) until induction at 42 weeks if necessary. The main outcome was a combined measure of babies' health, including stillbirth or death in the first few days of life (known as perinatal death), Apgar score less than 7 at five minutes, low oxygen levels, and breathing problems. Other outcomes included admission to an intensive care baby unit, Apgar score less than 4 at five minutes, birth weight, pneumonia, or sepsis. Type of delivery and mothers' health just after giving birth were also assessed. For the main outcome measure, the researchers found no difference between the groups (2.4% of women in the induction group had an adverse perinatal outcome compared with 2.2% in the expectant management group). Other outcomes, such as caesarean sections and mothers' health after giving birth, also did not differ between the groups. However, six babies in the expectant management group died compared with none in the induction group, and the trial was stopped early. The researchers estimate that, for every 230 women induced at 41 weeks, one perinatal death would be prevented. They point to some limitations, such as differences in hospital policies and practices, that could have affected the results. But they say women with low risk pregnancies "should be informed of the risk profile of induction of labour versus expectant management and offered induction of labour no later than at 41 full weeks. This could be one (of few) interventions that reduces stillbirth," they conclude. This view is supported in a linked editorial by Professor Sara Kenyon and colleagues, who say induction at 41 weeks "looks like the safer option for women and their babies." They stress that choice is important within maternity care, and say "clear information about available options should be accessible to all pregnant women, enabling them to make fully informed and timely decisions."
10.1136/bmj.l6131
Medicine
Whole genome sequencing reveals genetic structural secrets of schizophrenia
Matthew Halvorsen et al, Increased burden of ultra-rare structural variants localizing to boundaries of topologically associated domains in schizophrenia, Nature Communications (2020). DOI: 10.1038/s41467-020-15707-w Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-020-15707-w
https://medicalxpress.com/news/2020-04-genome-sequencing-reveals-genetic-secrets.html
Abstract Despite considerable progress in schizophrenia genetics, most findings have been for large rare structural variants and common variants in well-imputed regions with few genes implicated from exome sequencing. Whole genome sequencing (WGS) can potentially provide a more complete enumeration of etiological genetic variation apart from the exome and regions of high linkage disequilibrium. We analyze high-coverage WGS data from 1162 Swedish schizophrenia cases and 936 ancestry-matched population controls. Our main objective is to evaluate the contribution to schizophrenia etiology from a variety of genetic variants accessible to WGS but not to previous technologies. Our results suggest that ultra-rare structural variants that affect the boundaries of topologically associated domains (TADs) increase risk for schizophrenia. Alterations in TAD boundaries may lead to dysregulation of gene expression. Future mechanistic studies will be needed to determine the precise functional effects of these variants on biology. Introduction Since the first major study over 70 years ago 1 , twin, family, and adoption studies have strongly and consistently supported the existence of a genetic basis for schizophrenia 2 , 3 , 4 . Its inheritance is complex with both genetic and non-genetic contributions indicated by estimates of pedigree-heritability (60–65%) 3 , 4 and twin-heritability (81%) 2 that are well under 100%. Although these genetic epidemiological results were fairly consistent, their validity was dependent on multiple assumptions, and they contained little specific information about genetic architecture. In the past decade, genome-wide association (GWA) studies that genotyped hundreds of thousands of single-nucleotide polymorphisms (SNPs) in tens of thousands of cases and controls have directly evaluated the common-variant SNP-heritability of schizophrenia 5 , 6 , 7 . In the most recent study of 40,675 schizophrenia cases and 64,643 controls, the SNP-heritability of schizophrenia was 24.4% (SE 0.0091, liability scale), and 145 significant loci were identified 6 . SNP array data can also be used to assess rare copy number variants (CNVs). In the largest study to date of 21,094 cases and 20,227 controls 8 , eight CNVs reached genome-wide significance: CNV deletions at 1q21.1, 2p16.3 (NRXN1), 3q29, 15q13.3, and 16p11.2 (distal) and 22q11.2, plus CNV duplications at 7q11.23 and 16p11.2 (proximal). These events were uncommon, and any one of these eight CNVs was present in 1.42% of cases and 0.15% of controls. There is evidence that rare coding single-nucleotide variants (SNVs) and insertion–deletions (indels) contribute to risk in a low percentage of cases although few genes have been implicated from exome sequencing 9 , 10 , 11 . Thus, after a decade of increasingly larger studies, the discovered genetic variants that confer risk for schizophrenia are primarily common variants with subtle effects on risk 6 , 7 , 9 , 10 . The interpretation of common variant findings is markedly improved via the addition of functional genomic data from brain 7 , 12 , 13 ; nonetheless, there remains a gap between the pedigree- and twin-heritability estimates for schizophrenia and its SNP-heritability. Some argue that this gap is irrelevant as these different types of heritability are incompatible and as biological insights have always been the core goal of GWA for schizophrenia rather than accounting for twin/pedigree heritability. 
It is also possible that the heritability gap is informative, that SNP array and WES are missing etiologically important genetic variation. GWA genotyping directly captures 500K-1M SNPs followed by imputation to indirectly assess 7–10 M variants. This process is imprecise as some regions of the genome are not well covered, and some non-SNP types of genetic variation are missed. WES provides data on the protein-coding fraction of the genome (~3%) and will miss many regulatory features. By evaluating high-coverage WGS data on 21,620 individuals in the TOPMed study, Wainschtein et al. 14 reported recovery of nearly all of the pedigree heritability for height and body mass index. The missing heritability was found to reside in rarer genetic variation (minor allele frequency (MAF) 0.0001–0.1) in regions of relatively low linkage disequilibrium (LD) and often outside of protein-coding portions of the genome. The fundamental reason for the missing heritability of height and body mass may merely be technical: the least expensive technologies only partly assess the genome with inexpensive SNP arrays capturing common variants in high LD regions and WES capturing much of the known protein-coding genome. The Wainschtein et al. finding is consistent with prior observations that rarer and evolutionarily younger SNPs have higher SNP-heritability for multiple complex traits 15 . To capture genetic variation as comprehensively as possible, WGS is required. WGS provides nucleotide-level resolution throughout the accessible genome along with detection of most structural variants (SVs). Many types of genetic variation are discoverable by WGS without regard to local LD, and these include SNVs and indels in low LD regions, uncommon or rare regulatory variants, rare SVs missed by SNP arrays and WES due to small size or complexity, and common SVs missed by SNP arrays. The NHLBI TOPMed Program recently published high-coverage (30×) WGS data of 53,831 diverse individuals that included ~381 M SNVs and ~29 M indels 16 . TOPMed WGS identified 16% more variants than low-coverage WGS (6×), with essentially all new variants being rare (MAF < 0.005); and 17% more coding variants than both low-coverage WGS and WES (30×). The distribution of variant sites in TOPMed WGS revealed that the vast majority of human genetic variation is rare and noncoding. There are a few published WGS studies of schizophrenia (Supplementary Table 1 ). Of these studies, many employed family-based designs and the largest case–control WGS study had 321 schizophrenia cases and 148 controls. In this study, we analyze high-coverage WGS from 1162 schizophrenia cases and 936 ancestry-matched population controls. WGS data are generated using identical protocols at the same facility and all WGS data are jointly processed and analyzed. The schizophrenia cases also have SNP array 17 , 18 and exome sequencing data 9 , 10 which is compared to WGS to assess data quality. Our main objective is to evaluate the contribution to schizophrenia etiology from variants that are revealed by WGS but not by GWA and WES. To quantify phenotypic variance explained by rare variants, we estimate heritability using WGS. To identify the role of noncoding variants, we focus on empirically determined maps of sequence constraints 19 , 20 and functional genomic annotations generated in human brain 12 , 13 . We particularly focus on ultra-rare variants as this frequency class has a notable impact on schizophrenia risk in WES and CNV studies 8 , 10 . 
We replicate the previously reported excess in schizophrenia of loss-of-function (LOF) ultra-rare sequence variants in LOF-intolerant genes. We find an increased burden in schizophrenia of ultra-rare SVs that affect the boundaries of topologically associated domains. Results Overview Figure 1 gives an overview of the study. Our workflow was designed to evaluate the contribution of directly genotyped genetic variation across the allelic spectrum and to evaluate genetic variation missed by prior approaches. Fig. 1: Overview of WGS analysis. WGS data were generated using identical protocols at the same facility and all WGS data were jointly processed and analyzed. The schizophrenia cases also had GWA SNP array and exome-sequencing data, used for comparison and quality assessment. We started with 1165 schizophrenia cases and 942 ancestry-matched population controls. After QC, 1162 cases and 936 controls remained. Variant annotation focused on empirically determined annotation methods. Full size image Study samples and sequencing Following quality control, we analyzed WGS on 1162 schizophrenia cases and 936 ancestry-matched population controls from Sweden (total 2098 subjects). Cases were selected to have typical Swedish ancestry, unequivocal schizophrenia case status, and no known pathogenic CNV (e.g., 22q11 deletion). Controls were group matched to cases by ancestry. The median WGS coverage per sample was 36.62 reads per base (Supplementary Fig. 1 ). For each group, we constructed a curve for mean fraction of bases covered deeper than a specified threshold as a function of depth of coverage. The shapes of the mean curves were similar between cases and controls (Supplementary Fig. 2 ). Principal components analysis confirmed the relative homogeneity of the sample (Supplementary Fig. 3 ). We took multiple steps to minimize chances of spurious associations with schizophrenia: (1) WGS for all subjects was performed at the same facility using identical procedures; (2) all WGS data were jointly processed; (3) variant calling was conducted jointly for all subjects; (4) all subjects were ethnic Swedes of similar empirical ancestry (Supplementary Fig. 3 ); and (5) in association analyses, we controlled for empirically determined potential confounders to mitigate spurious association signals (Methods). As discussed more fully below, we did not find evidence of inflation (e.g., for common-variant case-control tests, λ GC was 1.03 and the LD score regression intercept was 0.997 (SE = 0.0065), which are inconsistent with systematic biases). WGS variant identification SNV/indels: We detected 33,746,530 SNVs and 4,551,507 indels across the autosomes of the 2098 cases and controls. Individual subjects had a mean of 2,358,544 SNVs (range: 2,063,297–2,513,607) and 383,929 indels (range 329,678–409,893; mean insertion size 3.09 bp and mean deletion size 3.59 bp). Of the full set of unique SNVs and indels detected in the WGS data, 45.43% of SNVs and 37.03% of indels were detected in only one individual as heterozygotes (singletons). These data included many variants not found in imputation reference panels. 
For example, when requiring exact match for chromosome, position, reference, and alternative allele, 15,688,760 SNVs are not in the Haplotype Reference Consortium (HRC r1.1) reference panel 21 (stratified by MAF: 286,599 MAF > 0.05, 163,346 MAF 0.005–0.05, 15,238,815 MAF < 0.005); 21,574,998 SNVs are not in the 1000 Genomes Project phase 3 (1000GP p3v5) reference panel 22 (61,993 MAF > 0.05, 309,989 MAF 0.005–0.05, 21,203,016 MAF < 0.005); and 12,341,197 SNVs are not in TOPMed 16 Freeze 3a (stratified by MAF: 28,179 MAF > 0.05, 29,233 MAF 0.005–0.05, 12,283,785 MAF < 0.005). We also called 57,785 SNVs and 8270 indels on chrX, and subjects had a mean of 6084 SNVs (range 5035–7183) and 1753 indels (range 1334–2091). To evaluate the capacity of WGS to detect SNVs or indels, we compared our WGS data to independent exome sequencing data on 1154 of the 1162 schizophrenia cases 10 . We estimated genotype accuracy by calculating the concordance rate between genotypes from WGS and WES 10 for all autosomes. For SNVs, genotype accuracy was 0.9999, 0.999, and 0.997 for homozygous reference, heterozygous, and homozygous non-reference genotypes (Supplementary Table 2a ). For indels, genotype accuracy was 0.998, 0.984, and 0.984 for homozygous reference, heterozygous, and homozygous non-reference genotypes (Supplementary Table 2a ). When stratified by MAF, genotype accuracy estimates were consistent across common, low-frequency, rare, and ultra-rare variants, and similar to the overall genotype accuracy (Supplementary Table 2b–e ). SVs: We detected 17,895 deletion (DEL) sites, 4129 tandem duplication (DUP) sites, 4458 inversion (INV) sites, and 27,808 mobile element insertion sites (MEI, including 23,432 ALU, 1429 SVA, and 2956 LINE1). The sizes of DEL, DUP, and INV ranged from 500 bp to 1 Mb, with median sizes of 2592 bp for DEL, 7179 bp for DUP, and 3265 bp for INV (Supplementary Fig. 4 ). The sizes of MEI ranged from 15 to 6019 bp, with median sizes of 279 bp for ALU, 1162 bp for SVA, and 1780 bp for LINE1 (Supplementary Fig. 5 ). For any non-reference genotype, subjects carried a mean of 1241 DEL (range 657–1357), 183 DUP (range 157–209), 373 INV (range 321–878), 2663 ALU (range 2077–3439), 82 SVA (range 56–107), and 249 LINE1 (range 196–302). To evaluate the capacity of WGS to detect SVs, we compared WGS data to prior copy number variant data from GWA SNP array 18 and WES 23 on 1085 of the 1162 schizophrenia cases. First, INV, MEI, and common SVs are largely inaccessible to SNP arrays 18 and WES studies 23 . Second, prior GWA SNP array studies were limited to deletions and duplications >100 kb; however, >95% of DEL and >77% of DUP detected from WGS were <20 kb. Consequently, SNP arrays found only 3.5% of DEL variants and 17.7% of DUP variants found by WGS (requiring 50% reciprocal overlap). Third, when restricted to exons, WES found only 13.7% of exonic DEL and 35.6% of exonic DUP variants found by WGS (based on 50% reciprocal overlap). Finally, for DEL and DUP variants that are comparable between technologies, we computed concordance rates between WGS and SNP array or WES (Supplementary Table 3 ). When compared to SNP arrays, we estimated that the concordance rate was 0.992 for DEL and 0.965 for DUP. When compared to WES, we estimated that the concordance rate was 0.987 for DEL and 0.967 for DUP. Repeat expansions: WGS can detect pathogenic disease-associated repeat expansions (e.g., the HTT CAG repeat that causes Huntington’s disease), which are inaccessible to SNP arrays. 
We screened our samples for repeat expansions in 16 genes that are established causes of disease, and found that 16 cases and 7 controls had modest repeat expansions just within the predicted pathogenic range (Supplementary Table 4 ). Because no case or control had a register diagnosis consistent with these generally highly penetrant disorders, we assumed these were false positives or the modest repeat expansions were not long enough to cause disease. Burden analysis of ultra-rare SNV/indels Consistent with recent studies 10 , we focused on ultra-rare sequence variants (URVs) including ultra-rare SNVs and indels. We defined URVs as found once in the WGS case/control cohort and absent from independent population cohorts (i.e., gnomAD r2.0.2 allele count = 0 and non-psychiatric subset of ExAC r0.3 allele count = 0) 24 , 25 . From theory 26 and our calculations (Supplementary Fig. 6 ), power is low for single-variant analysis for MAF < 0.01. Collapsing methods are key approaches for rare variants and can enhance power by accumulating information across different rare variants that impact a gene/locus or a set of genes/loci 27 . We used burden testing as the primary analytical tool to contrast cases and controls for total event counts in genomic loci of interest. Burden testing is appropriate when most variants across a set of genetic loci impact phenotype in the same direction and with similar magnitude 27 . We estimated statistical power for burden tests and found that we had ≥80% power to detect association of URVs when the aggregated minor allele count (MAC) was 20 (i.e., aggregated MAF = 0.01) and the genotypic relative risk was ≥4.9 (assuming a type I error level of 1 × 10 −5 ). As a final step in quality control and following an approach previously established in the full Swedish sample 10 , we pruned samples that had an outlier total URV count mostly because of relatively higher ancestry heterogeneity 10 (Methods, Supplementary Figs. 7 and 8 ). We conducted burden analyses of URVs in 1104 cases and 921 controls (mean URV counts in cases vs controls: 4262 vs 4249, P = 0.4225, Supplementary Fig. 7 ). The total number of qualifying URVs in these samples was 8,073,782, of which 7,991,557 (98.9%) were noncoding. Full results are listed in Supplementary Table 5 and summarized below. For multiple-testing adjustment, we applied the Benjamini and Hochberg false discovery rate (BH-FDR) method to the family of hypotheses involving ultra-rare SNV/indels which included a total of 74 tests (Supplementary Table 5 ). Confirmation of prior results: We first evaluated the prior WES finding that schizophrenia cases have an excess of damaging protein-coding URV (odds ratio [OR] = 1.07; 4877 cases and 6203 controls) 10 . As shown in Fig. 2 , we found an excess of LOF URVs in schizophrenia cases (OR = 1.082, P = 0.0002, BH-FDR multiple-testing adjusted P = 0.0049). This excess was notable (OR = 1.203, P = 0.0005, adjusted P = 0.0092) in genes that are intolerant to LOF variation (defined as pLI > 0.9 in the non-psychiatric subset of ExAC 24 , where pLI is the probability that a gene is intolerant to a LOF mutation). Increased burden was prominent in the subset of LOF-intolerant genes that are risk genes from WES for neurodevelopmental disorders 11 (OR = 2.983, P = 0.0011, adjusted P = 0.0163). A key advantage of WGS over WES for protein-coding regions is independence of design, coverage, and performance of exome capture baits 16 . 
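The burden framework described above reduces to asking whether per-sample counts of qualifying URVs differ between cases and controls. A minimal sketch of one common implementation (logistic regression of case status on the per-sample count, run here on synthetic data) follows; this is illustrative rather than the study's own code, and a real analysis would add covariates such as ancestry principal components:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic per-sample counts of qualifying URVs in one annotation class;
# cases are given a slightly inflated Poisson rate to mimic a true burden.
n_cases, n_controls = 1104, 921
counts = np.concatenate([rng.poisson(10.5, n_cases),
                         rng.poisson(10.0, n_controls)])
status = np.concatenate([np.ones(n_cases), np.zeros(n_controls)])

# Logistic regression of case status on the per-sample count;
# exp(beta) is the odds ratio per additional qualifying variant.
X = sm.add_constant(counts.astype(float))
fit = sm.Logit(status, X).fit(disp=0)

odds_ratio = np.exp(fit.params[1])
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"OR = {odds_ratio:.3f} (95% CI {ci_low:.3f}-{ci_high:.3f}), "
      f"P = {fit.pvalues[1]:.3g}")
```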
The exome capture baits used in WES are imperfect; however, after multiple-testing correction, we did not find any significantly increased burden of coding URVs outside the targeted exonic sequences of LOF-intolerant genes (Supplementary Fig. 9 , Supplementary Table 5 ). Fig. 2: Burden of coding ultra-rare SNVs and indels. X -axis: annotation class. Y -axis: odds ratio. Legend: exomes: coding variants in all genes; LOFtol: in genes tolerant to loss-of-function variation; LOFintol: in genes intolerant to loss-of-function variation. For each specific burden test, we used a vertical line to indicate the 95% confidence interval of the odds ratio and a dot at the center of the vertical line to indicate the point estimate of the odds ratio. Full size image Burden analysis of noncoding ultra-rare SNV/indels in constrained regions: We defined variants as putatively noncoding if they did not alter the sequence content of coding regions or splice dinucleotides of GENCODE protein-coding transcripts. These noncoding variants may confer risk via a variety of mechanisms (e.g., by altering an unannotated protein-coding transcript, untranslated regions, splicing, transcription factor binding, or an epigenetic site). We evaluated the burden of noncoding variants that are more likely to be deleterious by focusing on ultra-rare noncoding variants that are likely to be subject to purifying selection in a manner similar to coding URVs. We compared case/control burden of noncoding URVs across regions binned by sequence constraint for the human species 19 and by constraint across mammalian species 20 (Supplementary Fig. 10 ). The human constraint was built upon the context-dependent tolerance score (CDTS), which indicates the degree of depletion of genetic variation at the population level using 11,257 human genomes (the lower the percentile rank of CDTS, the more constrained the region) 19 . The mammalian constraint was based on the genomic evolutionary rate profiling (GERP) score, which quantifies substitution deficits in multiple alignments (the higher the GERP score, the more constrained the region) 20 . We concentrated subsequent noncoding URV analyses on variants in regions that were highly constrained according to one of these two metrics (CDTS < 1% or GERP ≥ 4), given the prior observation that the overlap between CDTS (conservation in the current human population) and GERP (interspecies conservation) was limited and heavily enriched for protein-coding regions 19 . We did not observe a case excess of noncoding URVs that survived multiple-testing correction based on this criterion alone (OR = 1.009, P = 0.0342, adjusted P = 0.2819, Supplementary Fig. 10 ). Burden in annotations experimentally derived from human brain: Annotations from appropriate tissues help predict functional variants 7 , 28 . We compared case/control burden of noncoding URVs in constrained regions (as defined above, CDTS < 1% or GERP ≥ 4) within functional annotations, experimentally derived from human brain tissue, that are known to affect gene expression. These annotations include open chromatin regions from ATAC-seq; frequently interacting regions (FIREs), topologically associating domains (TADs), and chromatin interactions from Hi-C; and epigenetic marks from ChIP-seq (CTCF, H3K27ac, and H3K4me3). 
We also included annotations of brain-expressed exons identified from long-read RNA-seq data 29 , as constrained noncoding URVs inside brain exons could impact functional noncoding elements within untranslated regions of annotated transcripts or protein-coding sequences from unannotated transcripts. We did not identify any single annotation with a significant case excess of URVs within constrained regions (Supplementary Fig. 11 , Supplementary Table 5 ). Burden in promoter regions: A recent study focused on de novo SNV/indels found evidence for a contribution to autism spectrum disorder from variants in constrained nucleotides within promoter regions 30 . Defining promoter regions the same way as in An et al. 30 (2 kilobases (kb) upstream of an annotated transcription start site), we compared case/control burden of noncoding URVs within constrained nucleotides (as defined above) in promoter regions of genes that are putatively LOF-intolerant (as defined above). No significant case excess was observed (OR = 0.966, P = 0.9812, adjusted P = 0.9907). A similar result was obtained when performing this test specifically on the subset of LOF-intolerant genes previously described as neurodevelopmental risk genes 11 (OR = 0.99, P = 0.5551, adjusted P = 0.6956). To take the three-dimensional genome into account, we used brain chromatin interaction data to identify any cis-regulatory elements (e.g., promoters and enhancers) connected with LOF-intolerant genes. No significant case excess was observed (OR = 1.006, P = 0.1904, adjusted P = 0.427). X chromosome: We tested male cases and controls to determine if the coding variant excess replicated in chrX genes. We did not detect a significant difference in synonymous variant burden or LOF variant burden but note that power was low. Burden analysis of ultra-rare SVs We performed analyses of ultra-rare SVs on the full sample (1162 cases and 936 controls) and on the subsample used for URV burden testing (1104 cases and 921 controls). Note that cases with known pathogenic CNVs or unusually high CNV burden were excluded 18 . Because all results were similar, we report the analysis results using the full sample. The total number of ultra-rare SVs in our sample was 6809 for DEL, 1917 for DUP, and 729 for INV. The sizes of these ultra-rare SVs were smaller than those from SNP arrays (DEL mean 15.2 kb for cases and 13.8 kb for controls; DUP mean 56.5 kb for cases and 52.6 kb for controls; and INV mean 100 kb for cases and 76 kb for controls). Confirmation of prior results: A higher genome-wide burden of rare SVs in schizophrenia cases has been repeatedly observed in studies using SNP arrays 8 , 18 (i.e., rare, large SVs with MAF < 0.01 and size > 100 kb). Burden was greater for SVs that were deletions, larger, or rarer. To calibrate our analyses, we verified this general pattern of findings using WGS SV calls (Supplementary Table 6 ). Genome-wide burden of ultra-rare SVs: Using the DEL, DUP, and INV genotypes described above, we evaluated the genome-wide burden of ultra-rare SVs (Supplementary Fig. 12 and Supplementary Table 7 ). We defined ultra-rare SVs as found once in the WGS case/control cohort and absent from independent population cohorts 31 , 32 . Consistent with previous reports 8 , 18 , ultra-rare DEL were significantly enriched in cases (OR = 1.086, P = 0.0001, BH-FDR multiple-testing adjusted P = 0.0029). 
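The BH-FDR adjustment quoted alongside these burden results is standard and available off the shelf; a minimal sketch follows (the P values are made up, and adjusted values depend on the whole family of tests, which numbered 74 in the SNV/indel analyses above):

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Made-up raw burden-test P values standing in for one family of tests.
raw_p = np.array([0.0002, 0.0005, 0.0011, 0.0342, 0.5551, 0.9812])

# Benjamini-Hochberg false discovery rate adjustment.
reject, adjusted_p, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")
for p, q, r in zip(raw_p, adjusted_p, reject):
    print(f"P = {p:.4f} -> adjusted P = {q:.4f} (significant at FDR 5%: {r})")
```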
The burden of ultra-rare DUP and INV was similar between cases and controls (DUP: OR = 1.06, P = 0.0920, adjusted P = 0.2052; INV: OR = 1.015, P = 0.2903, adjusted P = 0.4009). Most of these ultra-rare SVs were noncoding (Supplementary Fig. 13 ; 87.2% for DEL, 71.1% for DUP, and 89.4% for INV). When stratified by coding/noncoding status, the results were similar (Supplementary Table 7 ). Burden in epigenomic annotations from human brain: We hypothesized that the elevated genome-wide burden of ultra-rare SVs may be partitioned across functional elements with evidence for gene regulation in the brain 13 . We focused on ultra-rare SVs that intersected ≥10% of the functional elements (Fig. 3 , Supplementary Table 8 ). Burden tests found a significant enrichment of ultra-rare SVs in schizophrenia cases that impacted TAD boundaries from adult (OR = 1.613, P = 0.0037, adjusted P = 0.0283) and fetal brain (OR = 1.581, P = 0.0039, adjusted P = 0.0283). No significant enrichment was found for any other class of functional elements. TAD boundaries have been shown to be under purifying selection. Multiple studies suggest that altering TAD boundaries results in the disarrangement of enhancer and promoter contacts, thus impacting local gene expression. Disruption of TAD boundaries by SVs has been associated with developmental disorders 33 , 34 . Fig. 3: Burden of ultra-rare SVs in brain epigenomic annotations and related analysis. For each specific burden test, we use a vertical line to indicate the 95% confidence interval of the odds ratio and a dot at the center of the vertical line to indicate the point estimate of the odds ratio. Labels on the X -axis indicate the specific annotations that were considered. “TADbou”: TAD boundaries. Epigenomic annotations include TADbou.AdultBrain, TADbou.FetalBrain, ATACseq.AdultBrain, FIRE.AdultBrain, CTCF, H3K27ac, and H3K4me3. Regulatory elements connected with schizophrenia risk loci are labeled as “gene.set.name_HiC.loops.int”. For example, “CELF4_ HiC.loops.int” means regulatory elements of the CELF4 gene set identified via chromatin interaction (a.k.a. Hi-C loops) data in human brain. Detailed information about the gene sets considered can be found in Methods. Amongst the loci tested, only TAD boundaries derived from both fetal and adult brain tissue showed a significant degree of evidence for excess in cases relative to controls. Full size image Burden in regulatory elements connected with schizophrenia risk loci: We hypothesized that the elevated genome-wide burden of ultra-rare SVs may be partitioned to regulatory elements within schizophrenia risk loci. To take the three-dimensional genome into account, we used chromatin interaction data from adult brain to identify regulatory elements connected with schizophrenia risk loci, capturing any empirically defined cis-elements either nearby or distal 13 . As above, we performed a burden test using the 10% overlap criterion for any ultra-rare DEL, DUP, or INV in the regulatory elements of these schizophrenia risk loci. No significant enrichment in schizophrenia cases was found (Fig. 3 , Supplementary Table 8 ). Validation and analysis of ultra-rare TADs-affecting SVs To gain a deeper understanding, we followed up on the finding of significantly increased burden of ultra-rare SVs that affected TAD boundaries. We found that the higher rate of variants in cases versus controls persisted when those variants were stratified by coding or non-coding status (Supplementary Fig. 
14 , Supplementary Table 9 ) or by variant type (i.e., DEL, DUP, or INV; Supplementary Fig. 15 , Supplementary Table 10 ). Burden was greater for those variants that were DEL or had larger overlap with TAD boundaries. Next, we attempted to verify the validity of the TADs-affecting ultra-rare DEL and DUP that were detected in schizophrenia cases. First, we looked up the GWA array data in the same samples (Supplementary Table 11 ). We found that 27.9% of these DEL and 52.6% of these DUP were concordant with GWA array data (50% reciprocal overlap) and were additionally confirmed by inspecting their WGS read alignments using IGV 35 (Supplementary Figs. 16 and 17 ). The remaining variants that were not found from GWA array data were notably smaller in size (median 7.6 kb) than those concordant (median 181 kb), suggesting that they may have been missed by GWA array technology. Second, for variants not verifiable using GWA arrays, we manually inspected their WGS read alignments using IGV 35 (Supplementary Figs. 18 and 19 ), and all were confirmed. Finally, we evaluated genomic features near those TADs-affecting ultra-rare SVs that were detected in schizophrenia cases (Supplementary Table 12 ). We found that these SVs span 4–995 kb and that 71% of them (67 out of 94) overlapped ≥1 gene. There was a notable difference between TADs-affecting ultra-rare DEL and DUP: 44.7% (17 out of 38) of DUP overlapped genes with high pLI scores or genes implicated in schizophrenia or neurodevelopmental disorders, whereas 16.3% (7 out of 43) of DEL overlapped genes with high pLI scores or genes implicated in neurodevelopmental disorders (H 0 : no difference between DEL and DUP, Fisher’s exact test P = 0.0072). Furthermore, 36.8% (14 out of 38) of the DUP connected with 43 genes with high pLI scores or implicated in neurodevelopmental disorders via a high-confidence regulatory chromatin interaction (HCRCI), whereas 18.6% (8 out of 43) of the DEL connected with 18 such genes via a HCRCI (Fisher’s exact test P = 0.0824). Our observations are consistent with a previous report that duplications display a more complex relationship with chromatin features than deletions 36 . INV was similar to DUP (Supplementary Table 12 ; H 0 : no difference between INV and DUP, Fisher’s exact test P = 0.53). Common variants with large effects were not identified Because SNP arrays do not cover the entire genome even with imputation, we performed single-variant association analysis for all common variants obtained from WGS. Given the sample size of 2098 (1162 cases and 936 controls), we estimated that our sample had ≥80% power to detect risk variants with MAF = 0.25 and genetic relative risks ≥2.0, assuming a type I error level of 5 × 10 −8 (Supplementary Fig. 6 ). SNV/indels: We analyzed 7,895,148 SNVs and 1,368,675 indels with MAF > 0.01 for association with schizophrenia (Supplementary Fig. 20 ). We obtained a λ GC value of 1.03 and an LD score regression intercept of 0.997 (SE = 0.0065), indicating no departure from null expectations or uncontrolled bias. Single-variant association analysis was done using logistic regression assuming an additive genetic model, including PC2 as a covariate for autosomes, and sex and PC2 as covariates for chrX. A number of variants (15) exceeded genome-wide significance but, upon review, all were false positives due to lack of read-alignment support. 
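The DEL-versus-DUP contrast a few sentences above is a 2 × 2 Fisher's exact test on the reported counts; the following quick check should approximately reproduce the reported P = 0.0072:

```python
from scipy.stats import fisher_exact

# 17 of 38 DUP versus 7 of 43 DEL overlapped genes with high pLI scores
# or genes implicated in schizophrenia/neurodevelopmental disorders.
table = [[17, 38 - 17],
         [7, 43 - 7]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, two-sided P = {p_value:.4f}")
```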
These null results are consistent with accumulated experience in schizophrenia genomics, where larger sample sizes are required to detect common variant associations 7 . We believe that this null result is important: we have excluded the possibility of common variants (MAF > 0.01) with large effects in less accessible parts of the genome that have not been evaluated by GWA SNP arrays. SVs: Association analysis of common SVs has the potential to identify causative mutations leading to actionable findings, and much of this class of variants is inaccessible to SNP-based studies. Here we performed association analysis for SVs with MAF > 0.01, using logistic regression models and covariates as described above. The main analysis was for 2199 common DEL (Supplementary Fig. 21 ), but no association reached genome-wide significance. We then inquired into common DUP, INV, ALU, LINE1, and SVA, but also found no significant associations (Supplementary Figs. 22 – 26 ). Heritability estimation using WGS Heritability is the proportion of phenotypic variance explained by genetic factors. Understanding the sources of missing heritability for schizophrenia – the discrepancy between pedigree-heritability of 60–65% 3 , 4 and common-variant SNP-heritability of 24% 6 – is important for experimental designs to identify additional trait loci and possibly for subsequent precision medicine initiatives. Using WGS data for height and body mass index, Wainschtein et al. recently found WGS-heritability very close to twin/pedigree heritability 14 . WGS allowed them to include effects in genomic regions of low MAF and low LD, precisely the regions that are poorly captured by typical SNP arrays or imputation. Following Wainschtein et al.’s approach 14 , we estimated schizophrenia heritability from our WGS data using 1151 cases and 911 controls (post-QC subjects and pairwise genetic relatedness < 0.05) and 17,364,971 sequence variants (post-QC autosomal SNV/indels observed ≥ 3 times or MAF ≥ 0.0007). To evaluate the effect of progressive inclusion of more variants, we computed heritability in different ways by selecting WGS variants that correspond to variant locations in HapMap3 37 , those imputable from 1000GP p3v5 22 and HRC r1.1 21 , and finally by including all WGS variants. First, we assessed common SNP-heritability in the WGS sample using the GREML single-component method implemented in GCTA 38 , 39 . Using 1,189,077 SNPs from WGS that correspond to the SNP locations in HapMap3, the SNP-heritability was 0.45 (standard error [SE] 0.089, liability scale assuming lifetime risk of 1%). Using 7,141,717 SNV/indels from WGS that correspond to the variant locations imputable from 1000GP p3v5, the SNP-heritability was 0.48 (SE 0.091). These estimates are numerically greater than those estimated from SNP arrays in the full Swedish sample (5001 cases; GCTA SNP-heritability using HapMap3 data: 0.32, SE 0.03, and using 1000 Genomes data: 0.33, SE 0.03) 6 , presumably because more stringent evidence of schizophrenia was required for the samples selected for WGS than for the full sample (Methods). Next, we evaluated SNP-heritability using 8,498,854 SNV/indels from WGS that correspond to the variant locations imputable from HRC r1.1. We used the recommended GREML-LDMS method in GCTA 39 , 40 because it is unbiased regardless of the properties (e.g., MAF and LD) of the underlying causal variants (Supplementary Fig. 27 a). The estimated SNP-heritability was 0.52 (SE 0.22). 
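The liability-scale figures quoted in this section rest on the standard observed-to-liability transformation for ascertained case-control samples (Lee et al. 2011), which GCTA applies given an assumed population risk. A minimal sketch, using the paper's assumed lifetime risk of 1% and a made-up observed-scale estimate:

```python
from scipy.stats import norm

def h2_liability(h2_observed: float, K: float, P: float) -> float:
    """Observed-scale h2 to liability-scale h2 (Lee et al. 2011).

    K: population lifetime risk; P: proportion of cases in the sample.
    """
    t = norm.isf(K)      # liability threshold for prevalence K
    z = norm.pdf(t)      # standard normal density at the threshold
    return h2_observed * K**2 * (1 - K)**2 / (z**2 * P * (1 - P))

# Made-up observed-scale estimate of 0.30; K = 1% as assumed above;
# P reflects this study's 1151 cases and 911 controls.
print(f"{h2_liability(0.30, K=0.01, P=1151 / (1151 + 911)):.2f}")
```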
Finally, we used all sequence variants (17,364,971 as above) from WGS and the GREML-LDMS method 39 , 40 to estimate WGS-heritability and partition the additive genetic variance. We found the estimated WGS-heritability was 0.56 (SE 0.51). The point estimate of 0.56 is closer to pedigree-heritability (0.6–0.65, refs. 3 , 4 ), but the SE is large. For rare variants with MAF 0.0007–0.01, WGS variants in the low-LD group contributed 0.40 of the phenotypic variance whereas variants in the high-LD group contributed 0.01 of the variance (Supplementary Fig. 27 b). In contrast, for HRC-imputable variants, 0.06 and 0.03 of the phenotypic variance were contributed by variants in the low- and high-LD groups for MAF 0.0007–0.01 (Supplementary Fig. 27 a). The contribution to phenotypic variance from rare variants in low LD with nearby variants was only revealed by WGS. These variants could only be directly assayed by WGS, as they are not present in SNP arrays and their imputation is not accurate 14 . In sum, the point estimates for heritability were progressively larger as we included more variants, and there was a sizable contribution from rare variants with low-LD metrics that are accessible only via WGS. However, our estimates of SNP- and WGS-heritability had large standard errors. This was due to the limited sample size and the case-control study design (i.e., a binary outcome rather than a continuous trait such as height or body mass). The WGS-heritability estimate had the largest SE, which was additionally due to the large number of rare variants with low MAF and low LD. The sampling variance of a SNP-based heritability estimate is approximately inversely proportional to sample size and proportional to the effective number of independent variants 41 , 42 . Furthermore, we likely underestimated WGS-heritability, especially the contribution from rare variants with MAF < 0.001: First, Wainschtein et al. 14 were able to include WGS variants with MAF as low as 0.0001 (corresponding to MAC ≥ 3 in TOPMed data with 21K subjects), whereas in this study we were limited to a minimum MAF of 0.0007 (corresponding to MAC ≥ 3 in 2K subjects). Second, based on a simulation using AbCD 43 assuming 2062 EUR individuals and 30× WGS, we had >99% power to detect variants at MAF > 0.001 but only >53% power to detect variants at MAF 0–0.001. The lowest MAF bin (MAF 0.0007–0.001) that we were able to consider in this study likely included only half of the variants that could have been observed in a sample with 10,000 subjects. We believe it notable that, although not conclusive, our results for schizophrenia are consistent with those of Wainschtein et al. for height and body mass 14 , 16 . These results imply that, with larger schizophrenia samples (e.g., a sample size of >33,000 is needed to obtain an SE of 0.02, refs. 14 , 41 , 42 ), WGS data may be able to fully recover the total additive genetic variance with the desired precision and will allow further partitioning of the genome into finer MAF/LD groups as well as a variety of functional annotations 14 , 42 . The still missing heritability of schizophrenia may be only misplaced, hiding precisely in the blind spots of SNP arrays, as has been anticipated for over a decade 44 . Discussion We have generated and analyzed a collection of WGS data for a set of patients ascertained for schizophrenia that is, to our knowledge, the largest described in a publication. 
The high depth and uniformity of coverage across the genome for these case data allowed us to detect the large majority of genetic variation present in the genome, including SNVs, indels, CNVs, mobile element insertions, and inversions. In addition, the availability of similar WGS data from Swedish controls allowed us to systematically measure the burden of these different classes of variation in a case/control manner. Through the analysis of these data, we were able to replicate the previously reported excess in schizophrenia of LOF URVs in genes that are putatively LOF-intolerant, as well as the genome-wide excess of rare deletions. This means that we can be more confident that the load of such variants, while modest compared to the identified contribution of common variation to schizophrenia risk, is a subset of the total schizophrenia genetic risk architecture. Our finding that ultra-rare SVs in schizophrenia cases are enriched at TAD boundaries is not surprising. These variants seem to confer a level of relative risk comparable to the protein-coding variants for which we have replicated an excess in schizophrenia. These regions have been reported as being depleted of deletions in human populations relative to the rest of the noncoding genome 36 , and clear phenotypic consequences associated with deletion of these elements have already been demonstrated in a number of other diseases 34 . TAD boundaries are critical to the formation and maintenance of chromatin structure 13 . The disruption of these boundaries has the potential to rearrange the spatial orientation of regulatory elements that are needed for proper expression, as well as to lead to the formation of entirely new TADs. Functional examples of such effects have already been described in mouse models of limb malformation 33 . Based on these prior observations, it is unsurprising that, of all noncoding loci, the burden of these SVs relative to controls appears to be highest in TAD boundaries. While our data support an excess of TAD-affecting ultra-rare SVs in schizophrenia cases relative to controls, the precise impact of these variants on gene expression and regulation has yet to be determined. Many of these SVs overlapped genes, including some of the risk genes for neuropsychiatric disorders. Mechanistic studies are needed to clarify the precise genomic consequences of these TADs-affecting SVs in human brain. A possible future investigation would be to work with patient-derived cells carrying the TADs-affecting SVs we have identified, to determine how promoter-enhancer pairing is altered and whether gene expression changes. Our study has highlighted a specific hypothesis for future functional analyses. It will be critical to determine the precise functional effects of these variants on biology, which, in a manner similar to common variant risk, are likely to converge on higher-order architectures of gene regulation 7 . We chose not to analyze rare mobile element insertions because variant calling for these variants appeared noisy in our 30× WGS data and no external dataset or analytic approach was available for quality control. Increased somatic L1 insertions have recently been reported in neurons of schizophrenia patients using postmortem brain tissues 45 . The detection of somatic L1 insertions required very deep WGS (e.g., 200×) and tailored analytic methods (e.g., machine learning 45 ). 
For similar reasons, we also chose not to evaluate translocations and complex SVs in this study, as we feel that these variants can be better detected from WGS using long-insert jumping libraries, deeper coverage, and targeted capture of breakpoints 46 . The analysis of noncoding variants from WGS data is challenging due to the sheer volume of the noncoding genome and the limited methods available to predict functional changes 28 , 30 , 46 . Recently, the category-wise association study (CWAS) framework has been developed and applied to WGS studies of autism spectrum disorder using 7608 samples from 1902 families 28 , 30 , 46 . The CWAS approach applies multiple annotation methods to define tens of thousands of annotation categories, each of which is tested for association with correction for multiple testing. However, there is a trade-off between false positives and false negatives. In this study we adopted the spirit of the CWAS approach and focused on empirically determined annotation methods, including (1) conservation of DNA sequence estimated from cataloging and comparing genetic variation across human and mammalian species 19 , 20 ; (2) multiple epigenomic annotations experimentally generated from human brain 12 , 13 ; and (3) genes and regions empirically associated with psychiatric disorders. This approach, combined with the relative homogeneity of the Swedish sample, helped improve the power to identify functional variants while controlling the false discovery rate. We failed to detect an excess of risk variation beyond a couple of specific classes of variation, and we believe that this is largely due to a lack of power. Prior data have demonstrated that power to implicate common variation in schizophrenia risk is only sufficient with a much larger case/control cohort, on the order of N case/control > 10,000 3 , 6 . This also applies to the implication of genomic loci based on ultra-rare variation. Cohorts larger than ours have failed to implicate the burden of ultra-rare coding variants in individual genes with schizophrenia risk 10 , and implication of SVs with schizophrenia at locus-level resolution required cohorts far larger than ours 8 . Since we can assume that noncoding ultra-rare SNVs and indels will confer smaller relative risks than damaging coding variants, it is clear that implication of this class of variation, both across the genome and at locus-level resolution, will also require a far larger cohort size. Furthermore, larger samples will be necessary to ensure findings are replicable 30 . In sum, to effectively identify the subset of rare variation across the genome that confers schizophrenia risk in patients, we will need to follow the blueprint constructed for common-variant GWAS. Substantial collaborative effort will be critical. WGS is expensive and generates a large quantity of sequence data that are difficult to efficiently store and analyze en masse. The financial and computational burden inherent to a case/control WGS analysis with sufficient power for discovery is too great for individual groups or institutions, and such an analysis will only be feasible through collaborative work meta-analyzing case/control WGS datasets. The WGS data we have generated are meant to be included in these future efforts. Methods Ethics We have complied with all relevant ethical regulations. 
The study protocol and all procedures on data from human research subjects were approved by the appropriate ethical committees in Sweden and the US (University of North Carolina [Institutional Review Boards], Karolinska Institutet [Regionala Etikprövningsnämnden, Stockholm], University of Uppsala [Regionala Etikprövningsnämnden, Uppsala]). All participants gave their written informed consent. All genomic coordinates are given in NCBI Build 37/UCSC hg19. Subjects All schizophrenia cases included in this study are from the Swedish Schizophrenia Study (S3). Detailed descriptions of S3 procedures are available elsewhere 17 and are briefly summarized here. S3 cases were identified via the Swedish Hospital Discharge Register, which captures >99% of all inpatient hospitalizations in Sweden 47 . The register is complete from 1987 and augmented by psychiatric data from 1973 to 1986. The sampling frame is thus population-based and covers all hospital-treated patients. The Hospital Discharge Register contains dates and ICD discharge diagnoses for each hospitalization, and captures the clinical diagnosis made by attending physicians. Case inclusion criteria: ≥2 hospitalizations with a discharge diagnosis of schizophrenia or schizoaffective disorder, both parents born in Scandinavia, and age ≥18 years. Case exclusion criteria: a hospital register diagnosis of any medical or psychiatric disorder mitigating a confident diagnosis of schizophrenia, as determined by expert review; this removed 3.4% of eligible cases due to the primacy of another psychiatric disorder (0.9%), a general medical condition (0.3%), or uncertainties in the Hospital Discharge Register (e.g., contiguous admissions with brief total duration, 2.2%). The validity of this case definition of schizophrenia is strongly supported, as described previously 17 . Ethical committees in Sweden and in the US approved all procedures and all subjects provided written informed consent (or legal guardian consent and subject assent). We also obtained permissions from the area health board to which potential subjects were registered. Potential cases were contacted directly via an introductory letter followed by a telephone call. If they agreed, a research nurse met them at a psychiatric treatment facility or in their home, obtained written informed consent, obtained a blood sample, and conducted a brief interview about lifetime history of other medical conditions. The S3 included more than 5000 schizophrenia cases, from which we selected 1165 cases for whole-genome sequencing (WGS) in the current study. Our main goals in selection were typical Swedish ancestry and clear schizophrenia caseness. Cases carrying known pathogenic copy number variants (CNVs) (e.g., 22q11del, 16p11dup) were not selected, as a primary question of this study is to evaluate the contribution of novel loci to schizophrenia risk. DNA was extracted from peripheral blood samples. Specifically, our selection procedures required the following case inclusion criteria to be met: (1) high-quality/sufficient DNA satisfying all of: concentration ≥ 80 µg/ml, volume ≥ 150 µl, and purity ratio 1.7–2.2; (2) inclusion in the GWA study 17 ; (3) typical Swedish ancestry defined by the first two PCs used in 17 ; (4) not carrying known large pathogenic CNVs and not an outlier for total number of CNVs as identified in Szatkiewicz et al. 
18 ; and (5) stringent evidence of schizophrenia satisfying all of: >8 inpatient or outpatient psychiatric treatment contacts for schizophrenia or schizoaffective disorder, ≥30 inpatient days for schizophrenia, ≥5 redeemed prescriptions for antipsychotics, and few or no treatment contacts for bipolar disorder. Institutional Review Boards at the University of North Carolina and the regional ethics committee at Karolinska Institutet (Regionala Etikprövningsnämnden, Stockholm) approved all study procedures and all subjects provided written informed consent. All control subjects included in this study are from the SweGen project, a population-based, high-quality genetic variant dataset for the Swedish population. One of the aims of SweGen is to enable WGS association studies for national patient cohort studies in Sweden, by providing data on well-matched national controls selected on the basis of the genetic structure of the Swedish population. Detailed descriptions of the SweGen subjects are available elsewhere 48 and are briefly summarized here. The SweGen project included a total of 1000 individuals, of which 942 individuals were selected from The Swedish Twin Registry (STR) 49 and 58 from The Northern Swedish Population Health Study (NSPHS) 50 . Both STR and NSPHS are population-based collections and were approved by local ethics committees. STR is a national registry of Swedish-born twins established in the 1960s and, at present, holds information on 85,000 twin pairs. In total, 11,000 individuals from the STR (one per monozygotic twin pair) participated in TwinGene and had existing SNP array genotyping. The TwinGene study is a nationwide, population-based study of Swedish-born twins agreeing to participate. The TwinGene sample collection represents the Swedish geographic population density distribution. Based on principal component analysis (PCA), 942 unrelated individuals were selected from TwinGene participants for whole-genome sequencing, mirroring the density distribution. All participants gave their written informed consent and the TwinGene study was approved by the regional ethics committee (Regionala Etikprövningsnämnden, Stockholm, dnr 2007-644-31, dnr 2014/521-32). NSPHS is a health survey in the northern Swedish county of Norrbotten. Based on PCA, 58 individuals were selected from NSPHS. The NSPHS study was approved by the local ethics committee at the University of Uppsala (Regionala Etikprövningsnämnden, Uppsala, 2005:325 and 2016-03-09). All participants gave their written informed consent to the study, including the examination of environmental and genetic causes of disease, in compliance with the Declaration of Helsinki. Given the selected 1000 subjects that constitute SweGen, a PCA using genotypes from high-density SNP arrays was performed and confirmed that the SweGen control cohort captured the diversity in the country. Furthermore, since STR and NSPHS are already established national sample collections that do not reflect recent migration patterns, the SweGen control cohort is likely to reflect the genetic structure of Swedish individuals who have been present in Sweden for at least one generation. From the SweGen subjects, we selected the 942 STR/TwinGene individuals as controls in this study because of their matched ancestry with the selected schizophrenia cases. Phenotype data were not allowed in the SweGen project, in order to make a less restrictive access policy possible. Consequently, we were unable to screen for the presence of individuals with schizophrenia. 
However, we estimate that at most 1 control individual may carry a schizophrenia diagnosis (given the estimated schizophrenia prevalence of 0.0009 in the full STR/TwinGene project of 11,000 individuals). Misclassification of a single control subject is unlikely to affect the results or the power of the study. DNA for the STR/TwinGene individuals was extracted from blood. All S3 subjects, including those in this WGS study, had GWA SNP array genotyping 17 and exome sequencing 9 , 10 . DNA was extracted from peripheral venous blood for all subjects. GWA array genotyping was done in six batches at the Broad Institute of MIT and Harvard using Affymetrix 5.0 (3.9%), Affymetrix 6.0 (38.6%), and Illumina OmniExpress (57.4%). Exome sequencing was done at the Broad Institute of MIT and Harvard in twelve separate waves. The first wave used the Agilent SureSelect Human All Exon Kit and Illumina GAII. Other waves used the newer Agilent SureSelect Human All Exon v.2 Kit and Illumina HiSeq 2000 and HiSeq 2500 instruments. Paired-end reads of 76 bp were used across all waves. Analyses of the SNP array and exome sequencing data have been published previously. Data on common SNPs are published in Ripke et al. 17 . Data on exonic SNVs and indels are published in Genovese et al. 10 . Data on large rare CNVs are published in Szatkiewicz et al. 18 . All data are in NCBI build 37/UCSC hg19 coordinates. Whole-genome sequencing and data processing Library preparation and sequencing were performed by the National Genomics Infrastructure platform in Sweden. All cases and controls were processed using identical library preparation and sequencing protocols at two facilities. WGS libraries were prepared from ~1 μg DNA using Illumina TruSeq PCR-free DNA sample preparation kits targeting an insert size of 350 bp. Library preparation was performed according to the manufacturer’s instructions. The protocols were automated using an Agilent NGS workstation and Beckman Coulter Biomek FXp. WGS clustering was done using cBot, and paired-end sequencing with 150 bp read length was performed on Illumina HiSeqX (HiSeq Control Software 3.3.39/RTA 2.7.1) with v2.5 sequencing chemistry. Identical analysis pipelines (including software tool versions) were used for processing all case and control samples together. For alignment, the workflow engine Piper 51 (v1.4.0) was used to perform pre-processing and variant discovery, coordinated using the National Genomics Infrastructure pipeline framework. Following the GATK guidelines, raw reads were aligned to the GRCh37 human reference genome (human_g1k_v37.fasta) using bwa mem 52 (v0.7.12). The resulting alignments (.BAM) were sorted and indexed using SAMtools 53 (v0.1.19). Alignment quality control statistics were gathered using qualimap 54 (v2.2). Alignments for the same sample from different flowcells and lanes were merged using Picard MergeSamFiles (v1.120). For quality control of aligned sequence reads, we ran FastQC 55 on the BAM files in order to assess sequencing quality and to identify outlier samples that might be subject to contamination. We analyzed a number of sequencing QC metrics (e.g., adapter content, per base N nucleotide content, per base sequence content, per base sequence quality, per sequence GC content, per sequence quality scores, sequence duplication level, and sequence length distribution). 
We analyzed a number of sequence coverage QC metrics produced by SAMtools flagstat (e.g., sequencing depth, percentage of mapped reads, percentage of properly paired reads, percentage of singletons, percentage of duplicates, and percentage of paired-end reads with one mate mapped to a different chromosome). Finally, we checked the uniformity of read coverage using BEDTools genomecov 56 , on the basis of which we required that samples have ≥80% of bases covered at least 20× for confident variant calling. These procedures identified one outlier sample (a schizophrenia case). We confirmed the identity of all subjects by comparing SNP genotypes from WGS to those from GWA SNP array genotyping 17 and exome sequencing 10 . Identity-by-descent was estimated using PLINK 57 (v1.9) for each sample between WGS-based genotypes and array- or WES-based genotypes at overlapping SNPs. Based on this analysis, identity was confirmed for all samples (i.e., no sample swap was found). The identity of the SweGen subjects has been confirmed previously 48 . Variant discovery and genotyping - SNV and indels We processed all case and control BAM files together and performed joint genotyping of SNVs and indels across all samples using GATK (v3.3) 58 . The raw alignments were processed following GATK best practices with GATK (v3.3). Alignments were realigned around indels using GATK RealignerTargetCreator and IndelRealigner, duplicates were marked using Picard MarkDuplicates (v1.120), and base quality scores were recalibrated using GATK BaseRecalibrator. Finally, gVCF files were created for each sample using the GATK HaplotypeCaller (v3.3). Reference files from the GATK v2.8 resource bundle were used throughout. All these steps were coordinated using Piper (v1.4.0). Joint genotyping was conducted on all cases and controls as recommended by GATK 58 . Due to the large number of samples, 22 batches of 100 samples were merged into 22 separate gVCF files using GATK CombineGVCFs. The 22 individual gVCF files were split by chromosome and further combined with CombineGVCFs. As a result, a single gVCF file was obtained, which was used as input for GATK GenotypeGVCFs. Subsequently, SNVs and indels were extracted from the resulting gVCF files. To further select high-quality genetic variants, GATK VQSR filtering was executed on SNPs and indels separately using the GATK VariantRecalibrator and ApplyRecalibration walkers. VQSR sensitivity thresholds were selected to maximize the sensitivity of variant discovery in comparison with WES data previously generated on the same samples. GATK Variant Quality Score Recalibration (VQSR) was used to filter variants as recommended by GATK guidelines. The SNV VQSR model was trained using SNP sites from HapMap3.3 37 , 1000 Genomes Project (1000GP) sites found to be polymorphic on Illumina Omni 2.5 M SNP arrays 59 , 1000GP Phase 1 high-confidence SNPs 60 , and dbSNP 61 (v138). A 99.6% sensitivity threshold was applied to filter variants, resulting in a Ti/Tv ratio of 2.001. The indel VQSR model was trained using high-confidence indel sites 62 , 1000GP, and dbSNP (v138), and a 99.0% sensitivity threshold was used. The sensitivity thresholds were determined empirically by comparing to WES data in the same samples to optimize sensitivity and specificity of variant detection. We kept only the ‘PASS’ variants based on the results of VQSR. 
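The coverage rule applied above (≥80% of bases covered at least 20×) can be checked directly from the histogram that BEDTools genomecov emits; a minimal sketch, assuming genomecov's default five-column histogram output with its genome-wide summary rows:

```python
import sys

def fraction_at_or_above(genomecov_path: str, min_depth: int = 20) -> float:
    """Sum the genome-wide rows of a `bedtools genomecov` histogram
    (columns: chrom, depth, bases at depth, chrom size, fraction)
    for every depth at or above min_depth."""
    total = 0.0
    with open(genomecov_path) as fh:
        for line in fh:
            chrom, depth, _bases, _size, fraction = line.split()[:5]
            if chrom == "genome" and int(depth) >= min_depth:
                total += float(fraction)
    return total

if __name__ == "__main__":
    frac = fraction_at_or_above(sys.argv[1])
    verdict = "PASS" if frac >= 0.80 else "FAIL"
    print(f"{frac:.1%} of bases covered at >=20x: {verdict}")
```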
The GATK HaplotypeCaller walker was executed with the ploidy = 1 flag on male samples, except for the PAR regions, which were processed with ploidy = 2. CombineGVCFs and GenotypeGVCFs were run by analogy with the processing of the autosomes, as described above. VQSR filtering was performed with the sensitivity thresholds inferred from the autosomes. To assess the robustness of the callset, we evaluated hard filters in comparison to the VQSR filter: we constructed histograms of 16 variant quality metrics reported by GATK GenotypeGVCFs, manually selected reasonable thresholds for good-quality variants, and performed hard filtering according to the selected thresholds. The two filtering strategies, VQSR and hard filtering, gave nearly identical results, confirming the robustness of the final variant call set.

Variant discovery and genotyping - structural variants

We applied three complementary algorithms for the discovery and genotyping of structural variants (SVs). These algorithms were chosen for their established performance in the 1000GP 32. We processed all case and control genomes together using the protocols recommended for each algorithm. We used ExpansionHunter 63 (v2.5.5) with default parameters to identify expansions of short tandem repeats. Using PCR-free WGS, ExpansionHunter can accurately genotype known pathogenic repeat expansions even when the expanded repeat is longer than the read length. With ExpansionHunter v2.5.5, the catalog of known pathogenic repeat expansions covers repeats in 16 genes: AR, ATN1, ATXN1, ATXN10, ATXN2, ATXN3, ATXN7, C9ORF72, CACNA1A, CSTB, DMPK, FMR1, FXN, HTT, JPH3, and PPP2R2B. The sizes of the pathogenic repeat expansions are documented in the literature (Table S3). Using the disease thresholds, we identified pathogenic repeat expansions and counted the number of cases and controls carrying them. We used Delly 64 (v0.7.7) with default parameters to detect and genotype three types of SVs: deletions, tandem duplications, and inversions between 500 bp and 500 Mb in size. We ran the default protocol for germline DNA and high-coverage sequencing. Specifically, for each type of SV, we (1) discovered SV sites per sample using the paired-end mapping signature with split-read refinement; (2) merged SV sites into a unified site list following the strategies used by the 1000GP 32 (i.e., for deletions and duplications, 70% reciprocal overlap and a maximum breakpoint offset of 250 bp; for inversions, 90% reciprocal overlap and a maximum breakpoint offset of 50 bp); (3) genotyped the unified SV sites in all samples; (4) merged all genotyped samples into a single VCF; and (5) applied the default germline SV filters to identify confident SVs (i.e., minimum fractional ALT support = 0.2, minimum SV size = 500 bp, maximum SV size = 500 Mb, minimum fraction of genotyped samples = 0.75, minimum median GQ for carriers and non-carriers = 15, maximum read-depth ratio of carrier vs. non-carrier for a deletion = 0.8, minimum read-depth ratio of carrier vs. non-carrier for a duplication = 1.2, and 'PASS' variants). Finally, we kept only high-confidence genotypes that passed the per-sample genotype filter (i.e., FORMAT/FT = PASS) and had additional support from read-depth-based copy-number estimates (i.e., FORMAT/CN < 2 for deletions, CN > 2 for duplications, and CN = 2 for inversion genotypes).
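As an illustration of this genotype-level filtering, the R sketch below applies the FT and copy-number criteria to a hypothetical per-genotype table exported from the Delly VCF (for example, with bcftools query); the file name and column names are assumptions, not the published pipeline.

gt <- read.delim("delly_genotypes.tsv", stringsAsFactors = FALSE)

# Keep genotypes that pass the per-sample filter and have read-depth support:
# CN < 2 for deletions, CN > 2 for duplications, CN = 2 for inversions
keep <- gt$FT == "PASS" & (
  (gt$svtype == "DEL" & gt$CN < 2) |
  (gt$svtype == "DUP" & gt$CN > 2) |
  (gt$svtype == "INV" & gt$CN == 2))
gt_confident <- gt[keep, ]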
We used the Mobile Element Locator Tool (MELT, v2) 65 to detect and genotype three types of mobile element insertions (MEIs): ALU, SVA, and LINE1. We used the MELT-SPLIT workflow with default parameters, which consists of four steps: (1) MEI discovery in individual samples; (2) group analysis, in which discovery information is merged across all samples to build models containing all available evidence for each candidate MEI site; (3) genotyping of all WGS samples using the merged MEI discovery information; and (4) final filtering and merging of individual samples into the final VCF. We used the default filters (no-call filter, 5′ and 3′ evidence filter, discordant pair overlap filter, low-complexity filter, and allele count 0 filter) and included in the final VCF only those variants that passed the default filtering of MELT.

Evaluation of variant detection

For SNVs/indels, we used variant calls from exome sequencing to evaluate the accuracy of WGS-based genotypes. We focused on the autosomes and estimated genotype accuracy by calculating the concordance rate between WGS-based genotypes and those obtained from exome sequencing across variants that overlapped between the two technologies. We calculated the overall concordance rate as well as concordance rates when the WES-based genotypes were homozygous reference, heterozygous, and homozygous non-reference. In all calculations, only genotypes with sequencing depth ≥ 10 and GQ ≥ 20 were included in the comparison. The Python code "concordance.py" was used for this analysis. For deletions and duplications, we evaluated concordance using prior data from GWA SNP arrays or exome sequencing: GWA genotyping arrays had previously been used to detect large, rare deletions and duplications genome-wide, and WES to detect rare exonic deletions and duplications, in the same samples. We compared the concordance of WGS-based genotypes with those based on either the GWA array or exome sequencing across overlapping variants. Overlapping variants were required to have ≥50% reciprocal overlap and to occur in the same individual. We calculated the overall concordance rate as well as concordance rates when the genotypes from the GWA array or exome sequencing were heterozygous or homozygous non-reference. Variant overlap was computed using BEDTools (v2.28.0).

Quality control

For subject quality control, we used PLINK (v1.9). In sum, subject QC excluded 9 subjects: failed sequencing quality metrics (1 case), sex mismatch (1 control), sex chromosomal abnormality (2 cases with XXY), and one of any pair of subjects with high relatedness \(\hat \pi > 0.2\) (5 controls). These procedures resulted in a final sample size of 2098 subjects (1162 schizophrenia cases and 936 controls), all of whom had an SNV/indel missing rate per sample < 0.01 and a heterozygosity rate < 0.1. In selecting the schizophrenia cases, we excluded carriers of known large pathogenic CNVs and subjects with an abnormally high total number of CNVs, as identified by Szatkiewicz et al. 18 using SNP arrays; we confirmed these exclusions using the SV calls from WGS. The sex check used the heterozygosity rate of the sex chromosomes and the coverage of the sex chromosomes; it identified sex mismatches, where the reported sex did not match the biological sex, and chromosomal abnormalities, where extra chromosomes were present.
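A minimal R sketch of such a sex check follows, assuming a hypothetical per-sample table with a chrX heterozygosity rate and chrX/chrY coverage normalized to the autosomes; the thresholds are illustrative, not the study's values.

sx <- read.delim("sex_metrics.tsv", stringsAsFactors = FALSE)

# Infer sex from X heterozygosity and normalized Y coverage
inferred <- ifelse(sx$chrX_het > 0.10 & sx$chrY_cov < 0.1, "female",
            ifelse(sx$chrX_het < 0.05 & sx$chrY_cov > 0.3, "male", "ambiguous"))

# Mismatches against the reported sex
mismatch <- sx$sample_id[inferred != "ambiguous" & inferred != sx$reported_sex]

# Possible XXY: elevated X coverage together with Y coverage
xxy_like <- sx$sample_id[sx$chrX_cov > 0.75 & sx$chrY_cov > 0.3]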
Relatedness testing and principal component analysis (PCA) were done following established pipelines on eligible bi-allelic autosomal SNPs using PLINK (v1.9). Of all bi-allelic autosomal SNPs, we removed variants that had minor allele frequency < 0.05, missing rate per variant > 0.01, a difference in missing rate between cases and controls > 0.02 or P < 0.005, Hardy–Weinberg equilibrium false discovery rate (FDR) < 1 × 10−6 (controls) or < 1 × 10−10 (cases), or were in linkage disequilibrium (r2 > 0.05). Relatedness testing identified any pairs of subjects with \(\hat \pi > 0.2\), and we removed one member of each such relative pair. PCA estimated 20 PCs, which were used in the empirical evaluation of covariates to be included in the association analyses. Furthermore, for quality control purposes, we performed PCA of our data together with 1000 Genomes Project data on HapMap individuals and SweGen data on NSPHS individuals; the same quality steps were followed to identify eligible SNPs in the combined data. For SNV/indel quality control, we removed variants with missing rate per variant > 0.01 (before sample removal) and applied genotype QC by setting low-quality genotypes with DP < 10 or GQ < 20 to missing. We then removed variants that were monomorphic, had missing rate per variant > 0.02 (after genotype QC and sample removal), had a difference in missing rate between cases and controls > 0.02 or P < 0.005, or had Hardy–Weinberg equilibrium FDR < 1 × 10−6 (controls) or < 1 × 10−10 (cases). After QC, we extracted variants with minor allele frequency (MAF) ≥ 0.01 for common-variant association analysis and the remainder for rare-variant aggregated association analysis. These QC procedures were done using PLINK (v1.9). For SV quality control, we followed established pipelines 8. SVs were removed if they overlapped by more than 66% with large genome gaps (e.g., centromeres), segmental duplications, or regions subject to somatic V(D)J recombination in white blood cells, the logic being that such variant calls are likely artifactual. Finally, we extracted variants with MAF ≥ 0.01 for common-variant association analysis and the remainder for rare-variant aggregated association analysis. These QC procedures were done using PLINK (v1.07).

Annotation of variants

We used VEP 66 (v91), vcfanno 67 (v0.2.9), and AnnotSV 68 (v1.1.1) for variant annotation. For population allele frequencies, we annotated SNVs/indels using allele frequencies from gnomAD r2.0.2 genomes and the ExAC r0.3 non-psychiatric exomes 24,25. For SVs, we annotated the variants using population allele frequencies from the 1000GP and the Database of Genomic Variants (DGV) 31,32. We used the default settings in AnnotSV; that is, an SV from the 1000GP or DGV is reported if it overlaps the SV being annotated by > 70%. For sequence constraint in humans, we annotated variants using the context-dependent tolerance score (CDTS) from the map of sequence constraint for the human species 19. The downloaded CDTS scores were provided in 10-bp bins in hg38 coordinates, which were lifted over to GRCh37/hg19 for the analyses in this study. When a variant spanned multiple CDTS bins, the mean CDTS was computed and used to annotate the variant. For sequence constraint in mammals, we used the genomic evolutionary rate profiling (GERP) score 20. For transcript-level annotations, we annotated variants with VEP (v91) using Ensembl transcripts from GENCODE 69 (v16). For SNVs/indels, we further annotated the variants using the annotation database dbNSFP (v3.5a).
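To illustrate the CDTS bin-averaging step described above, here is a minimal R sketch with made-up inputs; a real analysis would read the published 10-bp bin scores rather than the random values used here.

# Average the CDTS scores of all 10-bp bins overlapped by a variant
cdts_for_variant <- function(var_start, var_end, bins) {
  hit <- bins$start < var_end & (bins$start + 10) > var_start
  mean(bins$cdts[hit])
}

bins <- data.frame(start = seq(0, 90, by = 10),  # toy bins for one region
                   cdts = rnorm(10))
cdts_for_variant(var_start = 12, var_end = 35, bins)  # averages the bins starting at 10, 20, 30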
Exonic SNVs/indels were classified into groups following criteria based on those used in Genovese et al. 10: synonymous; missense non-damaging; missense damaging (dbNSFP_MetaSVM_pred = "D" and dbNSFP_fathmm_MKL_coding_pred = "D"); and loss-of-function (stop-gain, frameshift, or splice donor/acceptor). For brain exon annotations, we used a published dataset of long-read RNA sequencing of human brain tissue 29. The data came as a BED file in which each interval represents a uniquely observed exonic region, along with the total number of reads aligning to that region. We took the subset of exons with at least 10 overlapping reads, sufficient support that the exon derives from an isoform and is unlikely to be mere transcriptional noise. We split the exons into (1) those within coding loci and (2) those outside coding loci, by subsetting the intervals on gene-based merged translation start/stop intervals; the latter represent a space where a novel coding exon could potentially be found. For brain epigenomic annotations, we relied on empirically generated annotations that have been shown to be important for gene regulation in the brain. Epigenomic data are restricted to the autosomes. First, we used the open chromatin regions obtained from ATAC-seq on adult prefrontal cortex brain samples, as reported in Bryois et al. 12: ATAC-seq was performed on adult prefrontal cortex samples from 135 individuals with schizophrenia and 137 controls, and a total of 118,152 high-confidence ATAC-seq peaks were identified. Second, we used the "easy Hi-C" readouts obtained from adult temporal cortex, as described in Giusti-Rodríguez et al. 13. Easy Hi-C was applied to six postmortem samples (N = 3 adult temporal cortex and N = 3 fetal cerebra), and 1.323 billion high-confidence cis-contacts were used for the analyses. Three major readouts were generated: frequently interacting regions (FIREs), chromatin interactions (a.k.a. Hi-C loops), and topologically associating domains (TADs). FIREs were defined as 40-kb genomic bins with significantly more Hi-C interactions (FIRE score P < 0.05). Chromatin interactions were defined as intra-chromosomal interactions between 10-kb bins that were > 20 kb apart (i.e., not contiguous) and ≤ 2 Mb apart. FIREs are a small subset of the chromatin-interacting regions, with considerably more three-dimensional contacts. Chromatin interactions have a strong tendency to occur within TADs (discrete megabase-scale regions with less frequent interactions outside the regions); TAD boundaries are defined in 40-kb bins. Finally, we further included epigenetic marks (i.e., CTCF, H3K27ac, and H3K4me3) obtained from ChIP-seq of postmortem brain tissue from fetal and adult samples, also generated in ref. 13. Using the gene model defined by GENCODE (v16), we assessed gene sets previously implicated in schizophrenia and neurodevelopmental disorders, including:

Loss-of-function (LoF) intolerant genes: genes from Lek et al. 24.

Calcium channel gene set: the 26 voltage-dependent calcium channel genes.

CELF4 gene set: genes with "iCLIP occupancy" > 0.2 from Supplementary Table 4 of Wagnon et al. 70.

CHD8 gene set: genes from Cotney et al. 71.

FMRP Darnell gene set: the 842 mouse genes from Supplementary Table 2A of Darnell et al. 72, including all genes with FDR < 0.01.

NMDARC: a list of combined NMDAR and ARC complex genes from Supplementary Table 9 of Kirov et al. 73.

PSD gene set: a gene list generated from human cortex biopsy data in Bayes et al. 74.
PSD-95 gene set: a gene list generated from human cortex biopsy data in Bayes et al. 74.

RBFOX gene sets: RBFOX1/2/3 genes selected from Supplementary Table 1 of Weyn-Vanhentenryck et al. 75.

Genes.ID/DD/ASD: 288 genes implicated in de novo variant studies, selected from Supplementary Tables 15–18 of Nguyen et al. 11, based on q-value < 0.05 for developmental delay (DD), q-value < 0.1 for autism spectrum disorder (ASD), q-value < 0.1 for intellectual disability (ID), and q-value < 0.5 for epilepsy (EPI).

SCZGWAS: genes implicated by schizophrenia common-variant association studies; we used genes from the 145 regions known to be associated with schizophrenia from Pardinas et al. 6.

CMCqval05: the CommonMind Consortium (CMC) sequenced RNA from the dorsolateral prefrontal cortex of schizophrenia cases (N = 258) and control subjects (N = 279), from which we selected genes showing differential expression between cases and controls at q-value < 0.05 76.

For certain tests of SNV/indel burden, we focused on burden within gene regions of a generalized coding transcript structure, broadly defined as extending from 35 kb upstream of the most distal transcription start site to 10 kb downstream of the most distal transcription stop site (transcript_35kb_10kb).

Variant subsetting

Protein-coding sequences were defined using protein-coding transcripts from GENCODE (v16). We focused coding SNV/indel analyses on the set of variants that, with a high degree of confidence, affect bases involved in the production of a functional protein. Coding variants have at least one transcript-level IMPACT classification of LOW, MODERATE, or HIGH according to VEP (v91). We defined SNVs/indels as noncoding if they did not alter the sequence content of coding regions or the splice dinucleotides of GENCODE protein-coding transcripts; noncoding variants have only IMPACT classifications of MODIFIER according to VEP (v91). For SVs, we followed the criteria used in Brandler et al. 77 to define coding versus noncoding variants. Protein-coding sequences were defined using the consensus coding sequence from GENCODE (v16). Coding deletions, duplications, or mobile element insertions were defined as those affecting any protein-coding sequence. Coding inversions either have one or both breakpoints inside a protein-coding exon of a gene, have breakpoints in two different introns of a gene while overlapping at least one coding exon, or have one breakpoint in an intron of a gene and the other breakpoint outside of that gene. Inversions that inverted an entire gene or genes but had intergenic breakpoints were considered noncoding. From the post-QC variant callsets, we defined ultra-rare SNVs/indels as singletons within our WGS case/control cohort (allele count = 1 in the 2098 post-QC subjects) that were also absent from independent population cohorts (gnomAD genomes allele count = 0 and non-psychiatric subset of ExAC allele count = 0) 24,25. This is because the full ExAC and gnomAD exome cohorts include exome sequence data derived from schizophrenia case samples included in this study, and applying MAF constraints using the full cohorts could bias association results against schizophrenia cases. Subsetting of noncoding ultra-rare SNVs/indels on annotations was done using in-house Python scripts (VCFscreen, v0.1), based on interval overlap with annotations defined by genomic coordinates.
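The ultra-rare definition reduces to a simple filter; the R sketch below assumes a hypothetical annotation table with cohort and population allele-count columns.

ann <- read.delim("variant_annotations.tsv", stringsAsFactors = FALSE)

is_urv <- ann$ac_cohort == 1 &          # singleton among the 2098 post-QC subjects
          ann$ac_gnomad_genomes == 0 &  # absent from gnomAD genomes
          ann$ac_exac_nonpsych == 0     # absent from non-psychiatric ExAC
urv <- ann[is_urv, ]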
From the post-QC SV callset, we defined ultra-rare SVs as occurring once in our case/control cohort (allele count = 1 in the 2098 post-QC subjects) and being absent from independent population cohorts, including the 1000GP and DGV 31,32. Based on the default settings of AnnotSV 68, an SV was considered absent from the population cohorts if it did not overlap, or overlapped by < 30%, any variant in the population databases. Subsetting of ultra-rare SVs on annotations was done using PLINK (v1.07) based on interval overlap.

Power calculation and correction for multiple comparisons

We used the R/gap package (v1.2.1) to estimate statistical power for the association analyses. We assumed an additive model, a lifetime risk of schizophrenia of 1%, and two type I error levels: (1) 5 × 10−8, the established genome-wide significance threshold for single-variant association, and (2) 1 × 10−5, as in Werling et al. 46. We computed the minimal detectable genotypic risk ratio needed to achieve 20% and 80% power over a range of risk-allele frequencies in the population. For the single-variant association test, the x-axis of the power plot represents the frequency of a single variant; for the burden test, it represents the aggregated frequency of a set of variants aggregated over a target region of interest. To correct for multiple comparisons in the common-variant association analysis, we used the established genome-wide significance threshold of 5 × 10−8. To correct for multiple comparisons in the burden analyses of ultra-rare variants, we applied the Benjamini–Hochberg false discovery rate (BH-FDR) method to the family of hypotheses involving ultra-rare SNVs/indels (a total of 74 tests, summarized in Supplementary Table 5) and to those involving ultra-rare SVs (a total of 29 tests, summarized in Supplementary Tables 7 and 8). We used the p.adjust function in R (v3.2.2) to implement the BH-FDR method, and applied a threshold of 0.05 to the FDR-adjusted P values (a.k.a. q values) to determine statistical significance.

Burden of ultra-rare SNVs/indels

Given that the large majority of ultra-rare SNVs and indels (URVs) are not assumed to confer risk in schizophrenia cases, we first tested the null hypothesis that the total rate of these variants is not a significant predictor of schizophrenia status. Before outlier pruning, with 1162 cases and 936 controls, we fitted a simple logistic regression model with case/control status as the dependent variable and the count of URVs per sample as the predictor. We found that cases had a higher mean URV count (4456 vs. 4289, P = 0.002, two-sided), and that this was driven primarily by a subset of samples with unusually high URV counts. These URV outlier samples could bias the analysis of URVs, even though they were not a concern for the analyses of common variants and SVs, and therefore needed to be removed. Following an approach previously established in the full Swedish sample 10, we pruned samples with an outlier total URV count, here defined as > 6000 (Supplementary Fig. 7). The outlier samples showed relatively higher ancestry heterogeneity (Supplementary Fig. 8), similar to the previous finding in the full Swedish sample in Genovese et al. 10. After outlier pruning, we had 1104 cases and 921 controls, and there was no evidence of a difference in mean URV count between cases and controls (4262 vs. 4249, P = 0.4225, one-sided assuming higher burden in cases).
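A minimal R sketch of this global check and pruning step follows, with a hypothetical per-sample table of total URV counts and the phenotype coded schizophrenia = 1, control = 0.

dat <- read.delim("urv_counts.tsv", stringsAsFactors = FALSE)

# Does the total URV count predict case status? (two-sided Wald test)
fit <- glm(phenotype ~ urv_total, family = binomial, data = dat)
coef(summary(fit))["urv_total", ]

# Prune URV-count outliers before region-specific burden testing
dat_pruned <- dat[dat$urv_total <= 6000, ]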
Burden testing was done using VCFscreen (v0.1) and R (v3.2.2). All tests of URV burden in cases relative to controls were carried out in a logistic regression framework that has been used in prior studies 10. Specifically, the dependent variable in the logistic regression is the phenotype (schizophrenia = 1, control = 0). The primary predictor is the count per sample of URVs specific to the target region annotation, whether coding or noncoding. Based on empirical evaluation, we included three covariates in the logistic regression: mean_coverage; PC2 (the only one of the 20 PCs determined from common SNPs that predicted case/control status at P < 0.01); and the total URV count per sample. We carried out one-sided statistical tests assuming an increased burden of URVs in cases. Logistic regression models were fitted with the glm function in R (v3.2.2). Odds ratios were computed to measure the increase in the likelihood of disease per unit increase in URV burden. Empirical P values were derived from 10,000 permutations obtained by swapping phenotype labels.

Burden of ultra-rare SVs

Analysis was done using PLINK (v1.07) and R (v3.2.2). All tests of ultra-rare variant burden in cases relative to controls were carried out in a logistic regression framework established in prior studies 8,18, and the analysis was done for each type of variant separately. To ensure the robustness of the analysis, we first empirically evaluated variables that could potentially confound the association results. We fitted a multiple linear regression model in which the dependent variable was the genome-wide total number of ultra-rare SVs, and the predictor variables were sex, mean sequence coverage, and the first three principal components derived from common SNP genotypes. Only the first principal component (PC1) showed a significant association with the genome-wide burden of ultra-rare SVs. To control for its potential confounding effect, we included PC1 as a covariate in all tests of ultra-rare SV burden. For genome-wide burden tests, we fitted the logistic regression model y ~ covariate + global, where y is the phenotype (schizophrenia = 1, control = 0), covariate is the empirically determined covariate (i.e., PC1), and global is the genome-wide total number of ultra-rare SVs. For burden tests in target regions, we fitted, for each target region, the model y ~ covariate + global + target_region, where target_region is the count per sample of ultra-rare SVs specific to the target region annotation. The variables global and target_region were computed from the input variants (i.e., coding, noncoding, or combined coding and noncoding). We carried out one-sided statistical tests assuming an increased burden of ultra-rare SVs in cases. Logistic regression models were fitted with the glm function in R (v3.2.2). Odds ratios were computed to measure the increase in the likelihood of disease per unit increase in the burden of ultra-rare SVs. Empirical P values were derived from 10,000 permutations obtained by swapping phenotype labels.
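A minimal sketch of the SNV/indel burden model described above (phenotype ~ covariates + target-region URV count), with hypothetical column names; the SV version would substitute PC1 and the genome-wide SV count as covariates. The permutation loop mirrors the label-swapping scheme but is illustrative rather than the authors' code.

dat <- read.delim("burden_input.tsv", stringsAsFactors = FALSE)

model <- phenotype ~ mean_coverage + PC2 + urv_total + burden
fit <- glm(model, family = binomial, data = dat)

# One-sided test assuming increased burden in cases, and the odds ratio
z <- coef(summary(fit))["burden", "z value"]
p_one_sided <- pnorm(z, lower.tail = FALSE)
odds_ratio <- exp(coef(fit)["burden"])

# Empirical P value from 10,000 phenotype-label permutations (slow but simple)
perm_z <- replicate(10000, {
  d <- dat
  d$phenotype <- sample(d$phenotype)
  coef(summary(glm(model, family = binomial, data = d)))["burden", "z value"]
})
p_empirical <- mean(perm_z >= z)

Across each family of such tests, the resulting P values would then be adjusted with p.adjust(p, method = "BH"), as described above.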
Single-variant association analysis

Analysis was done using PLINK (v1.9). Following the general guideline for logistic regression, we used a MAF cutoff of 0.01 to ensure at least 10 events in the less frequent category; post-QC variants with MAF > 0.01 were subjected to single-variant association analysis. Established variant filters were used to ensure that all variants had a missing rate per variant < 0.02, a difference in missing rate between cases and controls < 0.02 (P > 0.005), and Hardy–Weinberg equilibrium FDR < 1 × 10−6 (controls) or < 1 × 10−10 (cases). To empirically determine confounding factors, we fitted logistic regression models in which the dependent variable was the phenotype (case/control status) and the predictor variables were sex and the first 20 PCs determined from common SNPs. Only PC2 showed a significant association with phenotype. We therefore included PC2 as a covariate for the analysis of autosomal variants, and PC2 and sex as covariates for the analysis of chromosome X. A logistic regression with an additive genetic model (PLINK --logistic) and the empirically determined covariates was used to estimate the association between single variants and schizophrenia. Statistical tests were two-sided, and the established threshold of 5 × 10−8 was used to declare genome-wide significance. Following association, we used IGV to inspect the read alignments underlying each putative variant that exceeded the genome-wide significance threshold; false positives without IGV support were excluded. Manhattan plots were constructed using R (v3.2.2). The analysis was done separately for SNVs/indels, deletions, duplications, inversions, ALU, SVA, and LINE1. For SNVs/indels, deletions, duplications, and inversions, we filtered variants as described above. For ALU, LINE1, and SVA, we additionally restricted attention to the most reliable variants by selecting those with a quality score of 5 (best). For ALU, the MAF threshold was set to > 0.05 and the Hardy–Weinberg equilibrium threshold to FDR < 0.05 for both cases and controls.
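For a single variant, the model PLINK --logistic fits corresponds to the following R sketch, shown with simulated data (an additive 0/1/2 genotype coding and PC2 as covariate); PLINK simply repeats this fit for every variant.

set.seed(1)
dat <- data.frame(phenotype = rbinom(2000, 1, 0.5),  # simulated case/control labels
                  genotype  = rbinom(2000, 2, 0.2),  # additive allele-count coding
                  PC2       = rnorm(2000))

fit <- glm(phenotype ~ genotype + PC2, family = binomial, data = dat)
coef(summary(fit))["genotype", ]  # two-sided Wald test, compared against 5e-8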
Heritability estimation

Following Wainschtein et al. 14, we started with the initial set of QC-passing subjects and post-QC SNVs/indels and additionally required that each variant be observed at least three times in our dataset (i.e., MAF starting at 0.0007). We then removed one member of each pair of individuals with estimated genetic relatedness > 0.05. These procedures resulted in 1151 cases, 911 controls, and 17,364,971 sequence variants for narrow-sense heritability estimation. HapMap3 SNPs were downloaded from the HapMap FTP site. To identify imputable variants from the Haplotype Reference Consortium (HRC) 21 and the 1000 Genomes Project Consortium (1000GP) 22, we used previously imputed data obtained by taking the SNP genotypes of the schizophrenia subjects from Illumina OmniExpress array genotyping and imputing them to the HRC.r1.1 or the 1000GP p3v5 reference panel using EAGLE2 78 (v2.0.5). From the HRC-imputed variants, we excluded variants with imputation INFO score < 0.8, individual missing rate > 0.05, genotype missing rate > 0.05, MAF < 0.0001, or Hardy–Weinberg equilibrium test P < 1 × 10−6. From the 1000GP-imputed variants, we excluded variants with imputation INFO score < 0.8, or allele frequencies < 0.005 or > 0.995, based on previous results 17. Heritability analysis was done using GCTA 39 (v1.26.0, v1.92.3beta), assuming a lifetime risk of schizophrenia of 1%. We calculated principal components from 1,189,077 HapMap3 37 SNPs selected from the WGS data and included the first 10 PCs (calculated from the same set of HapMap3 SNPs) in the analyses conducted with GCTA's GREML-LDMS 40. To test the robustness of the estimates, we repeated the analysis correcting for the first 4 PCs and for the first 12 PCs, and found similar results. With the GREML-LDMS approach, a total of 14 MAF and LD bins were considered, and the same set of bins was used for both the imputed SNPs and the WGS sequence variants. Specifically, we split the variants into seven bins based on MAF (0.0007–0.001, 0.001–0.01, 0.01–0.1, 0.1–0.2, 0.2–0.3, 0.3–0.4, 0.4–0.5) and, for each bin, computed SNP-based LD scores with the following parameters: --ld-score-region 200, --ld-wind 10000, --ld-rsq-cutoff 0. Within each MAF bin, we defined low LD as below the median LD score and high LD as at or above the median. For each bin defined by MAF (and further split by LD), we used GCTA to produce a genetic relationship matrix (GRM) from the set of genotypes. We then used the REML function (via the Fisher scoring algorithm, as implemented in GCTA via --reml-alg 1) to conduct the GREML-LDMS analysis.

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

Summary statistics from the single-variant association analysis can be downloaded from the Psychiatric Genomics Consortium's website. All other summary statistics and supporting data are available in the Supplementary Information. Due to recent changes in Swedish and European Union regulations regarding genetic data, we are unable to deposit individual-level data into controlled-access repositories such as dbGaP. Collaborative analyses are possible and can be pursued by contacting the authors.

Code availability

Analysis software used in this study: HiSeq Control Software 3.3.39/RTA 2.7.1; Piper (v1.4.0); bwa (v0.7.12); SAMtools (v0.1.19); Picard (v1.120); qualimap (v2.2); FastQC (v0.11.4); BEDTools (v2.28.0); GATK (v3.3); PLINK (v1.9); PLINK (v1.07); ExpansionHunter (v2.5.5); Delly (v0.7.7); MELT (v2); VEP (v91); vcfanno (v0.2.9); AnnotSV (v1.1.1); VCFscreen (v0.1); R (v3.2.2); R/gap; GCTA (v1.26.0, v1.92.3beta); JMP (v11); AbCD Calculator. The Python code "concordance.py" and other relevant code are publicly posted.
Most research into the genetics of schizophrenia has sought to understand the role that genes play in the development and heritability of the disease. Many discoveries have been made, but many pieces have been missing. Now, UNC School of Medicine scientists have conducted the largest-ever whole genome sequencing study of schizophrenia to provide a more complete picture of the role the human genome plays in this disease. Published in Nature Communications, the study, co-led by senior author Jin Szatkiewicz, Ph.D., associate professor in the UNC Department of Genetics, suggests that rare structural genetic variants could play a role in schizophrenia. "Our results suggest that ultra-rare structural variants that affect the boundaries of a specific genome structure increase risk for schizophrenia," Szatkiewicz said. "Alterations in these boundaries may lead to dysregulation of gene expression, and we think future mechanistic studies could determine the precise functional effects these variants have on biology." Previous studies of the genetics of schizophrenia have primarily involved common genetic variations known as SNPs (alterations in common genetic sequences, each affecting a single nucleotide), rare variations in the parts of DNA that provide instructions for making proteins, or very large structural variations (alterations affecting a few hundred thousand nucleotides). These studies give snapshots of the genome, leaving a large portion of it a mystery as it potentially relates to schizophrenia. In the Nature Communications study, Szatkiewicz and colleagues examined the entire genome, using a method called whole genome sequencing (WGS). The primary reason WGS hasn't been more widely used is that it is very expensive. For this study, an international collaboration pooled funding from National Institute of Mental Health grants and matching funds from Sweden's SciLife Labs to conduct deep whole genome sequencing of 1,165 people with schizophrenia and 1,000 controls—the largest known WGS study of schizophrenia to date. As a result, new discoveries were made, including mutations in DNA that scientists had never before seen in schizophrenia. In particular, the study highlighted the role that a three-dimensional genome structure known as topologically associated domains (TADs) could play in the development of schizophrenia. TADs are distinct regions of the genome with strict boundaries between them that keep the domains from interacting with genetic material in neighboring TADs. Shifting or breaking these boundaries allows interactions between genes and regulatory elements that normally would not interact. When these interactions occur, gene expression may be changed in undesirable ways that can result in congenital defects, the formation of cancers, and developmental disorders. This study found that extremely rare structural variants affecting TAD boundaries in the brain occur significantly more often in people with schizophrenia than in those without it. Structural variants are large mutations that may involve missing or duplicated genetic sequences, or sequences that are not in the typical genome. This finding suggests that misplaced or missing TAD boundaries may also contribute to the development of schizophrenia; the study was the first to discover a connection between anomalies in TADs and the development of schizophrenia.
This work has highlighted TAD-affecting structural variants as prime candidates for future mechanistic studies of the biology of schizophrenia. "A possible future investigation would be to work with patient-derived cells with these TADs-affecting mutations and figure out what exactly happened at the molecular level," said Szatkiewicz, an adjunct assistant professor of psychiatry at UNC. "In the future, we could use this information about the TAD effects to help develop drugs or precision medicine treatments that could repair disrupted TADs or affected gene expressions which may improve patient outcomes." This study will be combined with other WGS studies to increase the sample size and further confirm these results, and the research will help the scientific community build on the unfolding genetic mysteries of schizophrenia.
10.1038/s41467-020-15707-w
Other
Divided parties rarely win presidential elections, study shows
Paul-Henri Gurian et al. National Party Division and Divisive State Primaries in U.S. Presidential Elections, 1948–2012, Political Behavior (2016). DOI: 10.1007/s11109-016-9332-1 Journal information: Political Behavior
http://dx.doi.org/10.1007/s11109-016-9332-1
https://phys.org/news/2016-03-parties-rarely-presidential-elections.html
Abstract

In presidential nomination campaigns, individual state primaries and a national competition take place simultaneously. The relationship between divisive state primaries and general election outcomes is substantially different in presidential campaigns than in single-state campaigns. To capture the full impact of divisiveness in presidential campaigns, one must estimate both the impact of national party division (NPD) and the impact of divisive primaries in individual states. To do so, we develop a comprehensive model of state outcomes in presidential campaigns that incorporates both state-level and national-level controls. We also examine and compare several measures of NPD and several measures of divisive state primaries found in previous research. We find that both NPD and divisive state primaries have independent and significant influence on state-level general election outcomes, with the former having a greater and more widespread impact on the national results. The findings are not artifacts of statistical techniques, timeframes or operational definitions. The results are consistent—varying very little across a wide range of methods and specifications.

Introduction

The divisive primary hypothesis, first suggested by Key ( 1953 ), posits that when a party's primary is competitive or the eventual nominee does poorly in the primary, the party suffers in the general election. However, in presidential elections, measuring the impact of divisiveness is complicated by the fact that campaigns are waged at both the state and national levels. As a consequence, the relationship between divisive state primaries and general election outcomes is substantially different in presidential campaigns than in subnational campaigns. Substantial research exists on the impact of divisive state primaries; however, this research generally ignores the important distinction between national and subnational elections. Presidential elections, unlike state-level elections, directly involve the national parties. In any state, a divided national party could have a negative impact on the performance of its presidential candidate even if that state's presidential primary was not divisive. Thus we do not know whether national-level or state-level divisiveness exerts greater influence on state-level outcomes in presidential elections, because existing models do not account for national party division (NPD). In a single-state primary, the winner of the popular vote becomes the party nominee. Presidential primaries are part of a larger, more complex environment: in presidential campaigns, individual state primaries do not determine the identity of the nominee. Rather, they select (or apportion) delegates to the national convention, who then select the party nominee. It is common in the literature to use the term "divisive presidential primary" either to describe the divisiveness of an individual state primary or to describe the divisiveness of the national party during the nomination process. There has long been concern that the presidential nomination process undermines party cohesion and encourages intraparty factionalism. When one national party is divided and the other party united, the divided party usually loses the election.
The relative divisiveness of the national parties is a critical component of the national campaign, yet it is included neither in models of state primary divisiveness nor in models of aggregate presidential election outcomes. Excluding NPD from models of presidential election outcomes has the potential to bias the estimate of the impact of divisive state primaries or other variables. To measure the full impact of divisiveness in presidential campaigns, it is necessary to measure both NPD and divisive primaries in individual states. NPD is not simply an aggregation of divisive state primaries: national party divisions are deeper and larger than state party divisions, and a set of divisive state primaries does not necessarily indicate a divided national party in the general election, or vice versa. Footnote 1 Absent the influence of NPD, measuring the impact of divisive state primaries might seem relatively straightforward. However, studies of subnational divisive primaries have reached a confusing variety of conclusions (Lengle and Owen 1996 ). Footnote 2 Measuring the impact of divisive state primaries in presidential campaigns is further complicated by the impact of NPD. In this research, we establish and measure the impact of NPD and that of divisive state primaries (DSP). To these ends, we first develop a comprehensive model of state outcomes in presidential campaigns that incorporates both state-level and national-level controls. As we will explain, there are several ways to define the appropriate timeframe and to specify the model. We test several possible measures of NPD, and several measures of divisive state primaries found in previous research. We show that our findings are not artifacts of statistical techniques, timeframes or operational definitions; the results are consistent, varying only slightly across a wide range of methods and specifications.

Divisive State Primaries in Presidential and Subnational Campaigns

The causes and consequences of divisive presidential primaries are somewhat different from those of divisive subnational primaries. Footnote 3 The literature on divisive congressional and gubernatorial primaries posits a link between a divisive state primary and the general election outcome in that state. Because presidential nomination campaigns are sequential and national in scope, some of what occurs in individual state primaries spills over to other state contests. Footnote 4 Previous studies (Hacker 1965 ; Kenney and Rice 1987 ; Atkeson 1998 ; Lazarus 2005 ; Southwell 1986 ) have suggested that a divisive subnational primary decreases that party's vote because (a) supporters of the losing candidate are alienated or discouraged, (b) the primary battle provides rhetorical "ammunition" for the opposing party, or (c) the state party's resources are depleted. Each of these effects manifests differently in presidential campaigns than in subnational campaigns. In a congressional or gubernatorial campaign, a competitive primary may divide the state party and deplete its resources, hurting its ability to compete in the general election. In a presidential campaign, a few divisive state primaries would neither divide the national party nor deplete its resources. Presidential candidates allocate resources to states based on their strategic importance; if state party resources are depleted in a battleground state, the national campaign will pump money into that state.
Footnote 5 In a presidential campaign, because of national media coverage, supporters of losing candidates may be alienated even if there was not a divisive primary in their own state. Rhetorical attacks made by intraparty rivals in a few primaries may be co-opted and disseminated nationally, influencing subsequent national media coverage of that candidate. More generally, the negative image of an internally divided national party may be a potent cue to general election voters. In presidential campaigns, some of the influences on general election outcomes derive not from divisive primaries in specific states but from a divided national party. An incumbent president may be challenged within his/her own party during the primaries, but the challenge is national, not restricted to a specific state. Both incumbent and challenger choose the state primaries in which they will compete vigorously, a decision based on national-level as well as state-level factors. Similarly, both frontrunners and challengers in non-incumbent nomination campaigns run in particular states to bolster their chances of winning the national nomination. Differences in the context and causes of divisiveness in presidential and subnational primaries help explain why various studies have come to such different conclusions—analyzing presidential and subnational primaries separately leads to clearer and more meaningful results. In this research, we focus exclusively on presidential primaries and their impact on state-level presidential general election results.

Developing a Model of State Vote Outcomes

We develop a comprehensive model of state vote outcomes to estimate the impact of NPD, and we test a variety of operationalizations of divisive state primaries to determine the extent to which they influence the estimated effects. Examining the literature on divisive presidential primaries, we find that virtually all models are under-specified. Typically, they include few state-level controls and few, if any, national-level controls. This allows the possibility of excluded-variable bias—that the estimated impact of divisive state primaries includes some of the impact of excluded variables, typically inflating and biasing that estimate. Our model of state general election outcomes takes into account a wide set of national-level and state-level variables that generally correspond to the factors that influence individual voting behavior. The use of state-level data, including partisanship and ideology, should provide strong controls with which to measure the impact of NPD and to re-examine the divisive primary hypothesis.

The Key Variables

The dependent variable used here is the proportion of the major-party state vote won by the Democratic Party in the general election. Footnote 6 To measure NPD, we use the proportion of delegate votes received by the Democratic nominee (on the first ballot) at the convention minus the corresponding proportion for the Republican nominee as a measure of relative NPD. Below, we show that this measure, though not ideal, leads to results that are substantively the same as those obtained using very different measures of NPD, such as the difference in aggregate popular vote. To measure divisive state primaries, we use the proportion of the state primary vote received by the eventual Democratic nominee minus the corresponding proportion received by the eventual Republican nominee (Kenney and Rice 1987 ).
Below, we test a variety of possible measures of divisive state primaries to determine the degree to which the operationalization of this key variable influences the results.

State-Level Variables

Some previous studies of divisive primaries have controlled for state-level effects by including one or more previous presidential election results (Mayer 1996 ; Atkeson 1998 ). In this study, state-level effects are accounted for by controlling for state partisanship, state ideology, and the home states of the presidential and vice presidential nominees. Rabinowitz et al. ( 1984 ) analyzed the vote outcomes of the states and found that presidential elections are structured by party and ideology (Jackson and Carsey 1999 ; Erikson et al. 1993 ). Research at the individual and state levels shows that partisanship exerts substantial influence in presidential elections. State partisanship is measured here as the average of the most recent statewide votes for Governor, Senator, and U.S. House. Previous presidential vote is not included, since it might reflect national factors involving previous presidential politics rather than underlying state-level partisanship. In this model, state partisanship is not fixed; its values often change to some degree from election to election. One might argue that previous voting in congressional and gubernatorial elections is not a good measure of partisanship for the southern states until recent decades. In their analysis, Rabinowitz et al. ( 1984 ) found that conservative Democratic states such as Alabama and South Carolina tended to cluster in a different part of the factor space than either liberal Democratic states or conservative Republican states. Thus, we include two measures of state ideology: a general left–right scale, and a scale involving civil rights and social issues. Footnote 7 These controls should capture changing state-level effects. Footnote 8 Although the general ideology measure fails to capture some salient issues, it does reflect many issue-oriented differences across state populations. Civil rights issues (integration, voting rights, affirmative action, etc.) and social issues (abortion, gay rights, gun control, etc.) have been powerful over many elections and, critically for this study, their impact has been regional, affecting states differently (McCarty et al. 2006 ; Zaller 1992 ). Both ideology variables are measured using mean DW-NOMINATE roll-call scores for the U.S. House delegation in each state in the term prior to the presidential election (Poole and Rosenthal 1997 ). The expectation is that the civil rights/social issues variable will have a negative effect on voting for the Democratic Party in elections since the 1970s. Candidate evaluation is difficult to measure at the state level: polls, which could provide such information, are rarely consistent in format and rarely available for every state. Thus we are limited to controls for the home states of the presidential and vice presidential candidates and for the presidential candidate's home region. Presidential candidates tend to do better in their home region than elsewhere, and both presidential and vice presidential candidates tend to do well in their home states. Each of these variables can take on values of −1, 0 or +1 (e.g., the Republican candidate's home state has a value of −1), although 0 is by far the most common. Home region is measured as all states adjacent to a candidate's home state (Holbrook 1991 ).
National-Level Variables

In the literature on forecasting presidential elections (e.g., Campbell 2001 ; Bartels and Zaller 2001 ), there is general agreement that the national economy has a powerful impact on the national popular vote. In the current research, national economic conditions are operationalized as the annual change in real disposable income (RDI). Since the dependent variable is Democratic vote share, this variable is multiplied by −1 when the incumbent president is Republican; thus, credit or blame for the economy is directed at the incumbent party. Footnote 9 Several studies suggest that the apparent effect of divisiveness may be spurious (Jacobson and Kernell 1981 ; Kenney 1988 ; Atkeson 1998 ). It is quite possible that an unpopular incumbent would attract more or stronger challengers; similarly, a popular incumbent might "scare off" strong challengers. Thus the apparent relationship between divisiveness and vote outcomes may be an artifact of spuriousness—both divisiveness and vote outcomes are strongly influenced by the strength of the incumbent. This argument is supported, directly or indirectly, by numerous studies of subnational primaries (e.g., Hacker 1965 ; Partin 2002 ; Lazarus 2005 ). We address this concern in two ways. At the presidential level, an unpopular incumbent may encourage intraparty challenges, which may exacerbate existing regional or ideological divisions within the party. However, unlike most subnational election campaigns, the out-party typically begins with 5–9 legitimate candidates (e.g., senators and governors) for the nomination, whether the incumbent is popular or not. Historically, the number and "quality" of nomination candidates in the out-party (or in open-seat presidential campaigns) appears unrelated to the quality of the opposing party's candidate. Footnote 10 Ford (in 1976) and Carter (in 1980) did attract strong intraparty opponents, but such cases are rare, and popular incumbents like Nixon and Reagan were nonetheless challenged by relatively strong fields of opponents. We also address this concern statistically. If the strength of the incumbent president were causing a spurious relationship to appear causal, including such a variable in the model would cause the parameter estimates of divisiveness to diminish or lose statistical significance. It is difficult to measure candidate quality directly, but we can do so indirectly by controlling for national economic conditions (which, except in 2008, rarely change much during the election year) and presidential approval, measured as the Gallup approval rating in January of the election year, before the primaries begin (Atkeson 1998 ). Footnote 11 These variables reflect the perceived quality of the incumbent-party candidate; indeed, they are the two most common factors in presidential election forecasts. Each party is a coalition of diverse elements, and the longer a party holds power, the more likely party fissures are to develop (Campbell 2000 ). Thus, like the forecast models, we include a variable ("terms") that controls for the length of time a party has held the White House: 0 if the incumbent party has been in office for only one term, and 1 if it has been in office for two or more consecutive terms. We include a separate dummy variable indicating whether or not the incumbent president is running. Typically, the major parties nominate relatively centrist candidates, but occasionally one party nominates a relative extremist.
In general, candidates who are perceived as more ideologically extreme are disadvantaged in presidential elections. Bartels and Zaller ( 2001 ) combined expert ratings of candidates for 1948–1980 (Rosenstone 1983 ) with NES data for 1984–1996; we extend this measure through 2012. Higher absolute values indicate greater relative extremism. In addition, we include a control variable for the impact of war. Wars have the capacity to divide parties and to affect the decisions of candidates and voters. This variable is measured as the number of combat fatalities as a proportion of the national population; increased fatalities are expected to disadvantage the incumbent party. War is measured as a national, not a state, variable because voters in all states receive national news of foreign affairs, and because variations across elections are greater than those across states (Table 1). Footnote 12

Table 1: Descriptive statistics, 1948–2012.

Data and Methods

Presidential elections are not singular national elections, as they are often treated, but 51 separate contests. The influence of state factors is not included in most national studies. The model developed here measures the effects of NPD and divisive state primaries on presidential general election outcomes across space (states) and over time (elections). A pooled time series allows both state-level and national-level effects to be tested concurrently. The model is applied to the set of presidential elections from 1948 through 2012. This study melds several research streams, including those on divisive primaries and general election forecasting. The model used in this research differs from the national forecasting models in several ways. First, the purpose is explanation, not prediction; we do not use previous presidential vote outcomes or national trial-heat polls, since they do not add to the explanatory power of the model. Second, variables representing "special circumstances" such as Watergate or a Catholic candidate are not included; the model includes only factors that occur regularly in presidential elections. Third, the unit of analysis is the state rather than the nation. This study is not the first to develop a model of state-level presidential voting; Gelman and King ( 1993 ), Campbell ( 1992 ), Holbrook ( 1991 ), and Rosenstone ( 1983 ) have provided useful guidance. A time-series cross-sectional design should have enough power to support generalizations about the relationship between nomination campaigns and general elections. Because multiple units are observed over time, we need to control for the election-year context (Stimson 1985 ). The substantive relevance of the election year is highlighted by Atkeson ( 1998 ). It is preferable to model national effects explicitly with actual national-level variables rather than leaving those effects in the "black box" of an election-year dummy. Because election-year dummies would be perfectly collinear with the national variables that change over time but do not vary across states, random effects for time are employed; fixed effects are used to capture state-level heterogeneity. More formally, the standard two-way error-component panel data model (Baltagi 2005 ) is given by

$$Y_{it} = a + X_{it}\beta + u_{it}, \qquad u_{it} = \mu_{i} + \lambda_{t} + v_{it},$$

where i indexes units (states) and t indexes time (elections).
Thus $u_{it}$ is a compound error term with a unit-specific component $\mu_{i}$, an election-specific component $\lambda_{t}$, and an observation-specific component $v_{it}$. OLS estimates of this model fail to account for both unit-specific and time-specific unobserved effects, which leads to incorrect standard errors (and potentially biased estimates if either unit- or time-specific effects are correlated with the independent variables). The model is therefore estimated via maximum likelihood, using fixed effects for states (to explicitly model unit-specific effects) and random effects for time (since fixed effects for time would prohibit the estimation of coefficients on variables, like NPD, that vary over time but not across states). A Wald test shows that the state-specific effects are jointly significant ( p < .001), and a likelihood ratio test against a model without random effects for time shows that the unrestricted model fits the data better than the restricted model ( p < .001). Footnote 13
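As a sketch of this estimation strategy (not the authors' code), the model can be fitted in R with lme4, using state dummies for the fixed unit effects and a random intercept for election year; the data frame and variable names below are hypothetical, and "economy + partisanship + ideology" stands in for the full set of controls.

library(lme4)

# Fixed effects for states, random effects for election years, fitted by ML
fit <- lmer(dem_share ~ npd + dsp + economy + partisanship + ideology +
              factor(state) + (1 | year),
            data = elections, REML = FALSE)
summary(fit)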
Contested nominations are the norm in presidential campaigns; competition for the nomination does not indicate a divided party. A strong, diverse field of candidates can exacerbate existing divisions; however, the dynamics of the system are such that one candidate could quickly capture the nomination. Footnote 17 When a nomination campaign is divisive, the nominee and party elites attempt to reunite the party. They will not be able to erase years of ideological, regional or demographic differences, but they may be able to persuade disparate factions to work together temporarily to help the party win the presidency. The appearance of unity or division at the convention can influence undecided voters who are just beginning to focus on the campaign (see Holbrook 1996 ). Footnote 18 Measuring National Party Division Measuring NPD presents multiple difficulties. Because this variable is central to our research, several measures were tested. Through the 1970s, delegate votes at the national conventions provided a rough measure of divisiveness. Thus NPD could be measured as the proportion of convention delegate votes received by the Democratic nominee (on the first ballot) minus the corresponding proportion for the Republican nominee. This is a reasonable, though not ideal, measure of NPD, at least through the 1970s. However, beginning in the 1980s, party elites have “stage-managed” convention votes (CV), perhaps in part to present the national television audience with the appearance of party unity. A delegate-based measure mainly taps the behavior of party activists, chosen during the presidential campaign by fellow partisan voters. The national conventions are typically the most influential events of the entire campaign (Holbrook 1996 ). They occur when many voters, especially independents and weak partisans, are beginning to focus on the two parties and their nominees. Most nominees receive overwhelming support on the convention ballot; the exceptions occur when the national party is severely divided. As an alternative measure, one could compare the proportion of the national primary vote won by the two nominees (aggregate primary vote, or APV). This measure is more appropriate to the post-reform period than to earlier elections. Footnote 19 Now, voting in a primary is consequential; delegates to the convention are allocated based on votes in primaries. Before 1972, there were few primaries, and voting in primaries bore little if any relationship to the choice of the parties’ nominees. Neither CV nor APV is ideal. Thus we considered, tested, but eventually rejected, several other possible measures, including the nominee’s New Hampshire primary vote (Norpoth 2001 ), the proportion of early-deciding partisans, and several variations of CV and APV (see online appendix). In the analyses to follow we estimate the impact of NPD using both CV and APV to demonstrate that our substantive conclusions about the impact of NPD do not depend on how the variable is measured. Measuring Divisive State Primaries There is no consensus on how best to operationalize state-level primary divisiveness. Different ways of operationalizing the concept might account for the differing results seen in previous studies. In presidential campaigns, a divisive state primary can be thought of as one in which the state primary electorate generally prefers a candidate (or candidates) other than the eventual nominee. This implies that there is a large pool of voters who may be inclined to abstain or defect.
This concept of divisiveness is measured by the proportion of the vote for candidates other than the eventual nominee (Born 1981 ). Kenney and Rice ( 1987 ) argue that the proportion of the state primary vote received by the eventual Democratic nominee minus the corresponding proportion received by the eventual Republican nominee is the best measure of state primary divisiveness (also see Atkeson 1998 ). This approach seems advantageous since it accounts for the relative divisiveness of the two parties’ state primaries. The major alternative approach focuses on the competitiveness of the primary. A close, hard-fought primary may lead some voters to harbor intense negative feelings about the eventual nominee. This concept of divisiveness is measured by the vote margin between the two leading candidates in the primary (Lengle et al. 1995 ), sometimes operationalized as a dummy variable (e.g., less than 20 % vote margin). Although both approaches measure aspects of divisiveness that could influence the general election outcome, they relate to substantively different phenomena. Consider a state primary in which the eventual nominee comes in second with only 30 % of the vote while the winner of that primary receives 65 % (the remaining votes distributed among other candidates). Such a primary would be considered highly divisive by the former measure (support for other candidates) but relatively non-divisive by the latter (margin of victory). Several measures used in previous research were tested. The analyses will indicate the kind of “divisiveness” that leads to diminished performance in the general election. Footnote 20 Results The main results of the analyses are shown in Table 2 . They indicate that both NPD and divisive state primaries are statistically significant at the .01 level (whether NPD is measured using CV or APV) and exert a potentially meaningful (i.e., non-trivial) impact on election results. The parameter estimates of the control variables vary, largely because of the different time periods involved. As expected, the parameter estimates of the national economy, state partisanship and state ideology are all statistically significant and in the expected direction. Table 2 Impact of national party division and divisive state primaries on state vote outcomes The results indicate that the impact of divisive state primaries is limited, while the impact of NPD can be substantial. For example (using CV as the measure of NPD), if in a certain state one party’s primary is divisive, with its eventual nominee receiving only 50 % of the state primary vote, and the other party’s primary is non-divisive with its nominee receiving 90 %, the former would lose only 1.12 % in that state in the general election. In comparison, smaller differences in NPD lead to greater differences in the national outcome. If the nominee of one party receives 70 % of the vote at his/her national convention while the nominee of the other party receives 90 %, then the former would lose 2.43 % in the national popular vote. During the 1948–2012 period, NPD ranged from –46.2 to +35.3 (negative values indicate greater division in the Democratic Party). Considering this range, the coefficient of .121 indicates that the effect of relative NPD on the Democratic popular vote varied from −5.6 % to +4.3 %. The mean absolute value of DSP for all states is 19.18 %. The mean absolute shift in state vote outcome caused by DSP is .52 %, and the maximum is 2.7 %.
The mean absolute shift in state vote outcomes caused by NPD is 2.9 %, and the maximum shift is 5.59 %. Assuming all the DSP scores in an election have the same sign (which is plausible though unlikely) and setting both DSP and NPD at their (absolute) means, the mean national shift caused by NPD is more than five times the mean national shift caused by DSP. Footnote 21 The Dependent Variable A critical concern is the operationalization of NPD. We argue above that relative CV (per cent of first ballot votes for the Democratic nominee minus the corresponding percentage for the Republican nominee) is a suitable measure of NPD. We note however that since the 1980s, conventions and convention voting have become increasingly stage-managed. As indicated above, APV is well suited to the post-reform era (from 1972 on). Thus we employ the basic model, as described above, in two ways: one using CV for the entire post-WW II period (1948–2012), and the other using APV for the post-reform period (1972–2012). Table 2 shows the results of the two models, identical except for the measure of NPD: CV in one, APV in the other. Because these variables are measured on different scales, we do not expect the coefficients of CV and APV to be similar. We do however expect, if both are reasonable measures of NPD, that the coefficient of divisive state primaries will not be affected much by which measure of NPD is used. Indeed, this is what we find: the coefficients of NPD differ according to their scale of measurement (.121 using CV 1948–2012, .237 using APV 1972–2012) but both are statistically significant at the .01 level; the coefficients of divisive state primaries are strikingly similar (.0279 using CV, .0258 using APV; both significant at the .01 level). Measured as the proportion of CVs received by the Democratic nominee minus the corresponding number for the Republican nominee (CV), NPD indicates that, for example, if one nominee receives 90 % of the delegates while the other receives 65 %, the latter would lose 3.03 % in the general election. The impact of NPD was at least 3.29 % in 9 of the 16 elections. The coefficient for NPD measured as relative APV (taking into account the difference in the unit of measure) is larger than when measured as relative CV, suggesting that the impact of NPD may be greater than estimated using CV. The similarity between the results using CV and APV provides evidence that the results are not very sensitive to the way that NPD is operationalized. Except as noted, the analyses to follow will use CV as the measure of NPD. Tests for Robustness A number of sensitivity tests were performed. To test for possible realignment effects and/or the McGovern–Fraser reforms, the model was estimated with dummies for 1972 or later and for 1980 or later, as well as interactions with these variables. Both January and July presidential approval ratings were tested. The war variable was tested both as a state-level and as a national-level effect. Third party votes were incorporated in several ways. Changes in coefficients (using either CV or APV) are negligible when these variables are included. To make sure that no one election was driving or distorting the results, the model was estimated repeatedly, each time excluding one election. The results are very consistent, varying slightly across a wide range of statistical methods, model specifications, and measurements.
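A minimal sketch of this leave-one-election-out check, reusing the hypothetical mixed-effects setup from the earlier sketch (file and column names are again illustrative stand-ins):

```python
# Leave-one-election-out robustness check: refit the model 16 times,
# each time dropping one election year, and track the key coefficients.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("state_outcomes_1948_2012.csv")  # hypothetical input file

for held_out in sorted(df["year"].unique()):
    subset = df[df["year"] != held_out]
    res = smf.mixedlm("dem_share ~ npd + dsp + rdi_growth + C(state)",
                      data=subset, groups="year").fit(reml=False)
    # Stable npd/dsp coefficients across iterations would indicate that no
    # single election is driving the results.
    print(held_out, round(res.params["npd"], 4), round(res.params["dsp"], 4))
```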
None of these checks (possible realignment effects, the timing of presidential approval ratings, the way war was incorporated into the model, or the exclusion of any one election) led to results that failed to confirm the hypotheses. As shown in Table A-1 (online appendix), the parameter estimate for divisive state primaries is similar if caucuses are excluded, though the parameter estimate for NPD is about 20 % lower; the substantive conclusions are unchanged. The parameter estimates are very similar whether or not 1968 is included. Nonetheless, for the reasons discussed above, in footnote 14, and the online appendix, we decided that the 1968 data are not appropriate to this research. The model was also estimated with a two-way random effects model, a GLS estimator for panel data with AR(1) serial correlation (Baltagi and Wu 1999 ), Panel Corrected Standard Errors (both with and without a lagged dependent variable), and a mixed model that adds a random effect on a third party variable, effectively allowing it to vary across elections (while maintaining fixed effects for states). The results are robust to the use of these alternative statistical techniques. The size of the coefficients varies to some extent, but the substantive message is the same: NPD (using either the CV measure or the APV measure) has large and statistically significant effects on election outcomes and divisive state primaries have small but significant effects (Table A-1, online). With few exceptions, the results are very similar across varying specifications. The coefficient of NPD was very stable, and significant at the .01 level. (More precisely, the coefficients using CV as the measure of NPD are very similar to one another; the same is true of the coefficients using APV as the measure of NPD.) The coefficient of divisive state primaries, measured in terms of support for the eventual nominees, was extremely stable and significant at the .05 level, two-tailed. (As discussed below, when measured by margin of victory, the divisive state primaries variable is not statistically significant.) The results of these tests indicate that our findings are robust: both NPD and state primary divisiveness significantly influence state-level presidential outcomes. Footnote 22 Sensitivity to the Operationalization of DSP As discussed above, several measures have been used in previous research to represent divisive state primaries. The two main approaches are to define a divisive state primary as (a) one in which the eventual nominee does poorly, and (b) one in which the victory margin is small. Each can be represented either as two variables (one for each party) or as one variable (the difference between the two parties in that state). Furthermore, margin of victory can be represented either as a continuous or as a dummy variable (e.g., using a 20 % cutoff). As shown in Table 3 , the results indicate that operationalizing divisive state primaries in terms of the eventual nominee’s performance is more reliable than operationalizing it in terms of victory margin. Footnote 23 Indeed, both of the former are statistically significant while none of the latter are.
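To make the competing operationalizations concrete, here is a minimal sketch; the function names are illustrative, and the 20 % dummy cutoff follows the variant described above rather than any authoritative coding rule:

```python
# The two families of state-primary divisiveness (DSP) measures tested in
# Table 3. Vote shares are percentages of a state primary vote.

def dsp_support(nominee_share: float) -> float:
    """Born (1981)-style measure: share of the primary vote cast for
    candidates other than the eventual nominee."""
    return 100.0 - nominee_share

def dsp_relative(dem_nominee_share: float, rep_nominee_share: float) -> float:
    """Kenney and Rice (1987)-style relative measure: Dem nominee's share
    minus the Rep nominee's share in that state's primaries."""
    return dem_nominee_share - rep_nominee_share

def dsp_margin(winner_share: float, runner_up_share: float,
               cutoff: float = 20.0):
    """Competitiveness measure: margin between the two leading candidates,
    plus the dummy coding (1 = divisive) used in some studies."""
    margin = winner_share - runner_up_share
    return margin, int(margin < cutoff)

# The hypothetical primary discussed earlier: the eventual nominee finishes
# second with 30 % while the primary winner takes 65 %.
print(dsp_support(30.0))       # 70.0 -> highly divisive by the support measure
print(dsp_margin(65.0, 30.0))  # (35.0, 0) -> non-divisive by the margin measure
```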
This pattern suggests that diminished performance in the general election occurs because there is a large pool of voters who did not support the eventual nominee in the state primary, rather than because some voters, reacting to a close and competitive primary, evaluate the nominee negatively. (Note that the coefficient of NPD is very stable and highly significant regardless of the way that divisive state primaries are measured.) Table 3 Results of tests of alternative measures of divisive state primaries (DSP) Although the impact of a divisive state primary (measured in terms of the nominees’ relative performance) is statistically significant, its national impact is relatively minor. (State party divisions are nonetheless quite important to state party leaders, who generally have little or no effect on NPD.) Even in the most extreme possible case, where one party’s nominee received 100 % of the state primary vote while his opponent received 0 %, the impact on the general election vote in that state is only 2.79 % (using CV; 2.58 % using APV). State partisanship and state ideology have a greater impact than divisive state primaries (using standardized coefficients for comparative purposes). This confirms the expectation that divisive state primaries have a small and usually inconsequential effect on electoral outcomes. On the other hand, NPD is one of the more influential variables. These results support the hypothesis that NPD potentially has a substantial negative effect on electoral outcomes. Footnote 24 A divisive state primary leads to a maximum possible decrease in a state’s general election vote of 2.8 %, while a divided national party more often than not leads to a decrease of more than 3.2 %. The impact of divisive state primaries is limited to a subset of states while the impact of NPD is not. Taken together, these results are consistent with the thesis that the overall negative impact of NPD is greater than that of divisive state primaries. Even in a close election, it is unlikely that divisive state primaries would make the difference in terms of who wins the Electoral College, though NPD may well have such an effect. Substantive Impact of Divisive State Primaries and NPD To provide an overview of the potential substantive effects of DSP and NPD, we calculated the estimated vote in each state in each election, absent the effects of divisive primaries, and absent the effects of NPD. Table 4 shows the number of states (and electoral votes) that likely would have been won by the other party; below we describe the potential effects on each national election. These estimates are intended to be illustrative. There is no way to know which states actually would have switched. Candidate, media and voter behavior would have been different in various ways. The number of states that would switch is partly a function of the closeness of the election. A close national election combined with state or national divisiveness can lead to a number of states “switching”. The table reflects in substantive terms the results of the statistical analysis: the impact of divisive primaries is small and limited while the impact of NPD is greater and more widespread.
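A minimal sketch of the counterfactual calculation behind Table 4 (column names are hypothetical, the winner rule assumes a two-party share, and, as the text cautions, the resulting counts are purely illustrative because real campaigns would have adapted):

```python
# Predict each state's vote with and without the divisiveness terms, then
# count states (and electoral votes) whose predicted winner flips.
import pandas as pd

def switched_states(df: pd.DataFrame, coef: float, var: str) -> pd.DataFrame:
    """States whose predicted two-party winner changes once the estimated
    effect of `var` (npd or dsp) is removed from the Democratic share."""
    adjusted = df["dem_share"] - coef * df[var]       # remove the estimated effect
    flipped = (df["dem_share"] > 50) != (adjusted > 50)
    return df.loc[flipped, ["year", "state", "electoral_votes"]]

df = pd.read_csv("state_outcomes_1948_2012.csv")      # hypothetical input file
print(switched_states(df, 0.121, "npd"))               # NPD counterfactual (CV coefficient)
print(switched_states(df, 0.0279, "dsp"))              # DSP counterfactual
```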
Table 4 Substantive impact of divisive state primaries and national party division (states and electoral votes expected to switch absent divisiveness effects) Based on the results shown in Table 4 , among all 16 elections, only 16 states likely would have switched because of divisive state primaries alone; 95 states would have switched because of NPD alone. In none of the 16 elections do the results indicate that more than two states, or more than 53 electoral votes, would have switched because of divisive state primaries. In comparison, there were six elections in which NPD would likely have switched at least eight states with more than 112 electoral votes. (This illustration uses CVs to measure NPD; these estimates are more modest than those using the APV measure.) In six of the 16 elections, the impact of divisiveness would have substantially changed the results. Without the effects of state primary divisiveness, we estimate that Kerry in 2004 would have won Ohio and thus the presidency, and in 1948 neither candidate would have won a majority of electoral votes. Without the effects of NPD, we estimate that in 2000 Gore would have won Florida and thus the presidency; in 1980, had it not been for the national division between Carter and Kennedy, Carter would have won 17 additional states, putting him within striking distance of Ronald Reagan. Similarly, in 1976 (absent the effects of NPD) Ford would have won New York, Texas, Pennsylvania and five other states leading to a substantial victory over Carter; in 1960, Kennedy would have won 18 additional states leading to a landslide victory over Nixon; and in 1948, Dewey would have won an additional 10 states, verifying the Chicago Tribune headline “Dewey defeats Truman”. Conclusion The relationship between divisive state primaries and general election outcomes is substantially different in presidential campaigns than in subnational campaigns. To appropriately estimate the impact of divisiveness in presidential campaigns, one must measure both the impact of NPD and the impact of divisive primaries in individual states. To this end, we developed a comprehensive model of state outcomes in presidential campaigns and tested several measures of NPD and several measures of divisive state primaries. We find that, in presidential campaigns, both NPD and divisive state primaries significantly influence state-level general election outcomes, with the former having a greater and more widespread impact. In addition, we have demonstrated that the impact of state primary divisiveness is sensitive to how the concept is operationalized. One can conceptualize a divisive state primary as one in which the state primary electorate only weakly supports the eventual nominee, implying that there are many partisans who may abstain or defect. This is measured by the proportion of the vote for candidates other than the eventual nominee. Alternatively, one can conceptualize a divisive state primary as close and competitive, causing some partisans to harbor negative feelings about the eventual nominee. This is measured by the vote margin between the two leading candidates in the primary. These two approaches relate to substantively different phenomena. The analyses indicate that the former leads to diminished performance in the general election (the latter is not statistically significant). The results indicate that the impact of divisive state primaries is limited, while the impact of NPD can be substantial.
A divisive state primary leads to no more than a 2.8 % decrease in the general election in that state. In comparison, NPD more often than not leads to decreases of at least 3.2 % across states. The impact of divisive state primaries is limited to a subset of states while the impact of NPD is not. Taken together, these results confirm the general thesis that the overall negative impact of NPD is greater than that of divisive state primaries. This research demonstrates that NPD is a critical component of divisiveness in presidential campaigns, but one that generally is not included in previous research. By incorporating a comprehensive set of appropriate controls, we have reliably estimated the impact of NPD and of divisive presidential primaries. We show that the national component is potentially powerful; the state-level component pales in comparison. Previous analyses of divisive state presidential primaries have measured a minor effect while ignoring the greater effect. Although this research has focused on the relative impact of NPD and divisive state primaries, the analysis also sheds light on the behavior of states in presidential elections. Among other findings, it indicates that election-specific national factors are critical to understanding general election outcomes and that long-term state-level factors such as partisanship and ideology play a major role in state-level electoral behavior. Footnote 25 It is hoped that this study contributes substantially to resolving the controversy over the impact of divisive presidential primaries. It has been shown that a divisive state presidential primary does have a negative effect on the vote outcomes in the general election, although the magnitude of the effect is relatively small, unlikely to change the winner in more than one or two states in a presidential election. Because of the control variables included in the model, it is unlikely that this relationship is an artifact of unpopular incumbents, weak economies, or the partisan or ideological predispositions of state electorates. Furthermore, the results are consistent, varying little across a wide range of methods and model specifications. Having established this, it would be useful and interesting to differentiate between the relative impact of early and late primaries, those with high versus low turnout, and those with few or many active candidates. These and other state-specific factors could cause some divisive state primaries to have greater or lesser impact on general election results than others. That would represent a potentially important avenue for future research. Such research would present a number of measurement problems. Primary turnout is difficult to gauge because the denominator is generally unknown. The three factors are inter-related: greater turnout tends to occur when there are more candidates, which tends to happen early in the primary season. Similarly, once the race has been called, both turnout and the number of candidates diminish. One implication for the parties is clear: in terms of winning the presidency, a divided national party does more damage than a set of divisive state primaries. Competition among candidates does not necessarily hurt a party’s general election chances, but schisms within the party’s base can be truly harmful. The analysis indicates that a divisive state primary will decrease the party’s general election vote in that state, but usually by less than 2 %.
In comparison, a divided national party decreases the party’s vote across states, usually by more than 3 %. Party leaders no doubt understand that NPD leads to negative consequences in the general election. However, in most cases, they need not be concerned about the effects of divisive primaries in individual states. Except in the most pivotal states, such as Florida and Ohio, in a close national election, a decrease in the range of 1–2 % in the popular vote will not influence which party wins the Electoral College. Much can be gained by investigating the causes and consequences of divided national parties. What causes the underlying long-term divisions within the parties? Under which circumstances do presidential nomination campaigns exacerbate such divisions? What can candidates do before and after the convention to unite their party? And what can the parties do between elections to diminish the chances that divisions will intensify during the next campaign cycle? Notes A national party can have a set of divisive state primaries yet remain united (e.g., 2000 Republicans). Alternatively, a party could be divided nationally yet see few divisive state primaries, especially if “divisive” is operationalized as a small victory margin. One candidate could win handily in some regions while losing by large margins in others (e.g., 1976 Republicans). In some elections there are numerous divisive state primaries yet the national party is able to unite during the general election campaign (e.g., 1976 Democrats, 1980 Republicans). In some elections there are few divisive primaries, yet the national party is severely divided at the convention and beyond (e.g., 1964 Republicans). Some state primaries are not even contested since they occur after the nomination has effectively been decided. Studies of subnational divisive primaries have reached a confusing variety of conclusions (Lengle and Owen 1996 ). Several found that such primaries negatively affect general election outcomes (e.g., Bernstein 1977 ), others found mixed effects (Born 1981 ; Kenney and Rice 1984 ), others found little or no effect (Hacker 1965 ; Kenney 1988), and some found a positive effect in the out-party (Westlye 1991 ; Partin 2002 ). Jacobson’s ( 1978 ) work on congressional elections helps to make sense of these results. A congressional or gubernatorial incumbent whose reelection chances are relatively low may be challenged within his or her own party, leading to a (potentially) divisive primary that hurts the incumbent in the general election. On the other hand, challengers typically are not well-known—a primary battle in the out-party brings media attention to the candidates in that party and thus raises their name recognition, a valuable resource in a congressional or gubernatorial race (Westlye 1991 ; Lazarus 2005 ). The nature of single-state primaries in presidential campaigns is dramatically different. Unlike candidates in sub-national primaries, presidential candidates may or may not choose to compete vigorously in certain states. Thus it is possible for a number of presidential state primaries to be non-divisive (if it is clear which candidate is likely to win that state) even though the national campaign may be highly competitive. For example, in 1980 few Democratic state primaries were competitive; most were assumed to be easy victories for one candidate or the other and thus not seriously contested.
Conversely, it is possible for there to be a number of divisive state primaries even though the result of the national campaign is not really in doubt. In 1976, for example, most non-Southern primaries were seriously contested, yet the national Democratic party did not suffer substantial internal divisions and quickly united behind Jimmy Carter once the primaries ended. The divisive primary hypothesis is rooted in cognitive psychology, but there are several behavioral mechanisms that could produce the phenomenon. Voters may rationally use divisiveness as a cue for low candidate quality. One could hypothesize a divisiveness effect without making strong assumptions about voter rationality. For example, voters in Iowa and New Hampshire usually can choose among 5–9 potentially viable candidates; after Iowa and New Hampshire the field typically narrows to 2 or 3 viable candidates because the unsuccessful candidates withdraw. Thus the choices of voters in subsequent states are restricted. State party resources are rarely used during primary battles, whether subnational or presidential; rather, resources come from the individual candidate campaigns. Economic and other national contextual variables are adjusted to account for the party of the incumbent president. After the mid-1970s, the second dimension is best characterized as reflecting “social issues” such as abortion, busing, and gun control (Keith Poole, 2015, interview with the author). As a test for possible realignment effects, the model was re-estimated with a dummy for the post-1968 period; the dummy is not significant and its inclusion barely alters the coefficients. There is some collinearity between the national economy and national party division (r = .67). A weak economy is often associated with divisions within the incumbent party. If the economic variable were excluded from the model, it would bias the coefficient of national party division, probably by artificially inflating the estimated impact of that variable. For example, there were roughly as many out-party candidates in the primaries opposing popular incumbents such as Reagan and Bill Clinton as there were opposing unpopular incumbents such as Ford and Carter. Similarly, the two most popular incumbents running in the past 50 years were Nixon and Reagan; both faced several strong candidates in the other party (Muskie, Humphrey, Wallace and Scoop Jackson in 1972; Mondale and John Glenn in 1984). In-party challenges to an incumbent president are rare. During the 1948–2012 period, nine incumbents faced no serious challenge in the primaries; only two (Ford and Carter) were challenged (though some would not classify Ford as a true incumbent). The case of Johnson in 1968 is open to interpretation—Johnson was challenged but withdrew early (1968 is not included in our dataset). Results are substantively similar if July approval ratings are used. We believe that war, as measured by casualties, is a national-level phenomenon. Certainly there are variations across states during wartime, but we believe that the difference between wartime and peacetime has a greater effect on the electorate than do variations across states. We tested state-level war deaths and found that this variable was not statistically significant. It should be noted however that Karol and Miguel ( 2007 ) found state-level war casualties to be significant in their analysis of the 2004 election.
To account for the possibility of serial correlation, the model was also estimated with Baltagi and Wu’s ( 1999 ) GLS estimator for AR(1) panel data and OLS with a lagged dependent variable and Panel Corrected Standard Errors (Beck and Katz 1996 ); both yield substantively similar results to those presented in the text (see Table A-1, online appendix). A possible problem arises in that the statistical model assumes a continuous and unbounded dependent variable. While the general election outcome is indeed continuous, a proportion is, by definition, bounded. Paolino ( 2001 ) shows that when there are many cases close to the bounds (in this case 0 and 1), there are substantial benefits to using a maximum likelihood model for beta-distributed dependent variables. However, in this dataset there are no cases within .19 of the bounds and only 9 cases (about 1.2 %) within .25 of the bounds. As such, the gains from a beta-distributed dependent variable model would be minimal. Indeed, Paolino’s replication of Atkeson ( 1998 ) uses a similar dependent variable and shows no difference between a model assuming an unbounded dependent variable and the beta-distributed dependent variable model. Since the McGovern–Fraser reforms dramatically changed the nature of nomination campaigns, the model was also applied only to the elections of 1972–2012. Both the divisive state primary measure and the national party division measure in 1968 are anomalous. The nomination phase is unique in that one of the two leading candidates, Robert Kennedy, was assassinated before the convention, thus likely altering the impact of divisive state primaries on general election results. Also, Hubert Humphrey entered no primaries, thus every primary shows up as extremely divisive. The general election results are also anomalous because of the strong performance of a non-centrist third party candidate (see online appendix). Although we decided to exclude 1968 from the analysis, we tested the model with 1968 included. The parameter estimates for national party division and divisive state primaries were essentially unchanged (see online Table A-1). Two cases were excluded because neither the Democratic candidate nor electors pledged to him appeared on the ballot (Mississippi in 1960, Alabama in 1964). One was excluded because it was an extreme outlier (Johnson received less than 13 % in Mississippi in 1964). These outlying cases could bias the parameter estimates (Achen 1982 ). The Democrats lacked an early dominant frontrunner in 1976 and 1992, yet the party was relatively united by convention time. The 1972 and 1984 Democratic campaigns both had dominant early frontrunners, yet the party was divided and lost the general election. Typically we observe five to nine candidates in a presidential nomination contest that does not include an in-party incumbent; some of these campaigns lead to a divided national party; others do not. This research focuses on the potential negative impact of short-term national party division on that year’s general election. It is important to differentiate between preexisting national party division (before the primaries) and national party division when it is most likely to impact general election results (during the primaries, at the convention, and beyond). The existence of long-term underlying division is not sufficient to hurt a party’s general election vote. Rather, the harm becomes manifest when there is intense competition for the party’s nomination and the nominee is unable to unite the party.
Throughout the 1960s and 1970s, there were severe long-term divisions in the Democratic party while the Republican party was much more united. Nonetheless, in 1964 and 1976, the Democrats were mostly united and the Republicans seriously divided. Aggregate primary vote gets around the problem of stage-managed conventions, but it is not a good measure for the pre-reform period. Fifty years ago, less than a third of the states used primaries (rather than caucuses), and many of them were either “delegate primaries”, “favorite son” or “beauty contest” primaries. Nowadays, more than two-thirds of states use primaries, and delegates are generally bound or committed to vote for a particular candidate. Primaries vary in many ways (timing, winner-take-all vs. proportional representation, open vs. closed, number of candidates, turnout, etc.). Although each of these is potentially related to divisiveness and thus reflected in our parameter estimates, we recognize that timing and the number of candidates (and possibly turnout) could cause some divisive state primaries to have greater or lesser impact on general election results than others. We address these concerns in the online appendix. Comparing the consequences of a 1-unit change in DSP with a 1-unit change in NPD may not be a fair comparison. It may be that changing DSP is “easy” while changing NPD is “hard”. In nomination campaigns, party leaders try to unify the national party as soon as possible; at the state level, candidates try to stay active, run in primaries and defeat opponents. It is easy to achieve national party unity when a popular incumbent is running; it is hard to do so when there are several strong candidates representing different factions. It is easier to get high divisiveness scores when one party has selected its nominee and the other has not; it is harder to do so after both parties have selected their nominees (see online appendix). In this research, we seek to show that the full impact of divisiveness in presidential elections involves both state primary divisiveness and national party division. A model of election outcomes that does not include the latter is misspecified; thus the estimate of state primary divisiveness is potentially biased (see Table A-1 online). The coefficients of divisive state primaries measured in terms of support for the eventual nominees and by victory margin are not comparable because they are measured on different scales. Nonetheless, the former are statistically significant while the latter are not. The potential for deleterious effects is greatest when nomination candidates differ ideologically and regionally, as was the case in the 1980 Democratic campaign. In 2008, the Obama–Clinton struggle did not prevent the party from winning the general election. However, we specifically test the effects of state and national party division in 2008. First, two interactive variables were created to see if the impact of divisive state primaries or national party division was different in 2008 than in other elections. Neither was significant, indicating no discernible difference in the impact of divisiveness in 2008. Similarly, estimating the analysis without 2008 produced nearly identical parameter estimates, indicating that the 2008 election results fit the general pattern seen in previous elections. Other researchers reached similar conclusions (Henderson et al. 2010 ; Makse and Sokhey 2010 ; Southwell 2010 ; but see Wichowsky and Niebler 2010 ).
The analysis shows that state general election outcomes are influenced by both long-term and short-term factors and by both national-level and state-level factors. Among the state-level factors, state partisanship and both state ideology variables are statistically significant. A state in which the average previous congressional and gubernatorial Democratic vote was 60 %, for example, would tend to have more than a 2 % higher presidential vote than a state with 50 % previous Democratic vote. The difference in the presidential vote between a very moderate state and a state with the most extreme general ideology score would be approximately 7–8 %. The corresponding civil rights/social issues ideology difference would be 4 %. In addition, a presidential candidate tends to receive about 3 % more in his home state than would otherwise be expected.
Divided political parties rarely win presidential elections, according to a study by political science researchers at the University of Georgia and their co-authors. If the same holds true this year, the Republican Party could be in trouble this presidential general election. The study, which examined national party division in past presidential elections, found that both national party division and divisive state primaries have significant influence on general election outcomes. In this election cycle, the nominee of a divided Republican Party could lose more than 3 percent of the general election vote, compared to what he would have gained if the party were more united. "History shows that when one party is divided and the other party is united, the divided party almost always loses the presidential election," said Paul-Henri Gurian, an associate professor of political science at UGA's School of Public and International Affairs. "Consider, for example, the elections from 1964 through 1984; in each case the divided party lost." The study measures party division during the primaries and indicates how much the more divided party loses in the general election. The study found that divisive state primaries can lead to a 1 to 2 percent decrease in general elections votes in that state. For example, Hillary Clinton received 71 percent of the Democratic vote in the Georgia primary, while Donald Trump received 39 percent of the Republican vote. According to the historical model, a Republican-nominated Trump would lose almost 1 percent of the Georgia vote in the general election because of the divided state primary. National party division has an even greater and more widespread impact on the national results, often leading to decreases of more than 3 percent nationwide. Looking again at the current presidential election cycle, Trump had received 39.5 percent of the total national Republican primary vote as of March 16, while Clinton had received 58.6 percent of the Democratic vote. If these proportions hold for the remainder of the nomination campaign (and if these two candidates win the nominations), then Trump would lose 4.5 percent of the vote in the general election, compared to what he would have received if the national Republican Party was not divided. "In close elections, such as 2000, 2004 and 2012, 4-5 percent could change the outcome in terms of which party wins the presidency," Gurian said. The results of this study provide political analysts with a way to anticipate the impact of each primary and, more importantly, the impact of the total national primary vote on the general election results. Subtracting the percent of the Republican nominee's total popular vote from that of the Democratic nominee and multiplying that by 0.237 indicates how much the Republican nominee is likely to lose in the November election, compared to what would otherwise be expected. The 4.5 percent figure calculated through March 16 can be updated as additional states hold their primaries. (The same can be done for each individual state primary by multiplying by 0.026.)
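That rule of thumb reduces to two one-line calculations; a minimal sketch using the figures quoted above (the 0.237 and 0.026 multipliers are the coefficients reported by the study; all other inputs are the March 16 primary shares cited in the article):

```python
# Back-of-the-envelope forecast described in the article: multiply the gap
# in the nominees' primary vote shares by the study's APV coefficients.
def national_penalty(dem_apv_pct: float, rep_apv_pct: float) -> float:
    """Estimated national general-election loss for the Republican nominee."""
    return 0.237 * (dem_apv_pct - rep_apv_pct)

def state_penalty(dem_primary_pct: float, rep_primary_pct: float) -> float:
    """Estimated loss in a single state from that state's divided primary."""
    return 0.026 * (dem_primary_pct - rep_primary_pct)

print(national_penalty(58.6, 39.5))  # ~4.5 points, the figure quoted as of March 16
print(state_penalty(71.0, 39.0))     # ~0.8 points in Georgia ("almost 1 percent")
```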
10.1007/s11109-016-9332-1
Earth
How a 'shadow zone' traps the world's oldest ocean water
C. de Lavergne et al, Abyssal ocean overturning shaped by seafloor distribution, Nature (2017). DOI: 10.1038/nature24472 Journal information: Nature
http://dx.doi.org/10.1038/nature24472
https://phys.org/news/2017-11-shadow-zone-world-oldest-ocean.html
Abstract The abyssal ocean is broadly characterized by northward flow of the densest waters and southward flow of less-dense waters above them. Understanding what controls the strength and structure of these interhemispheric flows—referred to as the abyssal overturning circulation—is key to quantifying the ocean’s ability to store carbon and heat on timescales exceeding a century. Here we show that, north of 32° S, the depth distribution of the seafloor compels dense southern-origin waters to flow northward below a depth of about 4 kilometres and to return southward predominantly at depths greater than 2.5 kilometres. Unless ventilated from the north, the overlying mid-depths (1 to 2.5 kilometres deep) host comparatively weak mean meridional flow. Backed by analysis of historical radiocarbon measurements, the findings imply that the geometry of the Pacific, Indian and Atlantic basins places a major external constraint on the overturning structure. Main Dense waters originating from the surface at high latitudes make up the overwhelming majority of the ocean volume. Once formed through heat loss and salt gain, they sink to depth and spread across the globe, carrying information about atmosphere–ocean–ice interactions into the slow-paced abyss and contributing to the ocean’s long ‘memory’ of atmospheric conditions 1 . But the memory timescale and climate buffering effect of the deep ocean ultimately depend upon the rate at which these dense waters are removed from deep seas and returned to the surface. Physical controls on the volume and return pathways of dense waters are therefore key to the ocean’s carbon and heat storage capacity and its role in centennial to multi-millennial climate variability 2 , 3 . The cycle of production, modification and consumption of dense water masses is often conceptualized as a meridional overturning circulation composed of two dynamically distinct limbs 4 , 5 ( Fig. 1a ): an abyssal, northward limb that carries the densest Antarctic-sourced waters (Antarctic Bottom Water, AABW) until they upwell into lighter waters of the Indian, Pacific and Atlantic basins; and a shallower, southward limb that carries these lighter deep waters to the Southern Ocean. Because it involves a gradual decrease in the density of AABW, the abyssal branch is considered to be essentially a diabatic circulation. In contrast, the southward flow of overlying deep waters is thought to be predominantly adiabatic, that is, density-preserving 6 , 7 . This dynamical divide is consistent with the two regimes apparent in the deep-ocean density distribution ( Fig. 1a ): north of the Antarctic Circumpolar Current and away from North Atlantic sinking, level density surfaces above depths of about 2.5 km appear to be compatible with an adiabatic arrangement of water masses, whereas the northward descent of abyssal density surfaces signals transformation of AABW as it travels north. The transition between diabatic and adiabatic regimes and the transition from northward to southward mass transport have been linked to the depth profile of basin-averaged mixing rates, and to surface wind forcing over the Southern Ocean 3 , 4 , 5 , 6 , 7 , 8 . Here we show that these two transitions are tied to the depth distribution of the seafloor and are separate from each other. Figure 1: Density surfaces, seafloor areas and the ocean’s overturning. 
Climatologies 41 , 49 of neutral density ( a ) and zonally summed incrop areas (in units of square metres per degree of latitude and per (kilograms per cubic metre)) ( b ) as a function of latitude and pseudo-depth. The pseudo-depth of density surfaces is found by filling each latitude band from the bottom up with ocean grid cells ordered from dense to light. Density is contoured in black every 0.1 kg m −3 for γ ≥ 27.5 kg m −3 . Grey arrows in a give a simplified view of overturning flows. Flows oriented along (or across) density surfaces correspond to adiabatic (or diabatic) transports. This study focuses on the latitude range 32° S–48° N enclosed in white lines. The deep ocean communicates with the surface in two high-latitude regions ( Fig. 1a ): the North Atlantic, where deep waters are formed and exported southward to ventilate the 27.7–28.14 kg m −3 density range 9 , 10 ; and the Southern Ocean, where rising density surfaces allow deep waters to upwell primarily adiabatically 6 , 7 , 11 , 12 until they are converted into denser AABW or lighter intermediate and mode waters 5 . Note that we use neutral density 13 , denoted γ , as a globally consistent density variable and subtract 1,000 kg m −3 from all density values. Away from these two high-latitude regions, dense waters are isolated from surface exchanges: their density transformation and upwelling rely on deep diabatic processes. We henceforth focus on such processes and restrict the analysis to ocean waters deeper than 1 km between 32° S and 48° N. Geometry At depths of 1–2.5 km, ocean topography is dominated by relatively steep continental slopes and accounts for less than 8% of the total seabed area ( Fig. 2a and b ). Deeper, the emergence of flatter ridges decaying onto abyssal plains markedly increases the seafloor area per unit depth, which quadruples between depths of 2.5 km and 4.3 km. Depth layers therefore have unequal access to the seafloor: the quarter of the water volume which resides below 3.5 km occupies three-quarters of the seabed. This inequality is reinforced when considering the seafloor coverage of density layers—that is, layers defined by a fixed density interval—because the thickness of such layers generally increases with depth in the deep ocean ( Figs 1 and 2c and d ). By analogy with surface outcrop areas, the seafloor area that is intersected (covered) by a given density layer is termed the ‘incrop’ area. The relatively narrow 28–28.25 kg m −3 density range takes up over 80% of the ocean floor between 32° S and 48° N, with the lion’s share going to waters of about 28.11 kg m −3 ( Fig. 2c and d ; see also Extended Data Fig. 1 ). Figure 2: Depth and density distributions of seafloor area over 32° S–48° N. a , Seafloor area per unit depth. c , Seafloor area per unit density, termed incrop area. The mean density of geopotential surfaces and the mean depth of density surfaces are indicated on the left y axes of a and c , respectively. b , d , Bottom-up cumulative seafloor area as a function of depth ( b ) or density ( d ). The lower and upper white lines depict respectively the northward–southward and diabatic–adiabatic transition levels tied to the seafloor distribution, as proposed in this work. Spreading ridges and abyssal plains dominate topography deeper than 2.5 km; steep continental slopes dominate at smaller depths.
Northward-flowing AABW dominates waters deeper than 4.3 km (denser than 28.11 kg m −3 ); its southward return as relatively dense Pacific Deep Water (PDW), Indian Deep Water (IDW) or North Atlantic Deep Water (NADW) occurs predominantly at depths greater than 2.5 km (densities greater than 28 kg m −3 ). These simple geometric considerations have important implications for the consumption rate and upwelling pathways of dense waters. Deep-ocean sources of density transformation have long been recognized to be concentrated near the seafloor 14 , 15 , 16 , 17 , 18 , 19 , 20 , where boundary-catalysed turbulence and geothermal heating combine to erode the near-bottom stratification and progressively lighten bottom seawaters. The resulting near-bottom confinement of density loss suggests that deep water masses benefiting from a large seafloor coverage are more likely to be efficiently consumed than those isolated from the bottom. Consistent with the preferential lightening of bottom boundary waters, incrop areas tend to increase along the northward path of AABW and to slowly migrate towards smaller densities ( Fig. 1b ), indicative of a successive removal of incropping density layers and resultant homogenization of AABW 21 . The conjunction between the regime of sloping density surfaces and the presence of large incrop areas ( Figs 1b and 3b ; Extended Data Figs 2 , 3 , 4 ) is also suggestive of the dominant role of boundary transformation. Hence, the clustering of seafloor area around the 4–5.5 km and 28.11 kg m −3 levels may strongly influence the structure of cross-density transports and the associated meridional flows. Figure 3: Pacific seafloor and radiocarbon distributions. a , Zonally summed seafloor areas as a function of latitude and depth. b , Zonally summed incrop areas as a function of latitude and pseudo-depth. c , Along-density zonal mean radiocarbon content (Δ 14 C) as a function of latitude and pseudo-depth. d , Schematic regime transitions, as in Fig. 2b and d . The pseudo-depth of density surfaces is defined as in Fig. 1 . In b and c , density is contoured in black every 0.1 kg m −3 for γ ≥ 27.5 kg m −3 . The lower and upper white curves depict, respectively, the local northward–southward and diabatic–adiabatic transition levels inferred from the incrop area distribution. Specifically, at each latitude y s , we calculate the γ -profile of summed incrop areas north of y s . The northward–southward transition then corresponds to the density of the profile peak, while the diabatic–adiabatic transition is defined as the smallest density at which the incrop profile decreases to 10% of its peak. All panels of this figure include the light-blue region shown in the inset map in a , which hosts the main Pacific abyssal overturning (Methods). The whole Pacific and southeastern Pacific are shown in Extended Data Fig. 2 . Water mass transformation To formally relate overturning flows to incrop areas, we first set out the link between cross-density transport and the vertical profile of diffusive density fluxes. Consider the volume V ( γ ) of waters denser than γ , bounded by the seafloor, the density surface A ( γ ) and latitudes y s < y n ( Fig. 4 ). We define the total geothermal and mixing-driven density fluxes entering V from below and from above as G ( γ ) and F ( γ ), respectively.
For the volume V to remain unchanged, advection across A must balance local geothermal and mixing-driven density tendencies, such that 21 , 22 , 23 : T(γ) = ∂(F + G)/∂γ (1), where T ( γ ) denotes the mass transport through A ( γ ) and is termed the dianeutral transport. Equation (1) states that the transport across a given density layer is proportional to the net geothermal and mixing-induced density change within that layer. Geothermal heating causes only lightening, balanced by dianeutral upwelling ( T > 0, towards lower density). In contrast, mixing may be a density source or sink, requiring downwelling or upwelling, respectively. Figure 4: Sketch of a volume V of waters denser than γ , bounded by the density surface A ( γ ) and latitudes y s and y n . Density fluxes F and G entering V and the dianeutral mass transport T leaving V are also shown. The streamfunctions ψ s ( γ ) and ψ n ( γ ) are defined as the net southward mass transport below A ( γ ) at y s and y n , respectively. As an illustrative example, we show (dotted line) the surface of peak dianeutral upwelling (grey arrow). Mass conservation requires that this density surface corresponds to meridional flow reversal at y s (see velocity arrows on the left) if: (1) the along-density transport at y n is zero, as in the case of the Pacific and Indian basins given y n at their northern end; or (2) the along-density transport at y n is both southward and weak below the peak upwelling level, as we infer to be the case in the western Atlantic given y n = 48° N (Methods). Dianeutral transports are thus controlled by the γ -profile of the total density flux entering successively denser water volumes, ( F + G )( γ ). In turn, we can relate basin-scale dianeutral transports to meridional, along-density flows by realizing that T must equal the zonally integrated meridional mass flux into V . Denoting by the streamfunction ψ s (or ψ n ) the net southward mass transport through the y s (or y n ) bounding latitude section of V ( Fig. 4 ), we have, by continuity: T(γ) = ψ n (γ) − ψ s (γ) (2). In the deep Indian and Pacific oceans, choosing y n at the closed northern end of each basin yields simply T = − ψ s . This entails in particular that the density surface of peak dianeutral upwelling T north of y s defines the boundary between northward and southward flow at y s ( Figs 3d and 4 ). In the abyssal Atlantic, dianeutral upwelling may balance inflow from the south and north, so that the meridional flow reversal may lie at a denser level than the peak upwelling rate. Nonetheless, we find the two levels to match in this basin as well when choosing y n = 48° N ( Fig. 4 , Extended Data Fig. 4 and Methods). Mixing scenarios Figure 5 shows the profile of the mixing-driven density flux F as well as the associated 32° S–48° N dianeutral transports under two idealized scenarios (Methods): scenario S1 has uniform local density fluxes throughout the ocean interior; scenario S2 has bottom-enhanced local density fluxes, with uniform bottom magnitude. The bottom-intensification of scenario S2 density fluxes is specified as an exponential decay from the seafloor with a 500 m e-folding scale, a structure representative of turbulence observations in the abyssal Brazil Basin 24 . Flux magnitudes are chosen so that both peak upwelling rates equal 25 × 10 6 m 3 s −1 , a mid-range estimate of maximum abyssal upwelling 5 , 9 , 10 constrained by velocity measurements at circulation nodes 25 , 26 , 27 , 28 , 29 .
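Equations (1) and (2) lend themselves to a direct numerical illustration. The following minimal sketch builds idealized density-flux profiles (toy magnitudes and shapes, not the paper's scenario estimates) and differentiates them to obtain a dianeutral transport profile:

```python
# Sketch of equation (1): the dianeutral transport T(gamma) is the
# derivative of the total density flux (F + G) with respect to density.
import numpy as np

gamma = np.linspace(27.5, 28.3, 200)             # neutral density grid (kg m-3)
F = -1e9 * np.exp(-((gamma - 28.11) / 0.1)**2)   # toy mixing density flux into V(gamma)
G = -2e8 * np.clip((28.3 - gamma) / 0.8, 0, 1)   # toy geothermal flux (pure lightening)

T = np.gradient(F + G, gamma)                    # finite-difference form of eq. (1)
print(gamma[np.argmax(T)])                       # density of peak upwelling in this toy profile
```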
In Fig. 5, shading shows the added contribution of a uniform mixing rate of 10 −5 m 2 s −1 , a typical level of mixing away from the direct influence of boundaries 14 , 30 , 31 , 32 , 33 . Figure 5: Density fluxes and dianeutral transports within 32° S–48° N. Density profiles of the total density flux F ( a ), the density flux averaged over density surfaces ( b ) and total dianeutral transports T ( c ) under scenarios S1 (orange) and S2 (blue). Shading denotes the added contribution to fluxes and transports of a uniform mixing rate of 10 −5 m 2 s −1 . Scenario S1 corresponds to a diabatic bottom boundary overlain by an adiabatic ocean interior: convergence of local density fluxes occurs only in the unstratified bottom boundary layer, where the density flux weakens to satisfy a no-flux boundary condition—that is, the constraint that turbulent mixing cannot flux density across the seafloor 19 , 21 , 34 , 35 . The implied density transformation and dianeutral circulation are thus qualitatively identical to those that would be generated by a uniform geothermal density sink along the ocean bottom. The scenario results in diabatic upwelling peaking at γ = 28.11 kg m −3 and mostly confined to below the 28 kg m −3 density surface, matching the incrop area distribution ( Figs 2c , 5c and 6 ). Indeed, since a uniform density flux homogeneously lightens waters covering the ocean floor, scenario S1 implies that density layers upwell in proportion to their access to the seafloor ( Extended Data Fig. 5 ). Consequently, diabatic upwelling is restricted to the depth and density range of sizeable incrop areas, and the boundary between northward and southward meridional transport is the density surface of maximum incrop area ( Fig. 6 ). In particular, most of the southern-origin dense waters must then flow back to the Antarctic Circumpolar Current at depths greater than 2.5 km. Figure 6: Schematic abyssal overturning circulation north of 32° S. The average depth of density surfaces is shown as a function of bottom neutral density and cumulative seafloor area. The surface of maximum incrop area ( γ = 28.11 kg m −3 ), corresponding to meridional flow reversal, and the surface marking the approximate transition between diabatic and adiabatic flow regimes ( γ = 28 kg m −3 ), are contoured in white. Straight and wiggly red arrows depict mixing-driven and geothermal buoyancy (− γ ) fluxes, respectively. To simplify the illustration, mixing-driven fluxes are taken to be uniform in the vertical direction, as in scenario S1. Density loss and diabatic upwelling are confined to near-bottom waters, which climb across density surfaces and along topography at a rate commensurate with the incrop area. Through mass conservation, this cross-density, along-bottom circulation maintains an along-density, interior circulation which supplies (or returns) dense waters from (or to) the Antarctic Circumpolar Current. Note that in the Atlantic these along-density flows may have an additional supply component from the subpolar North Atlantic. In the Indo-Pacific, a weakly ventilated shadow zone lies above the abyssal overturning circulation in the approximate 1–2.5 km depth range. Because geothermal heat fluxes exhibit relatively weak spatial variations away from ridge crests and contribute only bottom density losses, their impact on circulation is well described by the uniform-flux idealization of scenario S1 21 , 23 .
Because geothermal heat fluxes exhibit relatively weak spatial variations away from ridge crests and contribute only bottom density losses, their impact on circulation is well described by the uniform-flux idealization of scenario S1 21 , 23 . In contrast, deep ocean mixing is observed to be dominated by patchy, topographically enhanced turbulence 15 , 16 , 17 , 19 , 32 , 33 , 36 , 37 , 38 . Such turbulence is generally associated with a bottom-enhanced local density flux, whereby lightening of the densest waters occurs at the expense of densification immediately above. Scenario S2 explores the impact of an idealized, geographically homogeneous bottom-intensification of density fluxes. Under this scenario, density loss (or gain) generally dominates for density layers that have a larger (or smaller) incrop area than their underlying neighbour: the change in incrop area with height determines the dianeutral transport ( Extended Data Fig. 5 ). Upwelling is consequently found within waters denser than 28.11 kg m −3 , peaking just under this level, whereas density gain and downwelling characterize lighter waters ( Fig. 5c ). Hence, despite their structural differences ( Fig. 5a and b ), the simple scenarios S1 and S2 share two essential features: dianeutral upwelling peaks near the density level of maximum seafloor coverage, and decreases rapidly at lower densities. The complex spatial patterns of deep ocean turbulence could override these features. However, multiple lines of evidence indicate otherwise. First, examination of a range of bottom-intensified mixing scenarios analogous to scenario S2, where the magnitude of local density-flux profiles is not uniform but instead depends on bottom roughness, slope, stratification or internal wave generation rates (Methods), consistently shows upwelling peaking at or below the peak incrop surface and dwindling rapidly above ( Extended Data Fig. 6 ). Second, consideration of local density fluxes that decay above the bottom according to non-exponential and region-dependent profiles alters the distribution of density gain in the interior but preserves the near-bottom location of density loss, leading to a coupling between incrop and upwelling profiles similar to that inherent in scenarios S1 and S2. Third, turbulence remote from boundaries, fed by interactions among internal waves and associated with weakly varying mixing rates 14 , 30 , 31 , 32 , 33 of the order of 10 −5 m 2 s −1 , drives only weak upwelling deeper than 2.5 km ( Fig. 5c , shaded areas). From a depth of 2.5 km to a depth of 1 km, the upwelling induced by a uniform diffusivity of the order of 10 −5 m 2 s −1 remains modest and fairly constant, demanding little net meridional flow. Radiocarbon evidence A separate line of evidence corroborates the leading-order control of dianeutral upwelling by incrop areas: analysis of historical radiocarbon measurements 39 , 40 confirms the tight connection between the density distribution of the seafloor and the overturning structure ( Fig. 3c ; Extended Data Figs 2 , 3 , 4 ). By mapping radiocarbon content (Δ 14 C) along density surfaces (Methods), we find that: (i) the maximum incrop area accurately predicts the transition surface between northward and southward flow identified in each basin’s Δ 14 C distribution; (ii) the height at which the incrop area falls to 10% of its peak approximates the lower boundary of the relatively thick Δ 14 C minimum (age maximum) observed at mid-depth in the Pacific and Indian oceans. Because of its size and connectedness, the Pacific basin shows the clearest signature of seafloor areas on the radiocarbon distribution ( Fig. 3 ).
The strongest vertical Δ 14 C gradient at 32° S occurs at the density of the basin’s peak incrop area, γ = 28.11 kg m −3 , where the inflow of relatively young waters underlies their southward return after a centuries-long journey in the abyssal Pacific. Water mass transformation estimates ( Fig. 5 and Extended Data Fig. 6 ) further indicate that most of this southward return flow takes place below the 28 kg m −3 density surface, the transition level above which seafloor availability becomes scarce ( Figs 2 and 3a and b ). The minimum Δ 14 C centred around 2.3 km ( γ = 27.95 kg m −3 ) must then reflect weak upwelling of bottom waters to that depth. This inference is corroborated by the correspondence between the structure of the Δ 14 C minimum and the depth or density distribution of seafloor areas ( Fig. 3 and Extended Data Figs 2 and 3 ): the oldest density layers appear to be those largely isolated from the ocean bottom and, thereby, from renewal via abyssal upwelling. The Atlantic and Indian oceans host more complex AABW pathways owing to their compartmentalization into many sub-basins. There, the leading role of inter-basin passages in transforming the northward-flowing AABW is clearly demonstrated by the bottom density field ( Extended Data Fig. 1 ). Indeed, substantial density drops from sub-basin to sub-basin largely reflect concentrated mixing within connecting AABW throughflows 16 , 29 , 36 . Such concentrated density transformation suggests that access to constrictive passages could be as strong a determinant of diabatic upwelling rates as is access to large seafloor areas. However, radiocarbon distributions show that abyssal circulation chokepoints do not host the peak dianeutral transports that define the meridional flow reversal (Methods and Extended Data Figs 3 and 4 ). Instead, deep straits and sills appear to reinforce the influence of incrop areas on the overall overturning structure: by contributing prominently to the homogenization of AABW, they favour the concentration of incrop areas to a narrow density range and the pivotal upwelling of end-basin waters 36 (Methods). Hence, water mass transformation scenarios and modern radiocarbon distributions together show that diabatic upwelling peaks near the density layer that has the largest seafloor coverage and decreases rapidly at lower densities. The robustness of this structure is due to two principal facts: (i) boundary mixing and geothermal heating restrict density loss to the bottom boundary; and (ii) density layers have strongly unequal access to the seafloor. Fact (ii) is first a consequence of the relative abundance of seafloor at depths greater than 2.5 km ( Figs 2 and 3a and b ; Extended Data Figs 2 , 3 , 4 ). It is further reinforced by fact (i), which underpins the progressive focusing of AABW into its lighter classes (which monopolize the floor of northern basins 21 , 36 ), and which favours the northward spreading of abyssal density surfaces 8 ( Fig. 1b ). Boundary-dominated transformation and the depth distribution of seafloor thus collude to shape an incrop area profile which peaks deeper than 4 km and decays to small values near 2.5 km depth. This collusion accentuates the segregation of water masses situated below and above the 2.5 km geopotential, and suggests that the main patterns of incrop area and upwelling diagnosed from the modern hydrography must hold across a broad range of ocean states. 
The implications of these patterns for the functioning of the meridional overturning north of 32° S can be summarized as follows ( Figs 2 and 6 ): (i) strong upwelling (or downwelling) rates define a diabatic deep ocean regime at depths greater than 2.5 km, where most of the seabed lies; (ii) an overlying adiabatic regime, within 1–2.5 km depth, hosts non-negligible mixing but comparatively limited dianeutral transports; (iii) northward-flowing dense waters reside below the density layer with greatest access to the seabed, thus largely below 4 km depth, where seafloor availability is maximal; (iv) the majority of dense southern-origin waters returns southward within the diabatic regime, below 2.5 km depth. The first two conclusions rely essentially on knowledge of ocean bathymetry and could potentially be recast for a different depth distribution of seafloor. The latter two further assume a southern origin of the densest global-scale water mass. Conclusion (iv) implies that the circulation of dense Antarctic-origin waters in the Pacific and Indian basins is more compressed in the vertical than has been inferred in most inverse box models 5 , 9 , 10 , despite limited consistency among these inverse solutions (see discussions of Pacific and Indian abyssal pathways in the Methods). The depth profile of the southward return flow at 32° S implied by idealized water mass transformation scenarios is also variable ( Fig. 5c and Extended Data Fig. 6 ), reflecting the lack of constraints on the global mixing distribution. These uncertainties and discrepancies call for further work to refine and reconcile different estimates of the abyssal overturning structure. Examining the silicic acid (Si(OH) 4 ) distributions 41 of the Pacific and Indian oceans, we find additional support for a relatively deep southward return flow ( Extended Data Fig. 7 ). In the northern part of the two basins, where deep Si(OH) 4 production is thought to be more intense and largely placed at the sediment–water interface 42 , 43 , 44 , 45 , vertical maxima of Si(OH) 4 lie immediately above the depth range of large seafloor areas 46 . This concurs with relatively strong circulation and short residence times limiting Si(OH) 4 accumulation within this depth range. Further south, the maxima shift towards larger densities falling within the diabatic regime, consistent with southward mean flow promoting the export of Si(OH) 4 there. This latter feature is not visible in the eastern Indian Ocean, but the radiocarbon structure of this sub-basin clearly substantiates a geometric confinement of its abyssal overturning to depths greater than 3.7 km ( Extended Data Fig. 3 and Methods). The seafloor area distribution, generally absent from conceptual models or quantitative theories of the overturning, thus exerts major constraints on the volumes, pathways and interplay of dense water masses. In particular, within basins receiving only a southern influx of ventilated dense waters, the concentration of seafloor at abyssal depths (greater than 2.5 km) implies a partial disconnect between a relatively well ventilated abyss and more stagnant mid-depth waters. Accordingly, we propose that present-day Pacific and Indian waters straddling the mid-depth radiocarbon minimum do not embody returning AABW but rather lie in a shadow zone of the overturning ( Fig. 3d ), characterized by its isolation from surface and bottom boundary influences, and traversed by relatively weak mean meridional flow. 
The larger volume and longer residence time of shadow zone waters relative to the underlying diabatic abyss make them more likely to hold large carbon and nutrient reservoirs hidden from the atmosphere. However, an active northern surface source of deep water, as occurs in today’s Atlantic and could have occurred in the Pacific 47 , 48 , may disrupt the mid-depth stores, and reduce the volume and influence of the dense southern-origin waters. Methods Dianeutral transports The water mass transformation estimates presented in Fig. 5 and Extended Data Fig. 6 use the global neutral density field of the WOCE hydrographic climatology 41 . In scenarios S1 and S2 and the scenarios of Extended Data Fig. 6 , the specified three-dimensional map of neutral density fluxes allows us to calculate the total density flux F through each density surface A ( γ ). The density derivative of F then yields the dianeutral transport T according to equation (1). The contribution of a fixed diffusivity illustrated by the shaded areas in Fig. 5 is obtained through the same procedure. Uncertainty in the obtained transport profiles reflects mostly the incomplete spatial coverage of the hydrographic observations that underlie the WOCE climatology and the limited horizontal and vertical resolution of the climatology. In spite of these limitations, substantial errors in the basin-scale structure of the density fluxes and dianeutral transports discussed here are not expected 21 . Sources of near-bottom turbulence, such as the breaking of internal waves 50 or the generation of submesoscale instabilities 51 , 52 , depend on local flow, topography and stratification conditions. In particular, topographic roughness, topography scales and bottom stratification enter scalings for the rates of bottom internal wave generation 53 , 54 . In addition, the presence of steep slopes or small-scale topographic features may catalyse near-bottom turbulence 51 , 52 , 55 , 56 , 57 . To explore the influence of these parameters, in Extended Data Fig. 6 we examine variations of the bottom-intensified mixing scenario S2 by setting the magnitude of local density-flux profiles proportional to the large-scale topographic slope squared; the large-scale topographic roughness; the small-scale topographic roughness 58 ; the horizontal wavenumber of small-scale topography 58 ; the bottom buoyancy frequency; the squared bottom buoyancy frequency; the internal tide generation rate 59 , 60 ; and the lee wave generation rate 61 . Roughness is defined as the variance of bathymetric height. Large-scale slopes and roughness are obtained by fitting planes over half-degree grid squares to the 1/30°-resolution ETOPO2v2 bathymetry product 49 . Small-scale abyssal hills are not resolved by this product. To account for these we use the small-scale roughness and wavenumber parameters estimated by ref. 58 . In all eight cases, the average magnitude of the fluxes is adjusted to obtain a maximum upwelling rate of 25 × 10 6 m 3 s −1 . Only the structure of transports thus warrants interpretation. Extended Data Fig. 6 shows that, irrespective of the scenario, dianeutral upwelling is maximum at or below γ = 28.11 kg m −3 , is weak or negative at γ = 28 kg m −3 , and remains modest across the overlying regime of small incrop areas. Radiocarbon maps Radiocarbon content (Δ 14 C, expressed in per mil) corresponds to the deviation of the measured 14 C/ 12 C ratio relative to an atmospheric reference ratio, correcting for isotopic fractionation 62 . 
At leading order, the evolution of Δ 14 C in the deep ocean is governed by advective–diffusive processes and radioactive decay of about −10‰ every 83 years 63 , 64 . Mixing affects the deep Δ 14 C distribution through both its impact on circulation and the direct diffusive redistribution of radiocarbon 64 , 65 . The latter effect dominates in particular when substantial divergence of diffusive 14 C fluxes coexists with weak divergence of diffusive density fluxes: mixing along density surfaces (an important process controlling the Δ 14 C distribution 66 ) or depth-independent density fluxes are cases in point. We consider all Δ 14 C values assembled in the GLODAPv2 data product 39 , 40 and pair these with γ values derived from corresponding hydrographic casts. The 2% of Δ 14 C measurements (891 out of 36,541 measurements) for which concurrent hydrographic parameters are not available are assigned the γ value of the corresponding position in the WOCE climatology. Next, each Δ 14 C cast is vertically interpolated onto a fixed series of 140 γ surfaces using a piecewise cubic Hermite interpolating polynomial. We then map Δ 14 C along each γ surface independently. Grid point values are obtained as a weighted average of neighbouring measurements, the selection and weighting of which rely on the distance look-up table described at . Specifically, weights are defined 67 , 68 by the tricube function w ( r ) = (1 − ( r /1,200 km)³)³, with r the shortest path from the mapped grid point to surrounding data points at the grid point depth, bypassing topographic obstacles. Only data points whose distance r to the mapped grid point is less than 1,200 km are retained in each weighted average. The resulting global three-dimensional (longitude, latitude and γ ) radiocarbon field is then plotted as a zonal average in Fig. 3 and Extended Data Figs 2 , 3 , 4 using the pseudo-depth reprojection described in the caption of Fig. 1 . In Extended Data Fig. 8 , we show an example map of Δ 14 C at γ = 28.045 kg m −3 , together with the underlying observations. Uncertainty in the constructed maps originates from the individual Δ 14 C measurement error, estimated as ±4‰ (ref. 69 ); errors in concurrent neutral density values; the limited spatio-temporal coverage of Δ 14 C measurements; and the limitations of the mapping procedure. Given the sparse observational coverage, uncertainties relate primarily to the sampling density, which is lowest in the southeastern Pacific, the western Indian and the eastern Atlantic oceans ( Extended Data Fig. 8b ). The search radius of 1,200 km allows the vast majority of the ocean to be mapped, but smoothes out smaller-scale structures that may be present in the data. Note also that the presented radiocarbon maps do not correct for bomb-produced 14 C, nor for any source of temporal variability of Δ 14 C. Because we focus on the northern abyssal ocean, whose ventilation timescales typically exceed centuries, the influence of bomb 14 C should not noticeably affect the qualitative structure discussed here 63 . In particular, the surfaces of meridional flow reversal identified in radiocarbon distributions are corroborated by other hydrographic fields such as stratification, oxygen or silicic acid (see Extended Data Fig. 7 ). In contrast, these features are blurred in the bomb-corrected GLODAP climatology product 70 , whose accuracy may be reduced in the abyss 66 . The neutrally averaged radiocarbon climatology constructed for the present study is available for download at .
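The along-surface mapping step can be summarized in a few lines. The sketch below applies the tricube weighting and the 1,200 km search radius described above to a set of hypothetical neighbouring observations; in the actual procedure, the distances r come from the topography-aware look-up table rather than being supplied by hand.

```python
import numpy as np

def map_delta14c(r_km, d14c, radius_km=1200.0):
    """Map one grid point on a density surface as the tricube-weighted
    average of neighbouring Delta14C observations (permil).

    r_km : along-surface distances (km) from the grid point to each
           observation, assumed precomputed so as to bypass topography
    d14c : Delta14C values observed on the same gamma surface
    """
    r_km, d14c = np.asarray(r_km, float), np.asarray(d14c, float)
    keep = r_km < radius_km                  # only data within the search radius
    if not keep.any():
        return np.nan                        # grid point left unmapped
    w = (1.0 - (r_km[keep] / radius_km) ** 3) ** 3   # tricube weights
    return np.sum(w * d14c[keep]) / np.sum(w)

# Hypothetical neighbours of one grid point on gamma = 28.045 kg m-3:
# the 1,500 km point is discarded and nearby points dominate the average
print(map_delta14c(r_km=[150, 600, 1100, 1500], d14c=[-160, -175, -190, -120]))
```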
Pacific ocean abyssal pathways The density profile of summed incrop areas over the Pacific exhibits two different peaks ( Extended Data Fig. 9a and d ): a dominant peak at γ = 28.11 kg m −3 and a secondary peak at γ = 28.03 kg m −3 . The latter peak originates from the sub-basins situated east of the East Pacific Rise ( Extended Data Fig. 1 ). These sub-basins receive an inflow of Circumpolar Deep Water 71 , 72 , which is older than the AABW entering the southwestern Pacific but younger than southward-flowing deep waters ( Extended Data Fig. 2c and f ). Given that abyssal upwelling in the Pacific is inferred to be mostly confined to below the crest of the East Pacific Rise, we posit that all or part of the Circumpolar Deep Water inflow feeds a secondary overturning cell restricted to the southeastern Pacific and the 28–28.06 kg m −3 density range ( Extended Data Fig. 2 ). North of 32° S this overturning cell is presumed to be separate from the main Pacific abyssal cell fed by AABW. Our analysis suggests that the bulk of the Pacific AABW waters returns to 32° S at depths of more than 2.5 km. This result accords with early analyses of transport across zonal hydrographic sections of the North Pacific 73 , 74 , 75 and the South Pacific 71 , 76 but contrasts with more recent inverse estimates of overturning transports 10 , 72 , 77 , 78 , 79 , 80 , 81 at 32° S. Although the latter estimates all imply a shallower return flow at this latitude, they exhibit substantial spread, most place the meridional flow reversal well above the 28.11 kg m −3 density surface, and many conflict with a high-resolution inverse study of the eastern South Pacific circulation 82 . The uncertainties carried by the inverse solutions at this location thus appear too large to permit validation or invalidation of the present results. Further work is required to reconcile the regime transitions proposed here with large-scale tracer budgets and to narrow down ranges for the strength and structure of the deep southward return flow. Indian Ocean abyssal pathways Extended Data Fig. 3 documents the relationships between the distributions of seafloor areas, incrop areas and radiocarbon content in the Indian basin. The latter hosts multiple sub-basins with different ventilation histories. For improved interpretation, sub-basins with overlapping latitude ranges are therefore shown in separate panels. Two separate AABW routes ventilate the abyssal Indian ocean 83 ( Extended Data Fig. 1 ): an east route through the Perth, Wharton and Cocos basins, with some connection to the Central Indian basin ( Extended Data Fig. 3a–f ); and a west route through the Madagascar, Mascarene and Somali basins, and into the Arabian basin ( Extended Data Fig. 3g–l ). Inflow of young, dense AABW in the eastern Indian Ocean is clearly seen in its radiocarbon distribution. Yet an unambiguous 14 C signature of the level of meridional flow reversal at 32° S is not distinguishable, in part owing to the small ventilated volume of the sub-basin. Indeed, a distinct transition towards much older waters near 3.7 km depth, approximately coinciding with the weak incrop level (the inferred diabatic–adiabatic transition), suggests that the seafloor distribution constrains the upwelling and southward return of young Antarctic-sourced waters to depths greater than 3.7 km. 
Such a compressed AABW overturning in the eastern Indian ocean is consistent with: (i) weak AABW throughflow to the Central Indian basin, whose abyssal radiocarbon activities are much lower than those of eastern Indian bottom waters; (ii) the steep topographic barriers bounding the sub-basin, which limit dianeutral upwelling; (iii) the location of the silicic acid maximum within the northern half of the sub-basin, above that overturning ( Extended Data Fig. 7f ). Additional deep-water overturning in the 28.1–28.17 kg m −3 density range driven by water mass transformation in the Central Indian basin is expected to be weak owing to the low mixing rates 33 , 84 and low radiocarbon concentrations observed north of 30° S in this density range ( Extended Data Fig. 3i and l ). We thus interpret the apparent Si(OH) 4 tongue near 28.12 kg m −3 north of 15° S ( Extended Data Fig. 7f ) as due to local production, diffusion and/or horizontal recirculation, rather than net meridional flow 85 . The seafloor area of the western Indian ocean decreases more gradually with height, owing to the presence of weakly sloping ridges. As a result, large seafloor and incrop areas extend higher up in the water column, to about 2.5 km depth. The predicted northward–southward and diabatic–adiabatic transitions lie near 4 km depth (28.13 kg m −3 ) and 2.7 km depth (28.04 kg m −3 ), respectively. These compare well with the observed Δ 14 C and Si(OH) 4 distributions ( Extended Data Figs 3c and 7c ). Nonetheless, sampling limitations ( Extended Data Fig. 8b ), lateral redistribution by mixing along density surfaces, and the inflow of relatively young waters of North Atlantic origin into deep layers of the basin 5 hinder clearer identification of the overturning structure in the radiocarbon data. The Arabian basin has no water denser than 28.13 kg m −3 , so that its contribution to the overturning appears to be restricted to the transformation of lighter deep waters. The inferred structure of the deep Indian ocean overturning contrasts with the results of steady geostrophic box inversions 10 , 77 , 79 , 81 , 86 , 87 , 88 , 89 , which suggest a shallower return of dense southern waters. However, published inverse estimates of the Indian overturning differ widely in structure and strength 84 . The complexity of the flow and topography of the basin probably plays a part in this scatter 80 , 87 , 89 . In particular, the limited number of abyssal density layers considered in the inverse box models prevents resolution of the compressed AABW overturning identified in the eastern Indian ocean: most inversions carry one layer denser than 28.15 kg m −3 in this sub-basin, and predict either net northward 81 or southward 89 flow in the layer at 32° S. Further, the presence of multiple peaks in the total incrop area profile of the Indian ocean ( Extended Data Fig. 9c and f ), related to its topographic partitioning, suggests that the basin’s overturning streamfunction may exhibit several abyssal peaks. The coexistence of several abyssal overturning cells traversing relatively small contrasts in depth, density and other properties could possibly explain the limited consistency of hydrographic inversions and their mismatch with water mass transformation estimates 84 , 90 . Atlantic ocean abyssal pathways AABW enters the Atlantic Ocean west of the Mid-Atlantic Ridge 83 ( Extended Data Figs 1 and 4a–c ). As opposed to the situation in the Pacific Ocean, it carries a low radiocarbon signature relative to the overlying 14 C-rich NADW.
At 32° S, the boundary between northward-flowing AABW and southward-flowing NADW coincides with the strongest vertical Δ 14 C gradient, observed at γ = 28.14 kg m −3 . This density surface is also the peak incrop surface across the western Atlantic, consistent with control by incrop areas of the level of meridional flow reversal. Transport analyses 9 , 10 and the climatological density field further indicate weak southward influx of waters denser than 28.14 kg m −3 at 48° N, substantiating a match between the levels of meridional flow reversal, peak incrop area and maximum dianeutral upwelling. We infer that the pivotal upwelling across the 28.14 kg m −3 surface occurs mainly between 30° N and 45° N near 4.5 km depth, where the weak abyssal stratification and large seafloor availability combine to maximize incrops ( Extended Data Fig. 4b ). Additionally, weak seafloor and incrop areas shallower than 3 km imply that the dianeutral upwelling of bottom waters and their southward return are concentrated at depths greater than 3 km. The abyssal eastern Atlantic ( Extended Data Fig. 4d–f ) is primarily fed from its western counterpart through the Chain (1° S), Romanche (1° N) and Vema (11° N) fracture zones, with additional NADW inflow from the northern end of the basin 83 , 91 ( Extended Data Fig. 1 ). The Δ 14 C levels at the outflow of the three fracture zones approach −115‰, which is closer to western Atlantic NADW levels (about −100‰) than to AABW levels (about −150‰). This suggests that the outflows are dominated by NADW, not AABW, in accord with local observational surveys 28 , 92 , 93 , 94 , 95 and silicic acid distributions ( Extended Data Fig. 7g, h ). Consequently, the eastern Atlantic contributes primarily to the transformation and upwelling of NADW. Because the estimated AABW throughflow to the eastern basins 94 , 95 is only a fraction of the 32° S Atlantic AABW input 27 , 96 , we conclude that dianeutral upwelling within the western Atlantic controls the present-day boundary between northward-flowing AABW and southward-flowing NADW. The seafloor and tracer distributions of the eastern Atlantic nonetheless indicate that, there as in the western Atlantic, dianeutral upwelling and AABW influence are most important below the 3 km geopotential. Radiocarbon evidence from the Atlantic and Indian basins thus bears out the relationship between the abyssal overturning structure and the depth and density distributions of the ocean floor. Major topographic obstacles and constrictions at sub-basin boundaries either (i) catalyse the transformation of AABW or (ii) restrict its access to certain sub-basins, but do not override this relationship. In situation (i), flow constrictions contribute to the creation of a more homogeneous bottom water mass, focusing incrop areas into a narrow density range and favouring rapid upwelling at the peak incrop layer downstream. Such inter-basin passages lie below the boundary between northward AABW transport and southward deep water transport. In situation (ii), topographic barriers limit the role of more isolated sub-basins in the transformation and upwelling of lighter deep waters. The additional deep boundary transformation occurring en route to and within these sub-basins (namely the eastern Atlantic, the Arabian basin and the Central Indian basin) concerns mostly waters lighter than AABW and denser than 28.05 kg m −3 ( Extended Data Fig. 9 ). Code availability Code for the generation and usage of the distance look-up table is available at . 
Analysis scripts are available from the corresponding author on request. Data availability The global bathymetry product can be downloaded at . The WOCE hydrographic climatology is available at . GLODAPv2 radiocarbon data can be retrieved from . The constructed radiocarbon climatology is made available by the authors at .
New research from an international team has revealed why the oldest water in the ocean, found in the North Pacific, has remained trapped in a shadow zone around 2km below the sea surface for over 1000 years. To put it in context, the last time this water encountered the atmosphere, the Goths had just invaded the Western Roman Empire. The research suggests the time the ancient water spent below the surface is a consequence of the shape of the ocean floor and its impact on vertical circulation. "Carbon-14 dating had already told us the most ancient water lay in the deep North Pacific. But until now we had struggled to understand why the very oldest waters huddle around the depth of 2km," said lead author from the University of New South Wales, Dr Casimir de Lavergne. "What we have found is that at around 2km below the surface of the Indian and Pacific Oceans there is a 'shadow zone' with barely any vertical movement that suspends ocean water in an area for centuries." The shadow zone is an area of almost stagnant water sitting between the rising currents caused by the rough topography and geothermal heat sources below 2.5km and the shallower wind-driven currents closer to the surface. Before this research, models of deep ocean circulation did not accurately account for the constraint of the ocean floor on bottom waters. Once the researchers precisely factored it in, they found the bottom water cannot rise above 2.5km below the surface, leaving the region directly above isolated. While the researchers have unlocked one part of the puzzle, their results also have the potential to tell us much more. "When this isolated shadow zone traps millennia-old ocean water it also traps nutrients and carbon, which have a direct impact on the capacity of the ocean to modify climate over centennial time scales," said fellow author from Stockholm University, Dr Fabien Roquet. The article Abyssal ocean overturning shaped by seafloor distribution is published in the scientific journal Nature.
10.1038/nature24472
Earth
New study findings could help improve flood projections
Manuela I. Brunner et al, An extremeness threshold determines the regional response of floods to changes in rainfall extremes, Communications Earth & Environment (2021). DOI: 10.1038/s43247-021-00248-x Journal information: Communications Earth & Environment
http://dx.doi.org/10.1038/s43247-021-00248-x
https://phys.org/news/2021-08-new-study-findings-could-help.html
Abstract Precipitation extremes will increase in a warming climate, but the response of flood magnitudes to heavier precipitation events is less clear. Historically, there is little evidence for systematic increases in flood magnitude despite observed increases in precipitation extremes. Here we investigate how flood magnitudes change in response to warming, using a large initial-condition ensemble of simulations with a single climate model, coupled to a hydrological model. The model chain was applied to historical (1961–2000) and warmer future (2060–2099) climate conditions for 78 watersheds in hydrological Bavaria, a region comprising the headwater catchments of the Inn, Danube and Main River, thus representing an area of pronounced hydrological heterogeneity. For the majority of the catchments, we identify a ‘return interval threshold’ in the relationship between precipitation and flood increases: at return intervals above this threshold, further increases in extreme precipitation frequency and magnitude clearly yield increased flood magnitudes; below the threshold, flood magnitude is modulated by land surface processes. We suggest that this threshold behaviour can reconcile climatological and hydrological perspectives on changing flood risk in a warming climate. Introduction There is clear theoretical, model-based, and empirical evidence that global precipitation extremes, i.e. precipitation exceeding a high threshold, will increase in a warming climate 1 , 2 , 3 , 4 . However, there is considerably more uncertainty regarding the hydrologic response of flooding, and there is not yet clear evidence for widespread increases in flood occurrence either in observations 5 , 6 , 7 , 8 , 9 , 10 or in model simulations 11 , 12 , 13 . While there is still a theoretical expectation that flood events will increase in a warming climate 14 , 15 , 16 , 17 , and while such flood increases have been documented regionally 18 , 19 , the absence of broader observational trends supporting this hypothesis is conspicuous. In the literature on hydrological processes, the lack of such trends is often attributed to changes in non-precipitation flood drivers, such as temperature-driven decreases in snow accumulation and increases in evaporation that yield decreases in soil moisture 9 , 20 , 21 , 22 , 23 . Because of the compounding nature of different flood drivers, establishing a direct link between increases in extreme precipitation and increases in flooding is challenging 24 , 25 , 26 . Indeed, previous studies suggest that the strength of the relationship between precipitation and discharge may depend on a range of factors including catchment size, event magnitude 25 , 27 , and season 28 , though the details of these complex relationships remain largely unknown and are hard to generalize. Further complicating such investigations is the rarity of extreme events with long return intervals and their sparseness in observed precipitation and streamflow records. Several approaches have been proposed to address this data scarcity problem, including: pooling observations across different catchments 29 or seasonal predictive ensemble members 30 , 31 ; tree-ring and historic reconstructions 32 , 33 ; stochastic streamflow generation 34 , 35 ; and ensemble modeling using Single Model Initial-condition Large Ensembles (SMILEs) 36 . To date, however, few studies have combined atmospheric SMILEs with hydrological models to obtain a SMILE of streamflow time series, i.e. a ‘hydro-SMILE’ 37 , 38 , 39 .
The availability of such a hydro-SMILE is crucial in assessing the relationship between future changes in extreme precipitation and flooding – particularly high-end extreme events (i.e., those occurring twice or fewer times per century), which are rare to nonexistent in observed time series. Here, we seek to reconcile the extreme precipitation-flood paradox in a warming climate: is there a precipitation threshold beyond which increasing precipitation extremes directly translate into increasing flood risk? We hypothesize that such a threshold should exist because moderately extreme events may be buffered by decreased soil moisture (due to warming) while very extreme events may quickly lead to soil saturation and subsequently to direct translation of precipitation to runoff. Using a hydro-SMILE approach, we consider precipitation and flood characteristics from historical (1961–2000) and warmer future (2060–2099) climates for 78 catchments in major Bavarian river basins (Main, Danube, and the Inn river with their major tributaries; henceforth Hydrological Bavaria) characterized by a wide variety of hydroclimates, soil types, land uses, and streamflow regimes 39 , 40 . We find that there does indeed exist a catchment-specific extremeness threshold (i.e. return interval threshold) above which precipitation increases clearly yield increased flood magnitudes, and below which flood magnitude is strongly modulated by land surface processes such as soil moisture availability. Ultimately, this finding may help reconcile seemingly conflicting climatological and hydrological perspectives on changing flood risk in a warming climate. Addressing the precipitation-flood paradox is simply not possible using observations alone, as the high-end extreme events of interest are rare to nonexistent in temporally limited observational records. This real-world data limitation effectively precludes statistical analyses of extreme events with return periods exceeding ~50 years. To overcome this problem, we use a hydro-SMILE to obtain a large number of extreme precipitation–streamflow pairs. The hydro-SMILE consists of hydrological simulations obtained by driving a hydrological model with climate simulations from a single model initial-condition large ensemble (SMILE) climate model. The underlying model simulations were originally generated by Willkofer et al. 40 as part of the ClimEx project 41 . The hydro-SMILE simulations consist of daily streamflow (mm d −1 ), snow-water-equivalents (SWE, mm), and soil moisture (%) – all of which were obtained by driving the hydrological model WaSiM-ETH 42 with a 50-member ensemble of high-resolution climate input (spatial: 500 × 500 m 2 , temporal: 3 h) (for further information on the hydro-SMILE see Section “Hydro-SMILE”). While such a large ensemble approach resolves the small or zero sample size problem for very extreme events, new sources of uncertainty also arise. We acknowledge that the hydro-SMILE modeling chain is affected by uncertainties introduced through both the underlying climate and hydrological models. Climate model uncertainties include those relating to precipitation process-representation, downscaling, and bias-correction procedures; hydrological model uncertainties comprise model and parameter uncertainties.
These latter uncertainties may be particularly relevant for the very extreme events under consideration in the present study because model calibration and evaluation rely upon observed events – and (as previously noted) modern observational records simply don’t exist for events of the extreme magnitudes considered here. However, we point out that this particular element of the overall uncertainty is essentially irreducible, and will likely remain so until the length of the observed record increases substantially some decades in the future. As such, the use of a hydro-SMILE is an appropriate method, and arguably the only method available at present, to comprehensively and quantitatively address the extreme precipitation-flood paradox. Results Threshold behavior in flow response to extreme precipitation We first seek to assess whether there exists a return interval threshold beyond which precipitation ( P ) increases consistently translate into streamflow ( Q ) increases, and thereby into increases in flood magnitude. To do so, we use a hydro-SMILE consisting of a 50-member ensemble of 3-hourly precipitation and streamflow time series for Hydrological Bavaria (see Methods section “Study region” and Supplementary Figure 1 ), which we aggregated to daily resolution. The hydro-SMILE was derived for the period 1961–2099 by combining the Canadian Regional Climate Model large ensemble CRCM5-LE 41 with the hydrological model WaSiM-ETH 40 , 42 (see Methods sections “Hydro-SMILE” and “Hydrological model evaluation”). From this ensemble, we extract precipitation–discharge ( P − Q ) pairs for a historical (1961–2000) and a future time period (2060–2099) by first applying a peak-over-threshold approach on precipitation and then identifying corresponding peak discharges (see Methods section “Event identification”). We then empirically compute P and Q magnitudes for different levels of extremeness, i.e. mean events and progressively more extreme events with 10, 20, 50, 100, and 200 year return intervals, by pooling events extracted from the 50 ensemble members. Finally, we derive future relative changes in extreme event magnitudes by comparing magnitudes for a future period (2060–2099) with magnitudes of a historic period (1961–2000) (see Methods section “Changes in event magnitudes and P − Q relationship”). We find that median future changes in daily precipitation and corresponding discharge extremes over all catchments depend on their respective level of extremeness (here defined as their return interval, RI; Fig. 1 ). Precipitation frequency and magnitude are found to increase for all levels of extremeness, with the largest median increases corresponding to the most extreme events, which is consistent with prior findings 43 , 44 , 45 . 50-year RI precipitation events (i.e. events of a magnitude occurring approximately twice per century) occur twice as often (a 100% increase) in the future period vs. the historical period, while the frequency of 200-year RI events increases by up to 200%. Median increases in precipitation magnitudes corresponding to these frequency increases range from less than 10% for 50-year RI events to up to 15% for 200-year events.
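The event-identification and pooling steps just described can be sketched in a few lines. In the sketch below, the precipitation threshold, the 3-day search window for the corresponding discharge peak, and the synthetic gamma-distributed data are illustrative assumptions; the study's actual extraction (Methods section “Event identification”) operates on the WaSiM-ETH hydro-SMILE output.

```python
import numpy as np

def pot_events(p, q, threshold, window=3):
    """Peak-over-threshold on daily precipitation p; each exceedance is
    paired with the peak discharge q within the following `window` days.
    The same window provides a crude declustering of nearby exceedances."""
    events, t = [], 0
    while t < len(p):
        if p[t] > threshold:
            events.append((p[t], q[t:t + window + 1].max()))
            t += window
        else:
            t += 1
    return events

def empirical_return_level(x, ri_years, n_years):
    """Empirical RI-year magnitude from a sample x pooled over n_years of
    simulation (here 50 members x 40 years = 2,000 years per period)."""
    x = np.sort(np.asarray(x))[::-1]               # descending
    k = max(1, int(round(n_years / ri_years)))     # ~k values exceed the level
    return x[k - 1]

# Synthetic data standing in for one catchment's pooled historical period
rng = np.random.default_rng(0)
p_hist = rng.gamma(2.0, 4.0, size=2000 * 365)      # daily precipitation (mm)
q_hist = 0.3 * p_hist + rng.gamma(2.0, 1.0, size=p_hist.size)
pairs = pot_events(p_hist, q_hist, threshold=np.quantile(p_hist, 0.999))
q_peaks = [qp for _, qp in pairs]
q100 = empirical_return_level(q_peaks, ri_years=100, n_years=2000)
# Relative change: repeat for the future period and form q100_fut / q100 - 1
```

Relative changes as in Fig. 1 then follow by repeating the extraction on the future-period ensemble and forming ratios of the pooled return levels.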
Fig. 1: Future changes in precipitation ( P ) and streamflow ( Q ) magnitudes for different levels of extremeness over all 78 catchments. Relative changes [−] in ( a ) event frequency and ( b ) peak magnitude for mean and progressively more extreme events (those with 10, 20, 50, 100, and 200 year empirical return intervals, respectively). Relative changes are computed by comparing event characteristics of a future period (2060–2099) to characteristics of a historical period (1961–2000). The gray bar in ( b ) shows the relative change in event timing (day of the year; negative values indicate earlier extreme event occurrence over all events). Meaning of boxplot elements: central line, median; box limits, upper and lower quartiles; upper whisker, min(max( x ), Q3 + 1.5 × IQR); lower whisker, max(min( x ), Q1 − 1.5 × IQR); no outliers displayed. In notable contrast to precipitation changes, changes in flood frequency and magnitude exhibit a more complex response as a function of flood event extremeness. We find that there exists a return interval threshold below which flood frequency and magnitude decrease, and above which they increase. The mean location of this threshold across all catchments lies between event RIs of 20–50 years for both frequency and magnitude (Fig. 1 ). However, the exact location of this threshold is catchment-dependent (Fig. 2 ). Some catchments already show increases in magnitude/frequency at very low thresholds (<10 years, lightly colored catchments), while in other catchments a threshold only emerges at very long return intervals (100 or 200 years, darkly colored catchments). A few catchments (20%) don’t show any threshold behavior at all, as they either exhibit uniformly increasing or decreasing discharges independent of the return interval. However, even in catchments without a distinct threshold, the discharge response becomes increasingly positive for increasing event magnitudes. Fig. 2: Catchment-specific return interval thresholds above which precipitation increases result in discharge increases. Relative changes (1 corresponds to 100% increase) in ( a ) event frequency and ( b ) peak magnitude for mean and progressively more extreme events (those with 10, 20, 50, 100, and 200 year empirical return intervals, respectively) for each of the 78 catchments (1 line = 1 catchment). Dashed lines denote catchments without a distinct threshold. Relative changes are computed by comparing event characteristics of a future period (2060–2099) to characteristics of a historical period (1961–2000). The approximate location of the return interval threshold is indicated using different line colors with darker colors representing higher return interval thresholds. This finding of a catchment-specific return interval threshold in a great majority of instances suggests that the extreme streamflow response in a warming climate changes sign, from negative to positive, when comparing more ‘common’ flood events (i.e. those occurring 5 or more times per century) to more ‘rare’ flood events (i.e. those occurring two or fewer times per century). This finding has major implications for the interpretation of time series of observed streamflow, as the historical record is often too short to robustly characterize changes in high-magnitude events occurring only several times per century, and any such threshold behavior might go undetected as a result. Still, the results corroborate findings by earlier studies suggesting that historical changes in flooding do, to some degree, depend on event extremeness 25 , 27 .
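A simple way to operationalize the catchment-specific threshold is to scan each catchment's change profile for the smallest return interval from which relative changes stay positive, flagging catchments with uniformly positive or negative changes as having no distinct threshold. The return intervals and change values below are hypothetical.

```python
import numpy as np

def return_interval_threshold(ris, rel_changes):
    """Smallest return interval from which relative changes stay positive;
    None for uniformly increasing or decreasing catchments (no threshold)."""
    positive = np.asarray(rel_changes) > 0
    if positive.all() or not positive.any():
        return None
    for i, ri in enumerate(ris):
        if positive[i:].all():
            return ri
    return None   # changes never stay positive up to the longest RI examined

# Hypothetical catchment: decreases for common events, increases for rare ones
ris = [10, 20, 50, 100, 200]
changes = [-0.08, -0.03, 0.04, 0.11, 0.19]
print(return_interval_threshold(ris, changes))   # -> 50
```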
Next, we assess which meteorological factors and catchment characteristics influence the location of the overall flood response threshold along the extremeness spectrum when considering median changes in extremes over all catchments. For this assessment, we compare historical and future precipitation and discharge extremes for (a) small (<1000 km 2 ) and large (>1000 km 2 ) catchments, (b) low-elevation (<1000 m.a.s.l.) and high-elevation (>1000 m.a.s.l.) catchments, (c) winter (Oct–Mar) and summer (April–Sept) events, (d) snow-influenced (>10 mm stored SWE) and rainfall-driven events (<10 mm stored SWE), and (e) events extracted using different precipitation temporal aggregation levels (1-day, 3-day, and 5-day accumulated precipitation) (see Methods section “Changes in event magnitudes and P − Q relationship”). Our results show that the threshold above which precipitation increases translate into increases in flood frequency and magnitude is strongly modulated by elevation, season, and event type (Figs. 3 , 4 ), but does not meaningfully depend upon the precipitation temporal aggregation level (Supplementary Figure 2 ) or upon catchment size (Supplementary Figure 3 ). This result may change if studying a dataset with a wider range of catchment sizes. However, when studying larger catchments, interactions of flood waves from different tributaries will have to be considered. The return interval threshold does not exist at all or occurs at a much lower extremeness level in high-elevation catchments (<10 years RI) versus low-elevation catchments (~50 years RI). In other words, precipitation frequency and magnitude increases in high-elevation catchments are more directly translated into flood frequency and magnitude increases than in low-elevation catchments for any given event extremeness level (Figs. 3c , 4c ). In addition to elevation, this threshold also depends on the season. In high-elevation catchments, discharge frequency and magnitude increases are stronger in winter than in summer. In contrast, flood frequency and magnitude mostly decrease in low-elevation catchments in winter while they increase in summer for high-magnitude events (Figs. 3b , 4b ). A substantial portion of this elevational separation in flood response may be explained by differences in extreme precipitation event type, i.e. whether an event is snow-influenced or rainfall-driven (Figs. 3d–f , 4d–f ). In low-elevation catchments, flood frequency and magnitude decrease for snow-influenced events, caused by a decrease in extreme precipitation during such events, while they increase for very extreme rainfall-driven events (return intervals >50 years) (Figs. 3e , 4e ). In contrast, high-elevation catchments show flood frequency and magnitude increases for both snow-influenced and moderately extreme rainfall-driven events (Figs. 3f , 4f ). This behavior would be consistent with a simultaneous decrease in mean snowpack accumulation and the number of rain-on-snow events 39 , 46 , 47 , 48 , 49 , 50 , which in some cases have lower peaks than solely rainfall-driven events 23 . Fig. 3: Factors influencing future frequency changes in extreme precipitation and discharge magnitudes for different levels of extremeness. Median relative change [−] in P and Q frequency per season ( a – c ) and event type ( d – f ) across all, low-elevation, and high-elevation catchments for mean and progressively more extreme events (those with 10, 20, 50, 100, and 200 year return intervals, respectively).
Relative changes are computed by comparing event characteristics of a future period (2060–2099) to characteristics of a historical period (1961–2000). Fig. 4: Factors influencing future magnitude changes in extreme precipitation and discharge for different levels of extremeness. Median relative change [−] in P and Q magnitude per season ( a – c ) and event type ( d – f ) across all, low-elevation, and high-elevation catchments for mean and progressively more extreme events (those with 10, 20, 50, 100, and 200 year return intervals, respectively). Same as Fig. 3 , but here for future magnitude changes. Relative changes are computed by comparing event characteristics of a future period (2060–2099) to characteristics of a historical period (1961–2000). Flood-precipitation dependence strengthens In addition to assessing changes in precipitation and flood magnitude, we consider the (non-)stationarity of the relationship between the two variables over time in a warming climate. We compare different measures of dependence including correlation and extremal (i.e. tail) dependence 51 for progressively more extreme events for the historical and future period (see Methods section “Changes in event magnitudes and P − Q relationship”). Similar to changes in flood frequency and magnitude, we find that changes in the strength of the P − Q relationship over all catchments are generally positive above a certain return interval threshold and depend on event magnitude, season, and in particular elevation (Fig. 5 ). The median P − Q relationship changes over all 78 catchments are generally stronger in high- versus low-elevation catchments, and are also stronger in winter than in summer. In low-elevation catchments, the relationship weakens for moderately extreme events and intensifies only for very extreme events, particularly in summer. In high-elevation catchments, the relationship intensifies for both moderate and severe extremes. In these catchments, however, the strengthening of the relationship in winter decreases as events become more extreme, while it intensifies more strongly for the more extreme events in summer. These findings suggest that influences on the threshold above which the P − Q relationship strengthens are complex, and likely vary widely across hydroclimates as suggested by variations by season and event type. They are also suggestive of a potentially important role for antecedent land surface conditions in modulating the underlying relationship, a topic we explore further in the next section. Fig. 5: Factors influencing future changes in the P − Q relationship for different levels of extremeness. Median relative change [−] in P − Q dependence (areal precipitation sum and peak discharge) per season across ( a ) all, ( b ) low-elevation, and ( c ) high-elevation catchments for correlation and tail dependence for progressively more extreme events (those with 10, 20, 50, 100, and 200 year return intervals, respectively). Relative changes are computed by comparing event characteristics of a future period (2060–2099) to characteristics of a historical period (1961–2000).
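Both dependence measures can be estimated directly from the paired event magnitudes. Below is a minimal sketch using rank correlation and a simple empirical upper-tail coefficient evaluated at an assumed 0.95 quantile; the quantile level and the synthetic data are illustrative choices, not those of the study.

```python
import numpy as np
from scipy.stats import spearmanr

def upper_tail_dependence(p, q, u=0.95):
    """Empirical upper tail dependence: probability that the discharge
    peak exceeds its u-quantile given that event precipitation does."""
    p, q = np.asarray(p), np.asarray(q)
    return np.mean(q[p > np.quantile(p, u)] > np.quantile(q, u))

# Synthetic paired event magnitudes standing in for one period's P-Q pairs
rng = np.random.default_rng(1)
p = rng.gamma(2.0, 10.0, size=5000)
q = 0.5 * p + rng.gamma(2.0, 3.0, size=5000)

corr = spearmanr(p, q).correlation
chi = upper_tail_dependence(p, q)
# Changes in dependence: compute both measures for the historical and the
# future event sets and form future/historical - 1, as summarized in Fig. 5
print(f"rank correlation {corr:.2f}, tail dependence {chi:.2f}")
```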
Role of antecedent conditions in flood response We also assess the extent to which land surface and hydro-meteorological drivers beyond precipitation govern flood magnitudes at different levels of extremeness. For this assessment, we construct a multiple linear regression model that predicts flood magnitude (mean and 100-year RI) using a set of predictors: mean event precipitation, mean event temperature, mean event SWE, and mean event soil moisture anomalies, which are only weakly collinear according to the variance inflation factor (VIF does not exceed 10 for any pair and only exceeds 4 for very few pairs; see Methods section “Importance of hydro-meteorological drivers”; a minimal version of this regression is sketched below). We consider the sign and magnitude of the associated regression coefficients, and their change between the two time periods of interest (historical: 1961–2000, future: 2060–2099). The regression analysis shows that flood magnitude is driven by different meteorological conditions and land surface processes whose importance varies widely by the level of extremeness, elevation, and season (Fig. 6 upper panel). For moderate and severe extremes at both low and high elevations, precipitation is positively related to discharge magnitude (i.e. for sufficiently extreme events, precipitation increases almost always lead to discharge increases). In contrast, the role of all the other drivers, particularly that of temperature, strongly depends on the level of extremeness, elevation, and season, and is not statistically significant in all cases. Fig. 6: Importance of flood drivers in the past and their future changes. Importance of precipitation ( P ), temperature ( T ), snow-water-equivalent (SWE), and soil moisture (SM) as drivers of historical ( a ) moderate floods (median over all events identified in a catchment) and ( b ) extreme floods (100-yearly flood). Future changes in driver importance for ( c ) moderate floods and ( d ) extreme floods. All panels are divided into low- vs. high-elevation catchments and distinguish between summer and winter events. Turquoise and pink colors ( a , b ) indicate positive and negative correlation coefficients, respectively (coefficients not statistically significant at p < 0.05 level are hatched), and green and red colors ( c , d ) indicate increases and decreases in driver importance, respectively. In low-elevation catchments, temperature increases are associated with discharge decreases, particularly for moderate extremes (negative regression coefficients) (Fig. 6a ). In summer, higher temperatures mean higher evapotranspiration and therefore lower soil moisture, which means higher soil water storage capacity and therefore less direct runoff resulting from a given amount of precipitation. In winter, higher temperatures are associated with less snow accumulation and therefore fewer rain-on-snow events 46 , 47 , 49 , 50 , which can lead to smaller flood peaks because solely rainfall-driven events may not be as severe as rain-on-snow events 23 . While these temperature effects are strong for moderate floods, temperature loses importance moving toward more extreme events. This effect is particularly pronounced in summer, where the negative effect of temperature weakens while the positive relation between event magnitude and precipitation intensifies. In winter, temperature effects remain important, though to a smaller degree (Fig. 6b ). In low-elevation catchments during winter, soil moisture and snow accumulation are indeed important drivers of flood magnitude. Increases in soil moisture lead to increases in flood magnitudes, as precipitation can more directly be converted into runoff. In contrast, more snow accumulation is related to smaller floods because water is temporarily stored in the snowpack, and does not form runoff until melting at some later point.
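A minimal version of the driver regression, including the collinearity check, might look as follows. The synthetic predictor matrix, coefficient values and sample size are stand-ins; the study's actual predictors are the event-mean precipitation, temperature, SWE and soil-moisture anomalies described above.

```python
import numpy as np

def vif(X):
    """Variance inflation factor per predictor: VIF_j = 1 / (1 - R2_j),
    from regressing column j of X on all remaining columns."""
    out = []
    for j in range(X.shape[1]):
        A = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        r2 = 1.0 - ((X[:, j] - A @ coef) ** 2).sum() / \
                   ((X[:, j] - X[:, j].mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

# Synthetic standardized predictors: columns stand in for event-mean
# precipitation, temperature, SWE and soil-moisture anomaly
rng = np.random.default_rng(2)
X = rng.normal(size=(400, 4))
y = 0.8 * X[:, 0] - 0.3 * X[:, 1] - 0.2 * X[:, 2] + 0.4 * X[:, 3] \
    + rng.normal(scale=0.3, size=400)            # synthetic flood magnitude

beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(400), X]), y, rcond=None)
print("VIF:", vif(X).round(2))                    # all near 1: weak collinearity
print("coefficients (P, T, SWE, SM):", beta[1:].round(2))
```

Comparing the fitted coefficients between the historical and the future event sets then yields the changes in driver importance summarized in Fig. 6c, d.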
While the soil-drying effects of increasing temperatures may lead to flood decreases in low-elevation catchments, they can also lead to flood increases in high-elevation catchments (particularly in winter). This effect arises largely from the phase change of precipitation, which falls increasingly as rain rather than snow in a warming climate 47 , and which has been directly linked with an increase in flood magnitude in such regions 23 . Interestingly, the positive association between temperature and flood magnitude at high elevations exists not only for moderate events, but also for very extreme events. Our analysis of future changes in flood driver importance further shows that the future relevance of precipitation as a flood driver increases for severe events while the importance of temperature increases for moderate but decreases for severe extremes (Fig. 6c–d ). This may potentially be understood in the context of soil saturation as a modulating factor: for typical and even moderate events, antecedent soil-drying and snowpack losses resulting from warming temperatures oppose the effect of increasingly extreme precipitation volume; but for sufficiently severe precipitation events, the extremely large volume of water entering the system may be able to quickly saturate the soil column and overcome even a substantial degree of antecedent soil-drying. In addition, increasingly extreme precipitation may lead to infiltration excess even in the case when soils are not yet saturated. Collectively, these findings support the following generalization: the more extreme a flood event, the more important precipitation becomes as a singular driver – particularly in a warmer future climate. Discussion In this work, we demonstrate for hydrological Bavaria that there is an extremeness or return interval threshold, which varies by catchment, season, and event type, above which extreme precipitation increases outweigh the soil-drying effects of warming temperatures. This result suggests that in other regions around the globe with similar hydro-climates, i.e. temperate climates with pluvial or nival flow regimes, flood risk in a warming climate may also exhibit divergent changes above and below some locally-defined extremeness or return interval threshold. We further find that the hydrologic response to extreme precipitation varies predictably as a function of event magnitude in a warming climate, with streamflow responses becoming increasingly positive even in the few study catchments which do not exhibit distinct threshold behavior. This, when viewed in the context of prior research, may offer evidence for the broader geographic generalizability of our findings. We find that increases in precipitation yield larger and more consistent increases in flood magnitude for more extreme versus more moderate events, which is supported by previous observational studies showing only weak dependence between extreme precipitation and moderate flood occurrence in the United States 10 , stronger increasing flood trends for extreme than moderate floods in Central Europe 27 , and trends in extreme discharge that only align with trends in floods for the rarest events in Australian catchments 25 .
Thus, there does appear to be a growing body of real-world evidence suggestive of the existence of a precipitation-flood response threshold across a wider range of hydroclimatic and hydrologic regimes than explicitly considered in the present study. The complex influences of elevation, season, and event type upon the return interval threshold suggest that the location of this critical cross-over point may vary considerably across regions of the world with varying topography and background climate. Substantial modulation of this threshold would likely occur depending on climatic factors such as aridity and the local relevance of snowmelt, catchment size, and land use and management. Consider, for example, a semi-arid or subtropical regime (as opposed to the moist mid-latitude regime that characterizes the catchments in the present study). In such a location, the return interval threshold might be higher due to drier antecedent soil conditions, a temperature-related phenomenon we also see when comparing seasonally varying summer with winter thresholds (Figs. 3 , 4 ). The existence of a high return interval threshold in drier Mediterranean regions is supported by observation-based studies that have demonstrated a stronger relationship between precipitation and discharge for larger versus smaller flood events in Spain 52 , and have shown decreases in the occurrence of moderate floods in southern Europe 9 , 27 . In contrast, if we consider cold high-latitude regions and/or high-altitude regions with a snow-dominant precipitation regime, the return interval threshold might be expected to be much lower. Indeed, this relationship is apparent from our threshold analysis for snow-influenced events in high-elevation regions (Figs. 3 , 4 ). Additionally, and as suggested by our results (Supplementary Figure 3 ), the return interval threshold may also be modulated by catchment area (generally increasing with catchment size). For river basins larger than the ones included in our Bavarian selection, this finding would imply return interval thresholds higher than 20–50 years. Furthermore, direct human influence on streamflow, such as dynamic reservoir operations and/or flood management interventions, might lead to higher return interval thresholds because smaller floods can be buffered by temporary water storage 53 . In contrast, urbanized catchments (characterized by a high fraction of water-impervious surfaces) might have lower return interval thresholds than catchments with unsealed surfaces because of a more direct relationship between extreme precipitation and flood response 54 , 55 . Such a return interval threshold might even vary from year to year in a single location, occurring at a higher level of event extremeness during drought versus pluvial periods. How exactly such a return interval threshold varies for different hydro-climates remains to be investigated using a global hydro-SMILE. Creating such a global hydro-SMILE for flood analyses requires the combination of a globally downscaled and bias-corrected atmospheric SMILE with a global hydrological model specifically calibrated for flood peaks. Satisfactory calibration for far-from-mean-state conditions is challenging using calibration metrics commonly used for large-scale model calibration 56 , and data storage and computational costs are high at a global scale when a large spatial domain is combined with a large ensemble size.
In addition, global-scale models may not represent complex land surface processes as accurately as smaller-scale models, and appropriate reference datasets for meteorology, soils, and hydrogeology are harder to obtain. Creating such a global hydro-SMILE therefore remains a considerable research effort, but one of substantial importance in a warming climate. There are two important implications arising from the existence of a return interval threshold above which increases in precipitation directly translate to increases in flood occurrence. First, the existence of this threshold suggests that previous studies that focused on less extreme floods, which have shown little change or even decreases in annual streamflow maxima or events with return intervals of less than ~20 years 57 , 58 , will likely be unrepresentative of changes in higher-magnitude events. A robust statistical signal is unlikely to arise in most historical datasets shorter than 100 years because the strongest link between increasing extreme precipitation and flood magnitude occurs for rare, high-magnitude events with return intervals exceeding 20–50 years. This result points to an important limitation of observation-only studies, as well as to the critical importance of large modeling ensembles that can yield larger sample sizes for rare, high-magnitude events. Second, our analysis suggests that despite historical uncertainties, large increases in flood magnitude are likely in a warming climate for the very largest events, potentially including those unprecedented in the modern historical record (i.e., events with 200-year RI, Fig. 1 ). The fact that climate warming may act to decrease the magnitude of more moderate flood events while simultaneously increasing the magnitude of the most extreme events, however, highlights the considerable risk of developing a “false sense of security” based on recent historical experience. These findings therefore have major implications for climate adaptation and flood risk mitigation activities, as well as infrastructure design, in a warming climate. Ultimately, we suggest that this analysis may help reconcile seemingly conflicting perspectives in the climatological and hydrological literature on flood risk in a warming climate. The apparent “precipitation-flood” paradox, whereby precipitation extremes have increased but floods have not 5 , 24 , may in fact be fully resolved by separating flood events by their extremeness. In this sense, both perspectives may ultimately be correct: hydrologic evidence from observational records of limited length, suggesting no consistent increase in recent flood magnitude because of land surface drying and the changing role of snow 9 , 59 , 60 , is physically consistent with climatological arguments pointing to a large increase in the magnitude and frequency of historically rare or unprecedented precipitation events and subsequent flood risk 61 , 62 . Future research aimed at expanding the coverage of the regional hydro-SMILE approach to a wider range of hydrologic and climatological regimes will be critical in confirming the broader generalizability of our findings in the present study, but emerging observational evidence does suggest that threshold behavior in precipitation-flood response is plausible across a wide range of regimes in a warming climate 9 , 10 , 25 , 52 .
In this work, we confirm that antecedent land surface conditions are indeed critical in modulating more common or moderate flood events, but that precipitation becomes the dominant driver for very extreme events and ultimately overwhelms the effects of soil moisture or snowpack. Finally, we emphasize that the inherent limitations of the historical observational record can be overcome through the use of a climate model large ensemble approach in combination with an advanced hydrological model, a framework that might be useful for more broadly assessing complex and possibly non-linear changes in extreme events in the warming Earth system. Methods Study region We study the relationship between extreme precipitation and flood events and its influencing factors in a warming climate for a set of 78 catchments with nearly natural flow conditions in Hydrological Bavaria (Supplementary Figure 1 ). This region comprises the Main, Danube, and Inn rivers with their major tributaries. This study region is particularly well suited to analyze variations in the precipitation–discharge ( P − Q ) relationship because the constituent catchments are characterized by diverse topographic and climatic conditions, ranging from a wet alpine region in the south (1700 mm y −1 ) to a relatively flat and dry foreland in the north (700 mm y −1 ), and diverse soil types and land uses. The variations in these conditions lead to a wide range of hydrologic regimes, ranging from snow-influenced regimes with flood peaks in spring and summer to primarily rainfall-influenced regimes with the main flood season in winter. While these regime types can be considered representative of the temperate climate zone with similar runoff regimes (pluvial to nival), our catchment selection does not cover other climate zones such as cold climates, semi-arid to arid regions, and the tropics. Hydro-SMILE For this analysis, we use a hydro-SMILE, i.e. hydrological simulations obtained by driving a hydrological model with climate input from a Single Model Initial-Condition Large Ensemble (SMILE). The underlying simulations were originally generated by Willkofer et al. 40 as part of the ClimEx project 41 . The simulations consist of daily streamflow (mm d −1 ), snow-water-equivalents (SWE, mm), and soil moisture (%), all of which were obtained by driving the hydrological model WaSiM-ETH 42 with a 50-member ensemble of high-resolution climate input (spatial: 500 × 500 m 2 , temporal: 3 h). The climate input consists of an ensemble provided through the Canadian Regional Climate Model version 5 nested with the Canadian Earth System Model 63 under RCP 8.5 64 , a ’high-warming’ climate scenario. WaSiM-ETH is a distributed, mainly physically based hydrological model comprising modules for evapotranspiration, interception, snow accumulation and melt, glaciers, runoff generation, soil water storage, and discharge routing 42 . The model was set up for 98 catchments in Hydrological Bavaria by Willkofer et al. 40 using spatial information on elevation, slope, and exposition derived from a digital elevation model for Europe (EU-DEM 65 ), land use derived from the CORINE land cover dataset 66 , soil characteristics derived from the European soil database (ESDB v2.0 67 ), and hydro-geology (hydraulic conductivity) derived from the Bavarian hydrogeology map 68 and the international hydrogeological map of Europe (IHME1500 v1.1 69 ) to define global model parameters (i.e.
parameters applied to the 98 catchments) describing evapotranspiration rates, infiltration rates, groundwater fluxes, snowmelt, and glacier dynamics, and by calibrating four parameters, i.e. those related to recession and direct flow. These local parameters were calibrated for the period 2004–2010 using the dynamically dimensioned search algorithm 70 on the observed 3 h discharge of the 98 catchments provided by the Bavarian Environment Agency (Bayerisches Landesamt für Umwelt, LfU 71 ) and sub-daily observed interpolated meteorological input (i.e. precipitation, temperature, relative humidity, incoming shortwave radiation, and wind speed). The meteorological Sub-Daily Climatological REFerence dataset (SDCLIREF) created in the ClimEx project is based on a combination of hourly and disaggregated daily station data. To obtain the disaggregated daily station data, the method of fragments 72 was used to extend the sub-daily record to 1981–2010 and to densify the station network. The station data were then interpolated to a 500 × 500 m 2 grid using a combination of multiple linear regression, considering elevation, exposition, latitude, and longitude, and inverse distance weighting, similar to Rauthe et al. 73 . The dynamically dimensioned search algorithm used a multi-objective function targeted at optimizing flood characteristics, composed of the Nash–Sutcliffe efficiency ( E NS 74 ) and the Kling–Gupta efficiency ( E KG 75 ), which both focus on high flows 76 , the log( E NS ), which emphasizes low flows, and the root-mean-squared error to standard deviation ratio ( R SR ), which quantifies volume errors. The overall objective function assigns most of its weight to the metrics E NS and E KG because our study focuses on flood events: $$M = 0.5\times(1 - E_{\mathrm{NS}}) + 0.25\times(1 - E_{\mathrm{KG}}) + 0.15\times(1 - \log(E_{\mathrm{NS}})) + 0.1\times R_{\mathrm{SR}}.$$ (1) The calibrated model was first run for a reference period 1981–2010 with the sub-daily (3 h) observed interpolated meteorological input also used for model calibration. After running the model for the reference period, it was run for a simulation period 1961–2099 with meteorological data derived from the fifth-generation Canadian Regional Climate Model large ensemble (CRCM5-LE; 50 members) 41 , consisting of a dynamically downscaled version (0.11°; 12 km) of the second-generation Canadian Earth System Model large ensemble (CanESM2-LE) 77 . The CRCM5-LE data were further bias-corrected using a quantile mapping approach 78 , 79 adjusted to sub-daily time steps, with the SDCLIREF as the reference climatology (1981–2010). Correction factors were determined for each quantile bin for each month and sub-daily time step. To preserve the ensemble spread, all members were pooled to obtain the correction factors, and these factors were subsequently applied to each ensemble member separately. The bias-corrected data were then further downscaled to 500 × 500 m 2 spatial resolution. The center point of each 0.11° CRCM5-LE grid cell was treated as a virtual meteorological station, and for each time step the anomaly from the mean state was interpolated to the 500 × 500 m 2 grid using inverse distance weighting. The interpolated anomalies were then multiplied with or added to the climatological reference fields from the SDCLIREF. Afterwards, the downscaled data were corrected in order to ensure the conservation of mass for each downscaled 0.11° grid cell.
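As a concrete reading of equation (1), here is a minimal Python sketch of the calibration objective under stated assumptions: the component metrics follow their standard hydrological definitions, and the log( E NS ) term is interpreted as the NSE computed on log-transformed flows (matching the stated low-flow emphasis), which is our assumption about the notation rather than a detail spelled out above.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency E_NS."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(sim, obs):
    """Kling-Gupta efficiency E_KG (2009 formulation)."""
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()    # variability ratio
    beta = sim.mean() / obs.mean()   # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def rsr(sim, obs):
    """Root-mean-squared error divided by the observations' standard deviation."""
    return np.sqrt(np.mean((sim - obs) ** 2)) / obs.std()

def objective(sim, obs, eps=0.01):
    """Multi-objective function M of equation (1); lower values are better."""
    log_nse = nse(np.log(sim + eps), np.log(obs + eps))  # low-flow emphasis
    return (0.5 * (1 - nse(sim, obs))
            + 0.25 * (1 - kge(sim, obs))
            + 0.15 * (1 - log_nse)
            + 0.1 * rsr(sim, obs))
```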
Previous studies have demonstrated that the CRCM5-LE (1) shows realistic patterns of daily and sub-daily extreme precipitation 80 and of the timing of annual maximum precipitation over Central Europe 81 ; (2) has a high resolution that allows for a realistic representation of local precipitation extremes, especially over coastal and mountainous regions 41 ; (3) is consistent with the EURO-CORDEX ensemble 82 ; and (4) compares well to other large ensembles with respect to regional precipitation pattern changes 81 . For the subsequent analyses of extreme precipitation and flood events, the 3 h meteorological and streamflow time series were aggregated to a daily scale and averaged over each catchment. Hydrological model evaluation Here, we evaluate the hydrological model for the 78 catchments used in this study for the reference period 1981–2010 using observed daily streamflow from the hydrological services of Bavaria and Baden-Württemberg (both in Germany), Austria, and Switzerland. The evaluation relies on a set of measures including visual inspection, general efficiency metrics, and the flood characteristics of events determined using a peak-over-threshold approach with the 98th flow percentile as a threshold and a minimum time lag of 10 days between successive events to ensure independence. The general efficiency metrics considered are the Kling–Gupta efficiency 75 , Nash–Sutcliffe efficiency 74 , volumetric efficiency, and mean absolute error, four metrics often used in flood simulation studies. The flood characteristics considered are the number of events, mean timing (day of the year), mean peak magnitude (mm d −1 ), mean volume (mm event −1 ), mean duration (days), and P − Q dependence. The start and end of an event are determined as the times when discharge rises above and falls below the threshold, respectively; event duration is defined as the time elapsing between the start and end of an event, and the volume as the cumulative flow exceeding the threshold over the whole event duration. The model shows satisfactory performance qualitatively and quantitatively using general and flood-specific evaluation metrics (Supplementary Figure 4 ). Kling–Gupta efficiencies ranged from 0.67 (first quartile) to 0.85 (third quartile), Nash–Sutcliffe efficiencies from 0.56 to 0.8, and volumetric efficiencies from 0.68 to 0.8 (first to third quartiles). The mean absolute error was 0.35 mm d −1 (Supplementary Figure 4a ). The flood-specific performance evaluation showed a slight underestimation of the number of events (relative error: 1st quartile: −0.14, median: −0.06, 3rd quartile: 0.07), a slight delay in the timing of flood occurrence (relative error: 1st quartile: −0.01, median: 0.05, 3rd quartile: 0.11), a slight overestimation of flood peaks (relative error: 1st quartile: −0.02, median: 0.07, 3rd quartile: 0.22), an overestimation of both flood volume (relative error: 1st quartile: 0.08, median: 0.32, 3rd quartile: 0.54) and duration (relative error: 1st quartile: −0.02, median: −0.14, 3rd quartile: 0.39), and an underestimation of P − Q dependence (relative error: 1st quartile: −0.35, median: −0.24, 3rd quartile: −0.04) (Supplementary Figure 4b ). Overall, the model performance with respect to high flows and flooding is satisfactory.
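The peak-over-threshold event definition above maps directly onto code. The sketch below, an illustration rather than the authors' implementation, extracts events from a daily discharge series using the 98th percentile threshold, merges events closer than the 10-day independence gap as a simple declustering step, and returns the peak, duration, and above-threshold volume per event.

```python
import numpy as np

def pot_events(q, threshold_quantile=0.98, min_gap_days=10):
    """Peak-over-threshold flood events from a daily discharge series (mm/d).
    Returns one (peak, duration, volume) tuple per independent event."""
    thr = np.quantile(q, threshold_quantile)
    above = q > thr
    events, start = [], None
    for t, flag in enumerate(above):
        if flag and start is None:
            start = t                       # discharge rises above threshold
        elif not flag and start is not None:
            events.append((start, t - 1))   # discharge falls below threshold
            start = None
    if start is not None:
        events.append((start, len(q) - 1))
    # Decluster: merge events separated by fewer than min_gap_days.
    merged = []
    for s, e in events:
        if merged and s - merged[-1][1] < min_gap_days:
            merged[-1] = (merged[-1][0], e)
        else:
            merged.append((s, e))
    return [(q[s:e + 1].max(),              # peak magnitude (mm/d)
             e - s + 1,                     # duration (days)
             np.sum(q[s:e + 1] - thr))      # volume above threshold
            for s, e in merged]
```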
In addition, the results of our change impact assessment are less affected by inconsistencies between observed and simulated flow because we assess relative rather than absolute changes in precipitation and flood magnitudes. Event identification Using the daily streamflow simulations from the 50 members of the hydro-SMILE, we identify pairs of extreme precipitation (i.e. areal sum over the catchment) and corresponding streamflow for two non-overlapping periods of 40 years, a historical (1961–2000) and a warmer future period (2060–2099). Periods of 40 years were chosen to maximize the sample size while ensuring that the two periods are as distinct as possible. To identify these P − Q pairs, we first define daily extreme precipitation events (mm d −1 ) using the 99th percentile (determined on all days, including zero-precipitation days, using the full time series 1961–2099) as a threshold and by prescribing a minimum time lag of 10 days between events in order to ensure independence (i.e. to enable declustering). This event extraction procedure results in roughly 2–2.5 events chosen per year on average, depending on the catchment. Over the 2000 model years of data per time period (40 years across 50 ensemble members), we select approximately 5000 extreme events per catchment. The start of each precipitation event is defined as the day when precipitation exceeds 1 mm prior to the first threshold exceedance, and the end of each precipitation event is defined as the time when precipitation falls below 1 mm after the final threshold exceedance (for an illustration of the event identification procedure, see Supplementary Figure 5 ). Next, for each precipitation event, we identify the corresponding streamflow peak (mm d −1 ) within a time window from the start of the precipitation event to 5 days after the end of the precipitation event. Finally, for each event, we determine temperature (°C) on the day of peak precipitation and snow-water-equivalent (mm) and soil moisture anomalies (deviation from the mean, in percent) on the day prior to the occurrence of the precipitation extreme. We repeat this event extraction procedure for two additional temporal aggregation levels (3-day and 5-day mean precipitation accumulations) in order to assess the effect of precipitation aggregation on future precipitation and discharge changes, because event identification using different aggregation levels results in the extraction of different event sets. Changes in event magnitudes and P − Q relationship In the first part of our analysis, we use the P − Q event pairs identified to analyze how precipitation and corresponding flood magnitudes, as well as the relationship between the two variables, may change in the future. To do so, we compare the statistical characteristics of these variables for the future period (2060–2099) to the characteristics of the historical period (1961–2000). P and Q magnitudes are determined empirically by pooling events extracted from the 50 ensemble members for different levels of extremeness, i.e. ’mean’ events (those which occur, on average, once or twice per year) and progressively more extreme events with 10-, 20-, 50-, 100-, and 200-year return intervals, respectively. Sample quantiles are computed for probabilities corresponding to different return periods T using: $$p = 1 - (\mu / T),$$ (2) where μ is the mean inter-arrival time between events.
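As a worked example of equation (2): pooling ~5000 events over the 2000 model years gives a mean inter-arrival time of μ = 2000/5000 = 0.4 years, so the 100-year return level is the empirical quantile at p = 1 − 0.4/100 = 0.996. A minimal Python sketch with hypothetical variable names follows.

```python
import numpy as np

def return_levels(peaks, n_years, return_periods=(10, 20, 50, 100, 200)):
    """Empirical return levels from pooled event peaks using p = 1 - mu/T,
    where mu is the mean inter-arrival time between events in years
    (n_years divided by the number of events)."""
    peaks = np.asarray(peaks)
    mu = n_years / len(peaks)        # mean inter-arrival time (years)
    levels = {}
    for T in return_periods:
        p = 1.0 - mu / T             # equation (2)
        levels[T] = np.quantile(peaks, p)
    return levels

# e.g. ~5000 pooled event peaks over 2000 model years (50 members x 40 years):
# return_levels(peaks, n_years=2000) yields the 10- to 200-year flood levels.
```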
The P − Q relationship is characterized using different dependence measures, including Pearson’s correlation coefficient and the tail dependence coefficient \(\overline{\chi }\) 51 , which provides a simple measure of extremal dependence, at different levels of extremeness (i.e. probabilities corresponding to return intervals of 10, 20, 50, 100, and 200 years). Future changes are expressed as relative changes with respect to the characteristics of the historical period. We identify factors potentially influencing the nature of change in P − Q magnitudes and relationship by looking at different levels of extremeness (i.e. return intervals), small and large catchments, high- and low-elevation catchments, winter and summer events, and snow-influenced and rainfall-driven events. The levels of extremeness considered for both P and Q are the mean and quantiles corresponding to return intervals of 10, 20, 50, 100, and 200 years. Within the 2000 model years available for analysis, roughly 10 events have a return interval of 200 years, while roughly 200 events have a return interval of 10 years in each catchment. Small to medium-size catchments are distinguished from large catchments by setting an area threshold of 1000 km 2 83 , which results in 21 small and 57 large catchments. Similarly, low-elevation catchments are separated from high-elevation catchments using an elevation threshold of 1000 m above sea level 84 , which results in 55 low-elevation catchments and 23 high-elevation catchments. Winter events are defined as those events happening between October and March, and summer events as those events occurring between April and September. Our results are not sensitive to the use of an alternative seasonal definition aligning with the start of the hydrological year (Nov–Apr, May–Oct). Throughout the analysis, snow-influenced events are defined as those events during which there was at least 10 mm of SWE, while rainfall-driven events are those with less than 10 mm of SWE 47 . Importance of hydro-meteorological drivers In the second part of the analysis, we identify potential hydro-meteorological drivers influencing extreme precipitation and flood magnitudes and their statistical relationships. A comparison of driver importance for the two periods (historical and future) allows us to identify drivers losing or gaining importance in the future. For both periods, we fit multiple linear models to flood magnitudes (mean or quantiles for the 78 catchments) using four explanatory variables, all of which exhibit only weak collinearity according to the variance inflation factor, which lies around 1–2 for most variables and does not exceed 4 in most cases. The explanatory variables include mean event precipitation for each catchment (i.e. mean precipitation for the extreme events identified), mean event temperature, mean event SWE, and mean event soil moisture anomaly. Both flood magnitudes and the explanatory variables are standardized prior to model fitting by subtracting the mean and dividing by the standard deviation (z-scores) in order to make the resulting regression coefficients inter-comparable and easily interpretable. Comparing the regression coefficients of the future model to the coefficients of the historical model (absolute changes) enables quantification of changes in future driver importance.
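The tail dependence of the P − Q pairs can be estimated from rank-transformed margins. The sketch below uses one common empirical estimator of \(\overline{\chi }\) at a fixed probability level; it is offered as an illustrative assumption-laden sketch rather than the exact estimator of ref. 51, and the variable names are hypothetical.

```python
import numpy as np
from scipy.stats import rankdata

def chi_bar(x, y, u=0.98):
    """Rank-based empirical estimate of the tail dependence coefficient
    chi-bar at probability level u:
        chi_bar(u) = 2 * log P(U > u) / log P(U > u, V > u) - 1,
    where U, V are the margins transformed to uniform via ranks.
    Values approaching 1 indicate asymptotic dependence of the extremes;
    values near 0 indicate near-independence in the tail."""
    n = len(x)
    U = rankdata(x) / (n + 1.0)
    V = rankdata(y) / (n + 1.0)
    p_joint = np.mean((U > u) & (V > u))
    if p_joint == 0.0:
        return np.nan  # no joint exceedances observed at this level
    return 2.0 * np.log(1.0 - u) / np.log(p_joint) - 1.0

# Illustrative use on pooled event pairs for one catchment:
# overall dependence:  np.corrcoef(p_events, q_peaks)[0, 1]   (Pearson's r)
# extremal dependence: [chi_bar(p_events, q_peaks, u) for u in (0.9, 0.95, 0.99)]
```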
Similar to the change analysis, we also distinguish between different levels of extremeness to determine how driver importance varies for events with different return intervals (mean and 100-year event), between low- and high-elevation catchments to determine the degree to which driver importance depends on catchment elevation, and between winter and summer events to shed light on how driver dependence varies by season. Data availability The raw data of the CRCM5-LE are publicly available to the scientific community ( ). The extreme precipitation-discharge pairs generated with the hydro-SMILE and analyzed in this study are available through HydroShare: . Code availability The code used to process the data and to produce the figures can be requested from the first author.
Climate change will lead to more and stronger floods, mainly due to the increase in intense heavy rainfall. In order to assess exactly how flood risks and the severity of floods will change over time, it is particularly helpful to consider two different types of such extreme precipitation events: weaker and stronger ones. An international group of scientists led by Dr. Manuela Brunner from the Institute of Earth and Environmental Sciences at the University of Freiburg and Prof. Dr. Ralf Ludwig from the Ludwig-Maximilians-Universität München (LMU) has now shed light on this aspect, which has been little researched to date. They found that the weaker and at the same time more frequent extreme precipitation events (occurring on average every 2 to 10 years) are increasing in frequency and quantity, but do not necessarily lead to flooding. In some places, climate change may even reduce the risk of flooding due to drier soils. Likewise, the more severe and at the same time less frequent extreme precipitation events (occurring on average less often than every 50 years, such as the one in the Eifel in July 2021) are increasing in frequency and quantity, but unlike the weaker events they also generally lead to more frequent flooding. The team published the results of their study in the journal Communications Earth & Environment. In some places, climate change leads to lower flood risk "During stronger and at the same time rarer extreme precipitation events, such large amounts of rainfall hit the ground that its current condition has little influence on whether flooding will occur," explains Manuela Brunner. "Its capacity to absorb water is exhausted relatively quickly, and from then on the rain runs off over the surface, thus flooding the landscape. It's a different story for the weaker and more frequent extreme precipitation events," says Brunner. "Here, the current soil conditions are crucial. If the soil is dry, it can absorb a lot of water and the risk of flooding is low. However, if there is already high soil moisture, flooding can occur here as well." So, as climate change causes many soils to become drier, the flood risk there may decrease for the weaker, more frequent extreme precipitation events, but not for the rare, even more severe ones. Heavy rainfall will generally increase in Bavaria In the specific example of Bavaria, the scientists also predict how much more numerous the different extreme precipitation events there will become. Weaker precipitation events, which occurred on average every 50 years from 1961 to 2000, will occur twice as often in the period from 2060 to 2099. Stronger ones, which occurred on average about every 200 years from 1961 to 2000, will occur up to four times more frequently in the future. "Previous studies have shown that precipitation will increase due to climate change, but the correlation between flood intensities and heavier precipitation events has not yet been sufficiently investigated. That's where we started," explains Manuela Brunner. Ralf Ludwig adds, "With the help of our unique dataset, this study provides an important building block for an urgently needed, better understanding of the very complex relationship between heavy precipitation and runoff extremes." This could also help to improve flood forecasts. 78 areas investigated In its analysis, the team identified so-called frequency thresholds in the relationship between future precipitation increase and flood rise for the majority of the 78 headwater catchments studied in the region around the Inn, Danube and Main rivers.
These site-specific values describe which extreme precipitation events, classified by their frequency of occurrence, are also likely to lead to devastating floods, such as the one in July 2021 in the Eifel region. For its study, the research team generated a large ensemble of data by coupling, for the first time, hydrological simulations for Bavaria with a large ensemble of climate model simulations. The model chain was applied to historical (1961–2000) and warmer future (2060–2099) climate conditions for 78 river basins. "The region around the headwater catchments of the Inn, Danube, and Main rivers is an area with pronounced hydrological heterogeneity. As a result, we consider a wide variety of hydroclimates, soil types, land uses and runoff pathways in our study," says Brunner.
10.1038/s43247-021-00248-x
Biology
Unexpected functions of the spinal locomotor network
Weipang Chang et al, Locomotion dependent neuron-glia interactions control neurogenesis and regeneration in the adult zebrafish spinal cord, Nature Communications (2021). DOI: 10.1038/s41467-021-25052-1 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-021-25052-1
https://phys.org/news/2021-08-unexpected-functions-spinal-locomotor-network.html
Abstract Physical exercise stimulates adult neurogenesis, yet the underlying mechanisms remain poorly understood. A fundamental component of the innate neuroregenerative capacity of zebrafish is the proliferative and neurogenic ability of the neural stem/progenitor cells. Here, we show that in the intact spinal cord this plasticity response can be activated by physical exercise, demonstrating that cholinergic neurotransmission from spinal locomotor neurons activates spinal neural stem/progenitor cells and leads to neurogenesis in the adult zebrafish. We also show that GABA acts in a non-synaptic fashion to maintain neural stem/progenitor cell quiescence in the spinal cord and that training-induced activation of neurogenesis requires a reduction of GABA A receptors. Furthermore, both pharmacological stimulation of cholinergic receptors and interference with GABAergic signaling promote functional recovery after spinal cord injury. Our findings provide a model for locomotor networks’ activity-dependent neurogenesis during homeostasis and regeneration in the adult zebrafish spinal cord. Introduction Neurotransmitter signaling is traditionally associated with communication between neurons. However, several reports suggest that neurotransmitters also influence critical aspects of neurogenesis, including proliferation, migration, and differentiation, under both physiological and pathological conditions 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 . The association between neurotransmitter signaling and neurogenesis appears to be primarily dependent on transmitter receptors that are not confined to neurons. Such receptors are now known to be expressed on diverse cell types in the central nervous system, including stem and progenitor cells 4 , 13 . Therefore, neuronal network activity can directly affect neurogenesis 8 , 14 . Previous studies highlighted a link between neurogenesis and neurotransmission, showing the direct effects of cholinergic and GABAergic signaling in the modulation of the stem/progenitor cells in the mammalian hippocampus and spinal cord 3 , 8 , 13 , 15 , 16 , 17 , yet it remains unclear how neuronal activity is linked to neurogenic activity in the adult spinal cord. Hence, we hypothesized that prolonged spinal network activity, after training, could stimulate the animal growth rate by engaging the spinal proliferative and neurogenic programs. In the early development of the vertebrate spinal cord, all neurons follow a specific genetic program that defines their identities and assigns them a specific neurotransmitter phenotype 18 . Spinal neurons are organized into distinct networks that integrate and process sensory and motor-related information important for various movements 19 , 20 , 21 . Among the spinal networks, the central pattern generators (CPGs) function as local “control and command” centers that are essential for generating the rhythmicity and coordination required for muscle activity during locomotion 19 , 20 , 21 . At the level of spinal locomotor circuits, several classes of premotor interneurons use specific neurotransmitters, including glutamate, γ-aminobutyric acid (GABA), glycine, and acetylcholine (ACh), to mediate their functions 22 . However, it is unknown whether these neurotransmitters released during locomotion can directly affect the neural stem/progenitor cells (NSPCs) within the spinal cord. If so, identifying neurotransmitters with neurogenic potential could expose the neurons that control these processes.
Therefore, neurotransmitter signaling may play an essential activity-dependent role in regulating and fine-tuning adult spinal cord neurogenesis. To determine whether physical activity can induce spinal cord neurogenesis, we applied an array of anatomical, pharmacological, electrophysiological, and behavioral approaches in adult zebrafish. Our data demonstrate that cholinergic (synaptic) and GABAergic (non-synaptic) neurotransmission regulates the activity of the NSPCs in opposite manners. We show that among spinal interneurons, it is the locomotor V2a interneurons that mediate the essential cholinergic input to NSPCs. The results demonstrate that spinal network activity plays a crucial role in modulating non-motor and non-neuronal functions in the nervous system besides generating motor behaviors. Results Physical activity induces animal growth and proliferation in the spinal cord Several studies have documented the impact of physical activity on neurogenesis in the mammalian hippocampus 11 , 23 , 24 , 25 , 26 . Unlike mammals, zebrafish retain a remarkable adult neurogenic capacity in many central nervous system areas, including the spinal cord 27 , 28 , 29 , 30 . We first tested whether physical activity leads to proliferative and neurogenic events, and assayed the global consequences for animal growth, using our recently developed forced swim protocol 31 . We observed that prolonged physical activity (>2 weeks) significantly increased animal growth (Supplementary Fig. 1 ). Combining our exercise protocol with the thymidine analog 5-bromo-2ʹ-deoxyuridine (BrdU), a marker of DNA synthesis, we observed a 3-fold increase in the number of BrdU + cells in the spinal cord after 2 weeks of training (short-term survival; Fig. 1a–c ). After a BrdU pulse, we could also trace cells that had migrated out of the proliferative central canal niche (Fig. 1a, b ). After 2 weeks of rest from the exercise, the proliferation rate dropped to the level of untrained control animals (Fig. 1a, c ), demonstrating the dynamic and reversible nature of exercise-induced proliferation. Fig. 1: Exercise-induced transient activation of the NSPCs and neurogenesis in the adult spinal cord. a Inverted confocal images from whole-mount adult zebrafish spinal cord hemisegments showing cycling (BrdU + ) cells in control animals (untrained), following 2 weeks of training, and after 2 weeks of rest after training. b Similar distribution pattern of BrdU + cells in the spinal cord comparing untrained, trained, and resting zebrafish. c Quantification of BrdU + cells per hemisegment in different conditions shows that the enhanced proliferation after training is reversible ( P = 4.418E-10). d Expression pattern of her4.1:GFP (NSPCs; green) in close apposition to the adult zebrafish spinal cord’s central canal. e The vast majority (~97.5%) of the her4.1 + cells (green) express the stem cell marker Sox2 (magenta). Arrowheads indicate double-labeled cells. f Cycling her4.1 + radial glia cells (BrdU + , magenta; GFP, green). Training increased the number of BrdU + / her4.1 + cells per hemisegment. g Quantification of the average number of BrdU + cells per spinal cord section co-expressing neuronal markers (mef-2, HuC/D, or NeuN) in untrained (control) and trained animals. h Proportions of BrdU + cells expressing neuronal or glial markers are similar comparing untrained and trained animals. Quantification is based on the early neuronal marker mef-2.
BrdU, 5-bromo-2ʹ-deoxyuridine; CC, central canal; GFP, green fluorescent protein; her4.1, hairy-related 4, tandem duplicate 1; HuC/D, elav3 + 4; mef-2, myocyte enhancer factor-2; NeuN, neuronal nuclei; NSPC, neural stem/progenitor cell; Sox2, sex-determining region Y-box 2. Data are presented as mean ± s.e.m. or as box plots showing the median with 25/75 percentile (box and line) and minimum–maximum (whiskers). ** P < 0.01; *** P < 0.001; **** P < 0.0001; ns, not significant. For detailed statistics, see Supplementary Table 1 . NSPCs in the adult spinal cord respond to physical training with increased proliferative activity Specialized glial cells line the spinal cord’s central canal, a proliferative niche harboring NSPCs in both fish and mammals 32 , 33 , 34 , 35 , 36 . In zebrafish, the calcium-binding protein calbindin (CB) selectively marks cells surrounding the spinal cord’s central canal (Supplementary Fig. 2a ) 37 . Moreover, CB is extensively colocalized with the stem cell marker Sox2 (Supplementary Fig. 2b ) but is not expressed in the GABAergic cerebrospinal fluid-contacting neurons (CSF-cNs; Supplementary Fig. 2c ) adjacent to the central canal 38 . To corroborate that CB selectively marks spinal NSPCs, we used the her4.1:GFP transgenic reporter line that marks NSPCs in the zebrafish CNS (Fig. 1d and Supplementary Fig. 2d ) 39 , 40 , 41 . We found that none of the radial glia-like GFP + cells expressed the neuronal marker HuC/D (Supplementary Fig. 2e ; HuC/D − ), that all GFP + cells were CB + (Supplementary Fig. 2f ), and that the vast majority of GFP + cells also expressed Sox2 (Fig. 1e ). Double labeling after a BrdU pulse during physical training (Fig. 1f ) showed an increased number of her4.1:GFP + BrdU + cells after training (Fig. 1f ), indicating increased NSPC proliferation in response to physical activity. Most newborn cells differentiate into neurons after physical activity Next, we sought to examine the fate of new cells in the adult spinal cord 2 weeks after BrdU treatment (Supplementary Fig. 3 ). A majority (~68%) of the BrdU + cells expressed either the early differentiation neuronal marker mef-2 or the post-mitotic pan-neuronal markers HuC/D and NeuN (Fig. 1g, h ). In contrast, a small fraction (~32%) of BrdU + cells expressed the glial marker GFAP (Supplementary Fig. 3a, b ). In both control and trained animals, the proportion of newborn cells expressing glial versus neuronal markers remained unaltered, suggesting that physical activity did not affect the differentiation fate of newborn cells (Fig. 1h and Supplementary Fig. 3b ). NSPCs receive neuronal input during locomotion Next, we examined whether the spinal locomotor network is directly implicated in the activation of the NSPCs. We performed whole-cell patch-clamp recordings from single NSPCs while recording motor nerve activity of the ipsilateral CPG in an ex vivo adult her4.1:GFP zebrafish preparation 42 , 43 (Fig. 2a ). We verified that the GFP + cells had glial physiological properties, such as a hyperpolarized resting membrane potential (−68.27 ± 0.7 mV), linear voltage–current relations, and no generation of action potentials (Fig. 2a and Supplementary Fig. 4 ). Moreover, in the absence of electrically induced fictive swimming, the NSPCs did not receive any synaptic input (Fig. 2a ).
However, after initiating fictive locomotion by electrical stimulation (10 pulses, 1 Hz) of the descending axons from the brainstem, we detected a strong periodic synaptic input in NSPCs at frequencies above 4 Hz (Fig. 2a, c ). Moreover, this input was always in phase with the CPG activity (Fig. 2b ). To further confirm that the NSPCs’ inputs were causally associated with the CPG activity, we also recorded from contralaterally located NSPCs and found that they displayed out-of-phase relations (Fig. 2b ). This differential phase-locked association between the CPG activity and the NSPC input suggested that the input was an outcome of locomotor network activity. Swimming burst frequency and strength showed no correlation with the NSPCs’ response (Fig. 2d ). Nevertheless, the duration of locomotor episodes correlated with the periodic NSPC response (Fig. 2e ). Together, these data link locomotor network and NSPC activity but reveal neither the nature of this input nor the spinal locomotor interneurons involved. Fig. 2: NSPCs receive periodic input from the locomotor network. a Images of an NSPC ( her4.1:GFP + , white arrowhead) close to the spinal cord’s central canal. Current steps do not produce action potentials in NSPCs. The ex vivo setup of the brain-spinal cord preparation allows simultaneous recordings of a spinal cord NSPC and ipsilateral motor nerves ( Nv ). Electrical stimulation (10 pulses at 1 Hz) of the descending inputs elicits a swimming episode. In the absence of swimming, NSPCs do not respond (top traces). During a fictive locomotor episode, NSPCs periodically receive strong inputs (bottom traces). b NSPC responses during the swim phase and the locomotor cycle during simultaneous ipsilateral and contralateral recordings. c Graph showing the activity of different NSPCs as a function of instantaneous swimming burst frequency. Individual data points represent instantaneous swimming frequencies of all swimming cycles where the respective NSPCs responded. d No apparent correlation between the amplitude of the periodic NSPC responses and the swimming frequency. e Correlation ( R 2 = 0.9098) between the swim duration and the detected number of inputs to NSPCs. The dashed gray line represents the baseline. NSPC, neural stem/progenitor cell. For detailed statistics, see Supplementary Table 1 . NSPCs receive synaptic cholinergic input and non-synaptic GABAergic input We performed whole-cell electrophysiological recordings from the NSPCs ( her4.1:GFP + ) upon stimulation with neurotransmitters using an intact ex vivo adult zebrafish spinal cord preparation. Among the tested neurotransmitters (glutamate, glycine, serotonin, ACh, and GABA; Supplementary Fig. 5a ), only ACh and GABA induced noticeable changes in the NSPCs (Fig. 3a, b and Supplementary Fig. 5b–h ). Bath application of ACh (100 μM or 5 mM) induced numerous excitatory postsynaptic currents (EPSCs) with variable amplitudes in a dose-dependent manner (Supplementary Fig. 5b, c ). The action potential blocker tetrodotoxin (TTX) affected neither the frequency nor the amplitude of the EPSCs, confirming the presence of ACh receptors on the NSPCs’ membrane (Supplementary Fig. 5b, c ). Next, we aimed to distinguish between muscarinic and nicotinic ACh receptors (Fig. 3a ) in this membrane-associated activity. Activation of muscarinic (muscarine, 500 μM) and nicotinic (nicotine, 100 μM) ACh receptors generated EPSCs with different frequencies and amplitudes (Fig. 3a ).
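As an aside on the phase-locking analysis above (Fig. 2b), the relation between NSPC synaptic events and the locomotor cycle can be quantified with standard circular statistics. The following Python sketch is a generic illustration under our own assumptions (hypothetical inputs; the mean resultant vector length as the phase-locking measure) and is not the authors' analysis code.

```python
import numpy as np

def event_phases(burst_onsets, event_times):
    """Phase of each synaptic event within the ongoing locomotor cycle,
    defined by successive ipsilateral motor burst onsets (0 = burst onset,
    0.5 = mid-cycle). In-phase input clusters near 0; out-of-phase input
    (e.g. from contralateral recordings) clusters near 0.5."""
    burst_onsets = np.asarray(burst_onsets)  # sorted burst onset times (s)
    phases = []
    for t in event_times:
        i = np.searchsorted(burst_onsets, t) - 1
        if 0 <= i < len(burst_onsets) - 1:
            cycle = burst_onsets[i + 1] - burst_onsets[i]
            phases.append((t - burst_onsets[i]) / cycle)
    return np.array(phases)

def mean_vector_length(phases):
    """Circular concentration (0 = uniform, 1 = perfectly phase-locked)."""
    angles = 2 * np.pi * np.asarray(phases)
    return np.abs(np.mean(np.exp(1j * angles)))
```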
Stimulation of nicotinic receptors recapitulated the ACh-induced results better than stimulation of muscarinic receptors, indicating that the cholinergic input to the NSPCs is predominantly mediated by nicotinic receptors (Fig. 3a ). Treatment with the selective α7 nicotinic ACh receptor antagonist methyllycaconitine (MLA, 10 μM) significantly reduced the recorded EPSCs (Fig. 3a ), further supporting the central role of nicotinic receptors. Next, we assessed the GABAergic responses of NSPCs (Fig. 3b and Supplementary Fig. 5d, f, h ). NSPCs responded to bath application of the neurotransmitter GABA with a prominent tonic inward activation (depolarization; Fig. 3b and Supplementary Fig. 5h ) that was insensitive to TTX (Supplementary Fig. 5d ). These GABA-mediated tonic responses were blocked entirely by the GABA A receptor antagonist gabazine (10 μM; Fig. 3b ). Applying the selective GABA A receptor agonist muscimol (15 mM; Fig. 3b ) likewise generated tonic activation. Fig. 3: NSPCs respond to synaptic cholinergic input from the locomotor network and to non-synaptic GABAergic signaling. a Bath application of ACh induced inward currents in all recorded NSPCs (20 out of 20). Sample traces of the muscarine- and nicotine-induced inward currents in recorded NSPCs. Significant reduction of the induced ACh currents in the presence of the α7 nicotinic receptor antagonist MLA (10 μM). Quantification of the frequency (Hz; P = 3.237E-5) and amplitude (pA; P < 0.0001) of the recorded cholinergic currents. b Exogenous application of GABA induced tonic activation of NSPCs (22 out of 22). GABAergic tonic responses were completely abolished in the presence of the GABA A receptor antagonist, gabazine (10 μM). Exogenous application of muscimol (GABA A receptor agonist) induced tonic activation of NSPCs. Quantification of the amplitude (pA; P = 7.904E-8) and duration (s; P = 1.141E-7) of the GABA-related responses. c Schematic protocol for NSPC recordings during local electrical stimulation. Ten pulses (20 Hz) were applied to increase the probability of presynaptic release. Superimposed representative sample trace (in red) out of >40 sweeps (in black) from NSPC responses under control conditions, following application of the polysynaptic blocker mephenesin, and after application of the selective nicotinic receptor antagonist MLA, suggesting synaptic cholinergic, but not GABAergic, activation of the NSPCs. Quantification of the average number of detected EPSCs per sweep and the average amplitude of the responses in control and after the application of the polysynaptic blocker (mephenesin). d Application of MLA during locomotion abolishes the regular and strong input to NSPCs, implying a predominant role of nicotinic receptors. ACh, acetylcholine; GABA, γ-aminobutyric acid; MLA, methyllycaconitine; NSPC, neural stem/progenitor cell; Nv , motor nerve recording. The dashed gray line represents the baseline. Data are presented as mean ± s.e.m. and as violin plots. *** P < 0.001; **** P < 0.0001; ns, not significant. For detailed statistics, see Supplementary Table 1 . Next, we examined the nature of the cholinergic and GABAergic transmission to NSPCs. We applied electrical stimulation of the spinal cord to depolarize all neurons, thereby increasing neurotransmitter release (Fig. 3c ).
Following the electrical stimulation, we observed that the NSPCs received solely monosynaptic cholinergic inputs, as these inputs were unaffected (in the number and amplitude of recorded EPSCs) by the presence of the polysynaptic blocker mephenesin (Fig. 3c ), while activity was entirely blocked by the selective α7 nicotinic ACh receptor antagonist MLA (Fig. 3c ). Because the stimulation evoked no GABAergic synaptic currents, these data suggest that the GABAergic signaling to NSPCs is non-synaptic, as previously found in the mammalian brain 3 , 8 , 44 . Furthermore, we observed that MLA abolished the regular strong input to the NSPCs during fictive locomotion, indicating that the locomotion-induced signal is solely cholinergic and mediated through α7 nicotinic receptors (Fig. 3d ). Premotor V2a interneurons mediate the essential cholinergic input to NSPCs during locomotion To gain insight into the potential sources of ACh to NSPCs, we focused on spinal cholinergic interneurons 45 . Cholinergic V2a interneurons (INs) 22 (Fig. 4a ) are the principal components of the locomotor CPG 19 , 20 , 21 , 43 , 46 , 47 . Anatomical analysis revealed close appositions of V2a-IN axonal collaterals ( Chx10:GFP + ) to the central canal area (Fig. 4b, c ), and we identified that ~40% of the NSPCs (CB + ) were in close proximity, likely synaptic contact, with the V2a-INs (GFP + ; Fig. 4b ). To functionally test whether V2a-INs could provide cholinergic input to NSPCs, we performed paired recordings within the same segment (Fig. 4f ). We observed that a train of action potentials elicited in V2a-INs failed to induce any postsynaptic responses in NSPCs (intra-segmental: 0 out of 15 pairs; Fig. 4f ). However, V2a-INs are long ipsilateral descending neurons, and we observed that all (25 out of 25; n = 8 zebrafish) long descending (>10 segments) spinal cholinergic neurons in zebrafish were indeed V2a-INs (GFP + ; Fig. 4d ). Moreover, we identified that the descending cholinergic V2a-INs (~3/hemisegment) have medium-to-large body size and a specific dorsomedial location in the spinal cord (Fig. 4e ). Paired recordings obtained from distal segments (~5–7 segments apart; inter-segmental) revealed that action potentials in V2a-INs induced vigorous small-amplitude EPSCs in NSPCs in 22% of the cases (inter-segmental: 11 out of 50 pairs; Fig. 4f, g ). The recorded EPSCs were resistant to mephenesin (1 mM), a pharmacological agent shown to act as a potential polysynaptic transmission blocker in the mammalian spinal cord 48 (Fig. 4f and Supplementary Fig. 6 ). Yet, the observed changes in the duration of the recorded EPSCs after the application of mephenesin suggested that the interaction between V2a-INs and NSPCs comprises both monosynaptic and polysynaptic inputs (Supplementary Fig. 6 ). To further determine whether these responses are cholinergic, we applied the nicotinic receptor antagonist MLA and observed that it completely abolished the monosynaptic EPSCs in NSPCs (Fig. 4f ). We noticed that the transmission between V2a-INs and NSPCs exhibited partial and complete failures in ~20% of the cases during train stimulation (Supplementary Fig. 7 ), suggesting that nicotinic ACh receptors might undergo desensitization, a potential mechanism to control synaptic efficacy 49 , 50 . Next, we asked whether V2a-INs release cholinergic input to NSPCs during locomotion. Simultaneous recordings from connected pairs of V2a-INs and NSPCs revealed that while V2a-INs discharged rhythmically during swimming, NSPCs only occasionally received this cholinergic input (Fig. 4h ), as seen before (Fig. 2a ),
implying that nicotinic ACh receptor desensitization is most likely responsible for the irregular ACh-mediated input during locomotion. Fig. 4: Spinal locomotor V2a-INs contribute cholinergic inputs to NSPCs. a Large and dorsally located spinal cord V2a-INs ( Chx10:GFP + , green) are cholinergic (ChAT + , magenta). Arrowheads indicate double-labeled neurons ( Chx10:GFP + ChAT + ). b A sample stack from the central canal area showing the presence of V2a-IN ( Chx10:GFP + ) axonal collaterals (green) close to CB + NSPCs (magenta), with analysis of the proportion of the CB + NSPCs that are in close proximity to the V2a-IN ( GFP + ) processes. c Quantification of the probability of V2a-IN axonal collaterals in the central canal region ( n = 15 zebrafish). d Representative whole-mount confocal image showing that all (25 out of 25; 100% from 8 zebrafish) long descending (dextran tracer, blue; >10 segments) cholinergic (ChAT + , red) neurons are V2a-INs (GFP + , green). Arrowheads indicate triple-labeled neurons. e Quantification and analysis of the number, size, and location of long descending cholinergic V2a-INs in the adult zebrafish spinal hemisegments. f Sample average (~25 sweeps) traces from dual electrophysiological recordings between a premotor V2a-IN and NSPCs, located in the same segment (intra-segmental, 1) or 5–6 segments rostrally (inter-segmental, 2). Cholinergic connections were observed in the inter-segmental pairs (22%, 11 out of 50 pairs) but not in the intra-segmental pairs (0%, 0 out of 15). g Postsynaptic responses in the recorded NSPCs generated by suprathreshold (black, with action potential) but not by subthreshold (gray, without an action potential) short-pulse depolarization of the V2a-IN. h The ex vivo setup of the brain-spinal cord preparation allows simultaneous recordings of an NSPC and ipsilateral descending V2a-INs during fictive locomotion. Sample trace of a connected pair showing that, while the V2a-IN discharges during fictive swimming, the NSPC receives occasional input. i Illustration of recordings acquired during electrical stimulations. Ten pulses (20 Hz) of a rostral spinal cord segment were applied to depolarize V2a-INs connected to NSPCs. Representative sample trace (in red) of superimposed sweeps (~20, in black) from non-responding and responding NSPCs. Bath application of the selective nicotinic antagonist MLA (10 μM) abolished the recorded currents, suggesting that they are cholinergic. Changes in the proportion of the recorded NSPCs that respond to electrical stimulation were observed after training ( n : number of recorded NSPCs). The average number of detected events per stimulation sweep from both sites was significantly higher in trained animals, suggesting adaptive changes in the innervation and the cholinergic release to the NSPCs ( P < 0.0001). The dashed gray line represents the baseline. CB, calbindin D-28K; CC, central canal; ChAT, choline acetyltransferase; EPSC, excitatory postsynaptic current; INs, interneurons; MLA, methyllycaconitine; NSPC, neural stem/progenitor cell. Data are presented as mean ± s.e.m., as violin plots, and as box plots showing the median with 25/75 percentile (box and line) and minimum–maximum (whiskers). **** P < 0.0001. For detailed statistics, see Supplementary Table 1 . Finally, we tested whether NSPCs adapt their response to signals from rostral segments (segments 9–11) during training (Fig. 4i ).
We noticed that training increased the proportion of recruited NSPCs that receive nicotinic receptor-mediated synaptic input from descending V2a-INs by 18% (Fig. 4i ). Moreover, we found that the number of detected EPSC events per stimulation sweep increased by ~150% (Fig. 4i ). These data suggest that physical activity increased the cholinergic release to NSPCs and expanded V2a-IN cholinergic synapses onto NSPCs. However, the extensive and complex branched morphology of these neurons precluded a strict quantification of this structural plasticity. Cholinergic and GABAergic receptors control the NSPCs’ proliferation in an opposing manner To test whether manipulation of nicotinic ACh and GABA A receptors impinges on NSPC proliferation, her4.1 + BrdU + cells lining the central canal were quantified after a single administration of nicotine, GABA, or gabazine followed by a pulse with BrdU for 1 h (Fig. 5a ). We observed a significant increase in the number of her4.1 + BrdU + cells after exposure to nicotine and gabazine (Fig. 5a ). In contrast, GABA A receptor activation by ambient GABA significantly reduced the number of her4.1 + BrdU + cells (Fig. 5a ). Fig. 5: Cholinergic and GABAergic receptors control the NSPCs’ proliferation in an opposing manner. a Microphotographs and analysis show that nicotine and gabazine increased the her4.1 + (green)/BrdU + (magenta) cells, whereas GABA reduced the number of her4.1 + /BrdU + cells in the examined spinal hemisegment ( P < 0.0001). b In vivo administration of ACh, muscarine, nicotine, and gabazine increased the number of BrdU + cells per hemisegment ( P < 0.0001). Administration of GABA reduced the number of BrdU + cells in the adult zebrafish spinal cord hemisegment ( P < 0.0001). c Co-administration of nicotine and gabazine generated the same number of BrdU + cells as the individual administration of nicotine or gabazine. Co-activation of the nicotinic ACh receptors and the GABA A receptors produced the same number of BrdU + cells as in control (saline; P < 0.0001). d Application of ACh induced currents of the same frequency and amplitude in NSPCs before and after training, suggesting no changes in the cholinergic receptors following training. e Bath application of GABA before and after training revealed a significant reduction of the tonic activation amplitude without affecting its duration in the NSPCs. ACh, acetylcholine; BrdU, 5-bromo-2ʹ-deoxyuridine; GABA, γ-aminobutyric acid; GFP, green fluorescent protein; her4.1, hairy-related 4, tandem duplicate 1. The dashed gray line represents the baseline. Data are presented as mean ± s.e.m., as violin plots, and as box plots showing the median with 25/75 percentile (box and line) and minimum–maximum (whiskers). * P < 0.05; ** P < 0.01; *** P < 0.001; **** P < 0.0001; ns, not significant. For detailed statistics, see Supplementary Table 1 . We next examined how cholinergic activation of NSPCs engenders proliferation in vivo by injecting adult zebrafish intraperitoneally with ACh (2 mM), muscarine (a selective agonist of muscarinic ACh receptors, 50 μM), or nicotine (a selective agonist of nicotinic ACh receptors, 300 μM) (Fig. 5b ). Activation of the ACh receptors significantly increased the number of BrdU + cells in the spinal cord (Fig. 5b ). Notably, nicotine administration resulted in a higher number of BrdU + cells than ACh, reflecting that extracellular acetylcholinesterases rapidly degrade ACh. In contrast, GABA (500 μM) caused a decrease in the number of BrdU + cells (Fig. 5b ).
5b ) caused a decrease in the number of BrdU + cells, whereas the number of BrdU + cells increased in animals treated with the GABA A receptor antagonist gabazine (100 μΜ; Fig. 5b ). To verify that neurotransmitters acted directly on spinal cord receptors and not through a systemic effect, we treated isolated intact cords with cholinergic and GABAergic agonists and antagonists ex vivo (Supplementary Fig. 8a ). In accordance with the in vivo studies, we observed a significant increase in proliferation upon selective activation of cholinergic receptors (Supplementary Fig. 8b ). In contrast, GABA-treated adult spinal cords showed a significant decrease in the detected BrdU + cells, and gabazine caused a significant increase in proliferation (Supplementary Fig. 8b ). We also treated the isolated intact spinal cords with nicotine, GABA, and gabazine in the presence of TTX to block action potential-dependent synaptic transmission (Supplementary Fig. 9a ). We observed that activation of the nicotinic receptors and blockage of the GABA A receptors (gabazine) enhanced the proliferation (BrdU + cells), while GABA reduced the number of newborn cells (Supplementary Fig. 9b ). Collectively, our data suggest that the observed changes in proliferation resulted from the direct modulation of neurotransmitter receptors on spinal cord NSPCs. Next, we determined how simultaneous manipulation of nicotinic-ACh and GABA A receptors influenced proliferation. We found that activation of the nicotinic-ACh receptors along with the blockage of the GABA A receptors by co-administration of nicotine and gabazine did not produce more BrdU + cells than each manipulation separately (Fig. 5c ). When we performed nicotine and GABA co-injections, we observed that the number of BrdU + cells was the same as in control (saline-injected) animals (Fig. 5c ). These results collectively show that direct activation of nicotinic receptors activates NSPCs and induces a proliferative program that is counteracted by GABA A receptor signaling. Adaptive regulation of NSPC GABA A receptors after training We determined whether physical training results in changes in NSPC receptors. We observed no changes in the frequency or amplitude of ACh-induced EPSCs in the NSPCs upon training (Fig. 5d ), suggesting that the number of cholinergic receptors remained unaltered. In contrast, the responses of NSPCs from trained animals to GABA (15 mM) had a significantly lower amplitude than those of control (untrained) animals (Fig. 5e ). However, their duration was unaffected (Fig. 5e ), implying a reduction in GABA A receptor abundance after prolonged physical activity. These findings collectively suggest that two distinct and complementary mechanisms regulate spinal cord proliferation after training: an increase in cholinergic neurotransmission (a synaptic/network mechanism) and a reduction in the number of GABA A receptors (a self-regulatory mechanism) in the NSPCs. Manipulation of neurotransmitter receptors on NSPCs promotes neuronal regeneration and restoration of motor functions Spinal cord regeneration involves an extensive proliferation of NSPCs and subsequent neurogenesis. Indeed, following transection of the spinal cord at segment 15, we observed a significant increase in the number of BrdU + cells (Fig. 6a ). We asked whether pharmacological manipulation of nicotinic-ACh and GABA A receptors could promote regeneration (Fig. 6 ).
Animals that received pharmacological treatment with either nicotine or gabazine produced a higher number of BrdU + cells than the saline-treated fish (Fig. 6b, d ). The increased proliferation correlated with increased neurogenesis, assayed by BrdU + /mef-2 + co-labeling (Fig. 6c, e ). These data suggest that pharmacological treatments can effectively bolster proliferation and neurogenesis in injured animals. We found that both nicotine- and gabazine-treated fish recovered locomotion performance faster than the control (saline-treated) animals after spinal cord transection, suggesting an accelerated regeneration process (Fig. 6f ). Thus, spinal cord regeneration could be promoted by either increasing the cholinergic signaling or blocking the GABAergic signaling to NSPCs. Fig. 6: Nicotine and gabazine promote neurogenesis and restoration of motor performance after spinal cord injury. a Pulse-chase experiment to assess proliferation, neurogenesis, and restoration of motor functions after pharmacological manipulation of the nicotinic-ACh and GABA A receptors in the adult zebrafish spinal cord. b Representative whole-mount confocal microphotographs showing BrdU + cells in the zebrafish spinal cord. c Representative whole-mount confocal images for immunodetection of BrdU + /mef-2 + cells. Arrowheads indicate double-labeled cells. d Quantification of BrdU incorporation after injury in control (saline) and pharmacologically treated animals (nicotine, gabazine). The dashed gray line represents the baseline (BrdU + cells in uninjured animals). e Quantification of the BrdU + cells expressing the neuronal marker mef-2. f Nicotine- and gabazine-treated animals swim faster than the control (saline) fish during the critical speed test. The dashed gray line represents the baseline (critical speed of the uninjured animals). Speed is normalized to body length (BL/s). BL, body length; BrdU, 5-bromo-2ʹ-deoxyuridine; mef-2, myocyte enhancer factor-2; SCI, spinal cord injury. Data are presented as box plots showing the median with 25/75 percentile (box and line) and minimum–maximum (whiskers). * P < 0.05; ** P < 0.01. For detailed statistics, see Supplementary Table 1 . Full size image Discussion Here we revealed an adaptive mechanism by which physical activity dynamically modulates adult neurogenesis through the neurotransmitters ACh and GABA. Specifically, the locomotor CPG V2a-INs link motor functions to neurogenesis by contributing to the regulation of NSPC proliferation and subsequent neurogenesis. While activation of NSPCs relies on an increased synaptic cholinergic input and is independent of the number of cholinergic receptors, insensitivity to non-synaptic GABAergic signaling 3 , 8 is achieved by reducing the abundance of GABA A receptors (Fig. 7 ). Therefore, exercise-dependent neurogenesis involves two distinct and mutually antagonistic processes. Extending these findings to a spinal cord injury model, we found that activation of the nicotinic-ACh receptors and inhibition of the GABA A receptors increased the number of newborn neurons and promoted motor function restoration. Fig. 7: Proposed model for exercise-controlled adult neurogenesis in the zebrafish spinal cord. The findings link the locomotor CPG network to adult neurogenesis. Spinal cholinergic interneurons, including the premotor V2a-IN population, increase their cholinergic release to NSPCs during training. ACh acts directly on the NSPCs via nicotinic and muscarinic cholinergic receptors to activate them. Activation of NSPCs leads to the downregulation of GABA A receptors.
Full size image Previous studies have shown that several neurotransmitters can directly or indirectly regulate the activity of the NSPCs and neurogenesis in the nervous system 3 , 8 , 9 , 10 , 11 , 13 , 14 , 16 , 51 , 52 , 53 . Our results highlight the neurotransmitter ACh’s pivotal role in mediating physical activity-induced proliferation and neurogenesis. The evolutionarily conserved role of ACh and GABA in regulating mammalian hippocampal neurogenesis 3 , 8 , 16 , 17 and spinal cord gliogenesis 13 , 15 further indicates that these transmitters act to control the generation of new cells without affecting their differentiation program. Indeed, it has been shown that after spinal cord injury, mammalian spinal cord stem/progenitor cells exhibit a robust but abortive proliferative response that fails to generate mature neurons 54 and instead contributes to glial scar formation 33 , 55 . Conversely, zebrafish can fully regenerate the spinal cord and recover motor and sensory functions by activating the neurogenic program of the NSPCs around the central canal 34 , 56 . The antagonistic effects of ACh and GABA on NSPCs probably operate through a dynamic interplay with the local molecular context, but the rules that govern this interaction remain unclear. We also investigated the role of other neurotransmitters in the direct control of NSPCs. While serotonin is known to regulate proliferation and neurogenesis in various contexts 10 , 41 , 52 , 57 , 58 , its precise mechanism of action has remained unexplored 59 , 60 . Our results indicate that serotonin, glutamate, and glycine 22 are not directly involved in the proliferation of spinal NSPCs. In support of this, our previous study showed that serotonergic inputs modulate NSPCs’ proliferation in the brain indirectly through other signaling molecules, such as brain-derived neurotrophic factor (BDNF) 41 , 61 . Similarly, glutamate has also been suggested to act on NSPCs indirectly, via modulation of neurotrophic factors such as BDNF, nerve growth factor (NGF), and fibroblast growth factor (FGF) 10 , 62 , 63 , 64 , 65 . It is conceivable, indeed probable, that adaptation occurs within the spinal cord after training 31 . The exact mechanisms of this adaptation are unclear, and the present data generate several testable hypotheses. For example, neurotransmitter switching is a recently discovered form of plasticity 66 , 67 whereby neurons change their transmitter phenotypes in response to a sustained stimulus such as exercise 31 . As such, it is an activity-dependent adaptive mechanism 31 , 68 that could explain changes in neurotransmitter availability and equilibrium that occur in the nervous system under both physiological and pathophysiological conditions. Further studies could examine the extent to which increased cholinergic neurotransmission to stem cells is mediated through cholinergic re-specification of the spinal interneurons. The findings reported here show that the activity of the locomotor networks induces NSPCs to proliferate. We found that cholinergic V2a-INs are among the spinal interneurons interacting with the NSPCs, presumably via direct (monosynaptic) and indirect (polysynaptic) connectivity. We observed reliable, fast, and time-locked cholinergic responses in the NSPCs triggered by V2a-IN spikes, a result that is commonly interpreted as reflecting direct monosynaptic input (Fig. 4f and Supplementary Fig. 6 ).
Yet, the observed changes in the shape of the EPSCs after applying mephenesin also suggested the presence of a polysynaptic component (Supplementary Fig. 6 ). Even if part of the observed action potential-mediated signal is transmitted to NSPCs through a downstream neuron, the rationale for triggering spikes in V2a-INs during paired recordings and evaluating the downstream effect on NSPCs is to establish effective communication between the two cell types, which our data clearly support. Future studies should address the relative physiological contributions of direct and indirect communication in modulating NSPC activity after training. V2a-INs are regarded as fundamental components of the locomotor CPG and are therefore essential for initiating and maintaining locomotor rhythm 19 , 20 , 21 , 43 , 46 , 47 and for providing the primary driving input to motoneurons during locomotion 43 . However, other local cholinergic interneurons, like the ones residing close to the central canal, exist in the spinal cord 45 , 69 , 70 , and we cannot rule out the possibility that their firing could also provide cholinergic input to NSPCs. Nevertheless, the data here link locomotor network activity 19 , 20 , 21 to spinal cord neurogenesis and demonstrate an essential non-motor/non-neuronal function for the CPG. Methods Experimental animals All animals were raised and kept in a core zebrafish facility at the Karolinska Institute following established practices. Adult zebrafish ( Danio rerio ; n = 343 animals; 8–10 weeks old; length: 15–20 mm; weight: 0.04–0.06 g) of the wild-type (AB/Tübingen), Tg( Chx10:GFP nns1 ), and Tg( her4.1:GFP ) lines were used. Zebrafish of both sexes were used in all experiments. No selection criteria or blinding procedures were used to allocate zebrafish to any experimental group. The local Animal Research Ethical Committee (at Karolinska Institutet), Stockholm (Ethical permit no. 9248-2017) approved all experimental protocols, which were implemented under the EU directive for the care and use of laboratory animals (2010/63/EU). All efforts were made to utilize only the minimum number of experimental animals necessary to obtain reliable scientific data. Training protocol All animals used in the swim training paradigm had similar sizes (body length, BL; body depth, BD) and weights. Some of the designated animals ( n = 8 zebrafish) were randomly selected and subjected to the critical speed ( U CRIT ) test, which measures the highest sustainable swimming speed a fish can reach, using a commercially available swim tunnel (5 L; Loligo systems, SW10050). After determining the critical speed, animals were selected for the exercise training protocol, in which exercised/trained zebrafish (~25) swam at 60% of U CRIT for 6 h per day, 5 days per week. To study the effect of training on animal growth, fish were trained for 6 consecutive weeks. At each time point (every 7 days), the fish were anesthetized in 0.03% tricaine methane sulfonate (MS-222, Sigma-Aldrich, E10521), and images of the body were obtained for size measurements. For all the other experiments, zebrafish were trained for 2 consecutive weeks. After the exercise period, fish were randomly assigned to a short-term experimental group (training) or a long-term (rest) group. Animals of the rest (recovery) group were kept under standard conditions for 2 weeks.
Afterward, all animals (training/rest) selected for anatomical investigations were anesthetized and processed for immunohistochemistry, as described in the “Immunohistochemistry” section. Trained animals used for electrophysiological recordings were processed within the first three days after the end of the training. BrdU treatment Animals were treated with 5-bromo-2ʹ-deoxyuridine (BrdU; Sigma-Aldrich, B5002) at a concentration of 0.7% in fish water for 2 h. BrdU is a nonradioactive analog of thymidine that is incorporated into proliferating cells’ DNA during the S phase of the cell cycle. Fish were then allowed to survive for another 22 h (short-term survival) or 2 weeks (long-term survival) before being processed for BrdU immunodetection. For the acute treatment, animals were treated with BrdU at a concentration of 0.7% in fish water for 1 h before analysis. In the acute experiments described in Fig. 5a , animals were injected intraperitoneally (volume: 2 μl) with either saline, nicotine (300 μΜ; Sigma-Aldrich, SML1236), GABA (500 μΜ; Sigma-Aldrich, A2129), or gabazine (100 μΜ; Sigma-Aldrich, SR95531). Immediately after injection, the animals were treated with BrdU for 1 h, as described above. Descending neuron labeling Zebrafish were anesthetized in 0.03% tricaine methane sulfonate (MS-222, Sigma-Aldrich, E10521). Retrograde labeling of descending spinal cord neurons located in spinal segments 1–3 was achieved through dye injections with biotinylated dextran (3000 MW; ThermoFisher, D7135) into segment 16 or 17. Animals were kept alive for at least 24 h after injection to allow retrograde transport of the tracer and then deeply anesthetized with 0.1% MS-222, and the spinal cords were dissected and fixed in 4% paraformaldehyde (PFA) and 5% saturated picric acid (Sigma-Aldrich, P6744) in phosphate-buffered saline (PBS; 0.01 M, pH = 7.4; Santa Cruz Biotechnology, Inc., CAS30525-89-4) at 4 °C for 4–10 h. The tissue was then washed extensively with PBS and incubated in streptavidin conjugated to Alexa Fluor 488 (dilution 1:500, ThermoFisher, S32354), Alexa Fluor 555 (1:500, ThermoFisher, S32355), or Alexa Fluor 647 (dilution 1:500, ThermoFisher, S32357) overnight at 4 °C. Primary and secondary antibodies were applied as described in the “Immunohistochemistry” section. After thorough buffer rinses, the tissue was mounted on gelatin-coated microscope slides and cover-slipped with an anti-fade fluorescent mounting medium (Vectashield Hard Set, VectorLabs; H-1400). Pharmacology For the experiments conducted to evaluate the impact of pharmacological agents on proliferation and neurogenesis in vivo, animals were anesthetized using 0.03% tricaine methane sulfonate (MS-222; Sigma-Aldrich, E10521) in fish water and injected intraperitoneally (volume: 2 μl) with saline, ACh (2 mM; Sigma-Aldrich, A6625), muscarine (50 μΜ; Sigma-Aldrich, M104), nicotine (300 μΜ; Sigma-Aldrich, SML1236), GABA (500 μΜ; Sigma-Aldrich, A2129), or gabazine (100 μΜ; Sigma-Aldrich, SR95531). Immediately after injection, the animals were treated with BrdU as described above (see “BrdU treatment” section). For ex vivo evaluation of the contribution of NSPC receptor activation to proliferation, animals were anesthetized and dissected as for the electrophysiological recordings. Isolated intact spinal cords were then transferred to a continuously aerated chamber containing the pharmacological agents and BrdU diluted in the extracellular solution used for electrophysiological recordings.
In some experiments the extracellular solution contained TTX (1 μM) to abolish synaptic transmission in the spinal cord networks. After the pharmacological treatments, the animals and tissues were processed for immunodetection of the incorporated BrdU. Immunohistochemistry All animals were deeply anesthetized with tricaine methane sulfonate (MS-222, Sigma-Aldrich, E10521). The spinal cords were then extracted and fixed in 4% paraformaldehyde (PFA) and 5% saturated picric acid (Sigma-Aldrich, P6744) in phosphate-buffered saline (PBS) (0.01 M; pH = 7.4, Santa Cruz Biotechnology, Inc., CAS30525-89-4) at 4 °C for 2–14 h. We performed immunolabeling in both whole-mount spinal cords and cryosections. For sections, the tissue was removed carefully and cryoprotected overnight in 30% (w/v) sucrose in PBS at 4 °C, embedded in Cryomount (Histolab, 45830) sectioning medium, rapidly frozen in dry-ice-cooled isopentane (2-methylbutane; Sigma-Aldrich, 277258) at approximately –35 °C, and stored at −80 °C until use. Transverse (coronal plane) cryosections (thickness: 20–25 μm) of the tissue were collected and processed for immunohistochemistry. For all sample types (whole-mount and cryosections), the tissue was washed 3 times for 5 min each in PBS. Nonspecific protein binding sites were blocked with 4% normal donkey serum (NDS; Sigma-Aldrich, D9663) containing 1% bovine serum albumin (BSA; Sigma-Aldrich, A2153) and 1% Triton X-100 (Sigma-Aldrich, T8787) in PBS for 1 h at room temperature (RT). Primary antibodies (Supplementary Table 2 ) were diluted in 1% of the blocking solution and applied for 1–3 days at 4 °C. After thorough buffer rinses, the tissues were then incubated with the appropriate secondary antibodies (Supplementary Table 2 ) diluted 1:500 or with streptavidin conjugated to Alexa Fluor 488 (1:500, ThermoFisher, S32354), Alexa Fluor 555 (1:500, ThermoFisher, S32355), or Alexa Fluor 647 (1:500, ThermoFisher, S32357) in 1% Triton X-100 (Sigma-Aldrich, T8787) in PBS overnight at 4 °C. Finally, the tissue was thoroughly rinsed in PBS and cover-slipped with a hard fluorescent medium (VectorLabs; H-1400). To visualize the incorporated BrdU, DNA denaturation was performed by incubating the tissue in 2 N HCl for 30 min (sections) or 75 min (whole mounts) at 37 °C, followed by thorough washing in PBS. The standard immunodetection procedure described above was then applied. Electrophysiology Adult zebrafish were cold-anesthetized in a slush of frozen extracellular solution containing MS-222. The skin and muscles were removed to allow access to the spinal cord. The spinal cord was dissected out carefully and transferred to a recording chamber that was continuously perfused with an extracellular solution containing 135.2 mM NaCl, 2.9 mM KCl, 2.1 mM CaCl2, 10 mM HEPES, and 10 mM glucose at pH 7.8 (adjusted with NaOH) and an osmolarity of 290 mOsm. For whole-cell intracellular recordings of NSPCs in voltage-clamp mode, electrodes (resistance, 3–5 MΩ) were pulled from borosilicate glass (outer diameter, 1.5 mm; inner diameter, 0.87 mm; Hilgenberg) on a micropipette puller (model P-97, Sutter Instruments) and filled with an intracellular solution containing 120 mM K-gluconate, 5 mM KCl, 10 mM HEPES, 4 mM Mg2ATP, 0.3 mM Na4GTP, and 10 mM Na-phosphocreatine at pH 7.4 (adjusted with KOH) and an osmolarity of 275 mOsm. Cells were visualized using a microscope (LNscope; Luigs & Neumann) equipped with a CCD camera (Lumenera) and specifically targeted.
Intracellular patch-clamp electrodes were advanced to the stem/progenitor cells using a motorized micromanipulator (Luigs & Neumann) while applying constant positive pressure. Intracellular signals were amplified with a MultiClamp 700B intracellular amplifier (Molecular Devices). All cells were clamped at –70 mV throughout all voltage-clamp recordings. All experiments were performed at RT (23 °C). The following drugs (prepared by diluting stock solutions in distilled water) were added (singly or in the combinations mentioned in the text) to the physiological solution: acetylcholine (ACh, 100 μM or 5 mM; Sigma-Aldrich, A6625), GABA (γ-aminobutyric acid, 1 or 15 mM; Sigma-Aldrich, A2129), gabazine (10 μM; Sigma-Aldrich, SR95531), glutamate (5 mM; Sigma-Aldrich), glycine (1 mM; Sigma-Aldrich, G2879), methyllycaconitine (MLA, 10 μΜ; Sigma-Aldrich, M168), muscarine (500 μΜ; Sigma-Aldrich, M104), nicotine (100 μM; Sigma-Aldrich, N3876 and SML1236), N -methyl- D -aspartate (NMDA, 100 μM; Sigma-Aldrich, M3262), serotonin (1 mM; Sigma-Aldrich, H9523), and tetrodotoxin (TTX, 1 μΜ; Sigma-Aldrich, T8024). For the evaluation of the activity of the NSPCs during fictive locomotion (Fig. 2a ), we used the adult zebrafish ex vivo preparation 42 , 71 . Extracellular recordings were performed from the motor nerves. Locomotion was activated by extracellular stimulation (a train of 10 pulses at 1 Hz) applied via a glass pipette placed at the junction between the brain and the spinal cord. Recordings were made from both ipsilaterally and contralaterally located NSPCs and motor nerves (Fig. 2b ). Local spinal neuron activation was triggered by extracellular stimulation (a train of 10 pulses at 20 Hz) applied via a glass pipette (Fig. 3c ). Activation of the long descending spinal neurons was induced by extracellular stimulation (a train of 10 pulses at 20 Hz) applied via a glass pipette placed 5–6 segments rostral to the whole-cell NSPC recording site in the adult zebrafish spinal cord (Fig. 4i ). To attenuate and potentially block polysynaptic transmission, we used a 2.5x HiDi solution (a high concentration of divalent cations) or mephenesin (1 mM; Sigma-Aldrich, 286567), a putative polysynaptic blocker, applied for at least 20 min before all the recordings. For dual whole-cell recordings of V2a-INs and progenitors/stem cells, two patch-clamp electrodes were advanced from opposite directions into the spinal cord to record cells separated by at least five segments (inter-segmental recordings) or from the same spinal segment (intra-segmental recordings). Single and multiple short-duration (0.5 ms) suprathreshold and subthreshold current pulses were used to stimulate presynaptic V2a interneurons and record responses in stem/progenitor cells. All dual whole-cell recordings are presented as averages of 20–35 sweeps. Only NSPCs (GFP + ) that had stable resting membrane potentials at or below −60 mV, did not fire action potentials upon strong depolarizations (>0 mV), and showed minimal changes in resistance (<5%) were included in this study. In all recordings, the EPSC events were detected and analyzed in a semi-automatic (supervised) fashion after baseline subtraction using AxoGraph (version X 1.5.4; AxoGraph Scientific, Sydney, Australia; RRID: SCR_014284) or Clampfit (version 10.6; Molecular Devices). The EPSC amplitude was calculated as the difference between the baseline and the peak of the event.
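To make the event-detection step concrete, the Python sketch below mirrors the pipeline described above: Gaussian low-pass filtering (as applied to the presented traces), baseline subtraction, threshold-based detection of inward currents, and amplitude measured from baseline to peak. The paper performed this step in AxoGraph/Clampfit, so the sampling rate, filter width, detection threshold, and refractory window used here are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def detect_epscs(trace_pa, fs_hz=10_000.0, threshold_pa=-10.0, min_isi_ms=5.0):
    """Detect inward EPSC events in a voltage-clamp trace (pA).

    Illustrative re-implementation of the described pipeline; the
    threshold, sampling rate, and refractory window are assumptions.
    """
    # Low-pass filter (the paper reports Gaussian filtering of traces).
    smoothed = gaussian_filter1d(trace_pa, sigma=3)

    # Baseline subtraction: the median is a robust baseline estimate.
    baseline = np.median(smoothed)
    centered = smoothed - baseline

    # Threshold crossings; inward currents are negative at -70 mV.
    below = centered < threshold_pa
    onsets = np.flatnonzero(below & ~np.roll(below, 1))

    # Enforce a refractory period so one event is not counted twice.
    min_gap = int(min_isi_ms * fs_hz / 1000.0)
    events, last = [], -min_gap
    for onset in onsets:
        if onset - last >= min_gap:
            peak = centered[onset:onset + min_gap].min()  # most negative point
            events.append({"onset_s": onset / fs_hz,
                           "amplitude_pa": abs(peak)})    # baseline-to-peak
            last = onset
    return events
```

Counting the returned events per sweep and averaging across ~20 sweeps would yield the per-sweep event metric of the kind reported in Fig. 4i.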
Spinal cord injury Adult zebrafish were anesthetized in 0.03% tricaine methane sulfonate (MS-222; Sigma-Aldrich) before the spinal cord injury, which involved complete transection of the spinal cord at segment 15 with a micro knife (10318-14; Fine Science Tools) under constant visual control. Once the lesion was completed, the spinally transected animals were injected intraperitoneally (volume: 2 μl) with BrdU solution (Sigma; 0.2 mg/g body weight) containing either saline (control), nicotine (300 μΜ; Sigma-Aldrich, SML1236), or gabazine (100 μΜ; Sigma-Aldrich, SR95531). All animals were kept in freshwater under standard conditions and received two additional intraperitoneal injections of the drugs every 4 days, as described in Fig. 6a . Critical speed test All injured animals (12 days after injury) were subjected to the critical speed ( U CRIT ) test using a commercially available swim tunnel (5 L; Loligo systems, SW10050). Critical speed ( U CRIT ) is a measure of the highest sustainable swimming speed that a fish can reach. The zebrafish were subjected to time intervals (2 min) of increasing water flow velocity (increments of 4.5 cm/s) until the fish could not swim against the water current 31 . The critical speed was normalized to the experimental animals’ body length (BL) and is given as BL/s. Analysis Morphometric analysis of the adult zebrafish was performed on images acquired with an HD camera (MC120, Leica) attached to a stereomicroscope (M60, Leica). The body size (total length, TL) of each animal was quantified using ImageJ. All immunodetections of whole-mount images of the adult zebrafish spinal cord preparations were acquired using an LSM 800 laser scanning confocal microscope (Zeiss) with a 40x objective (oil immersion). Each examined whole-mount spinal cord hemisegment was scanned from the ipsilateral side to the contralateral side, to the level of the contralateral primary motoneurons, to ensure acquisition of the central canal region, generating a z-stack (z-step size = 0.3–0.5 μm). All cells, including her4 :GFP + stem/progenitor cells, newborn cells, and neurons, were counted in spinal hemisegment 15 or 16. The relative positions ( XYZ coordinates) of neuronal somata and cells within the spinal cord were determined (using the lateral, dorsal, and ventral edges of the cord as landmarks) using ImageJ (cell counter plugin). Soma sizes and numbers were also measured using ImageJ. All whole-mount data are presented as the number of cells or neurons in each analyzed hemisegment. The central canal region was defined as the area in the midline between the contralateral primary motoneurons. Analysis and quantifications of the BrdU + cell differentiation profiles were performed using 6 coronal sections (20 μm thick, 20 μm intervals) of the spinal cord segments 14–16. All quantifications in the injured animals were performed in an area of 150 μm length, located 50–60 μm rostral to the injury site. The probability matrix of the V2a-IN processes in the central canal area was generated from multiple datasets (~7 sections/animal; n = 15 zebrafish) using Origin 8 (OriginLab, Northampton, MA, USA). To enhance visualization of our data, most of the whole-mount images presented here were prepared by merging subsets of the original z-stacks. Most single-channel images showing BrdU + cells were inverted to allow better visualization. Colocalizations were detected by visual identification of structures whose color reflects the combined contribution of two or more antibodies in the merged image.
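The U CRIT computation itself can be made explicit. The text cites the protocol from ref. 31 without spelling out the equation, so the use of Brett's standard formula below is an assumption; the 2-min interval, 4.5 cm/s increment, and BL normalization are taken from the description above, and the example body length is illustrative only.

```python
def critical_speed(last_completed_u_cms, time_at_fatigue_s,
                   interval_s=120.0, increment_cms=4.5,
                   body_length_cm=1.8):
    """Estimate U_CRIT with Brett's equation (an assumption here; the
    paper cites its protocol from ref. 31 without giving the formula):

        U_CRIT = U_f + (t_f / t_i) * dU

    U_f: highest velocity maintained for a full interval (cm/s)
    t_f: time swum at the final, fatigue-inducing velocity (s)
    t_i: interval duration (120 s, i.e. 2 min, per the protocol)
    dU : velocity increment (4.5 cm/s, per the protocol)
    The result is normalized to body lengths per second (BL/s).
    """
    u_crit_cms = last_completed_u_cms + (time_at_fatigue_s / interval_s) * increment_cms
    return u_crit_cms / body_length_cm

# Example: a fish completed 27 cm/s and fatigued 80 s into the next step.
print(round(critical_speed(27.0, 80.0), 2), "BL/s")  # -> 16.67 BL/s
```

Training at 60% of U CRIT, as in the protocol above, would then simply scale this value before converting back to a tunnel flow speed.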
Most of the presented traces were low-pass filtered (Gaussian, 11–21 coefficients) using Clampfit (version 11.0; Molecular Devices). All figures and graphs were prepared with Adobe Photoshop and Adobe Illustrator (Adobe Systems Inc., San Jose, CA, USA). Digital modifications of the images (brightness and contrast) were minimal to diminish the potential distortion of biological information. All double-labeled immunofluorescence images were converted to magenta-green to improve visualization of the results for color-blind readers. Statistics and reproducibility The significance of differences between the means of experimental groups and conditions was analyzed using parametric tests such as the two-tailed unpaired or paired Student’s t -test and one-way ANOVA (ordinary) followed by post hoc Tukey’s test or Dunnett’s multiple comparison test, using Prism (GraphPad Software Inc.). Significance levels indicated in all figures are as follows: * P < 0.05, ** P < 0.01, *** P < 0.001, **** P < 0.0001. All data are presented as mean ± s.e.m. (standard error of the mean) or as box plots showing the median, 25th, and 75th percentile (box and line) and minimal and maximal values (whiskers). Finally, the n values indicate the final number of validated animals per group, cells, or events that were evaluated; these are presented in detail in Supplementary Table 1 . All experiments were carried out independently 2–5 times by different investigators. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability All data used for the analyses presented in this study are included in Supplementary Table 1 . Source data are provided with this paper.
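The statistical workflow just described (ordinary one-way ANOVA followed by Tukey's post hoc test) was run in Prism; an equivalent open-source sketch in Python is shown below. The cell counts are invented purely for illustration and are not the paper's data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical BrdU+ cell counts per hemisegment (not the paper's data).
saline   = np.array([10, 12, 11, 9, 13])
nicotine = np.array([18, 21, 19, 22, 20])
gabazine = np.array([17, 19, 20, 18, 21])

# One-way (ordinary) ANOVA across the three treatment groups.
f_stat, p_value = f_oneway(saline, nicotine, gabazine)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")

# Post hoc Tukey HSD for all pairwise group comparisons.
counts = np.concatenate([saline, nicotine, gabazine])
groups = ["saline"] * 5 + ["nicotine"] * 5 + ["gabazine"] * 5
print(pairwise_tukeyhsd(counts, groups, alpha=0.05))
```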
Researchers at Karolinska Institutet, the German Center for Neurodegenerative Diseases (DZNE) and Columbia University Irving Medical Center have found an unexpected link between spinal locomotor network activity and adult neurogenesis in the adult zebrafish spinal cord. The study has recently been published in Nature Communications. Since the first demonstration of spinal central pattern generators (CPGs) in the early '70s, the activity of neurons involved in the central pattern generator networks has been considered only in terms of their contribution to locomotion. "We can now reveal an unforeseen yet central non-motor function of spinal locomotor neurons and demonstrate how they dynamically regulate neurogenesis and regeneration following spinal cord injury," says Konstantinos Ampatzis, researcher at the Department of Neuroscience and corresponding author. What does your study show? "In this study, we identify the direct contribution of the spinal locomotor neurons in activating the spinal cord stem cell population, glial cells that can generate new neurons in the adult zebrafish. Therefore, during prolonged locomotion, as we see after training, the stem cells receive excessive synaptic input that allows them to exit their quiescent state and proliferate." The researchers revealed that acetylcholine and GABA are the two neurotransmitters that can directly affect the stem cells in the adult zebrafish spinal cord; however, they act antagonistically to each other. "To identify the neurons that provide the cholinergic input to activate the stem cells was among the most unexpected findings. We found that a particular type of spinal locomotor interneurons, named V2a's, is among the neurons that link locomotion and stem cell activation," Konstantinos Ampatzis continues. How might your findings be put to use? "The overall outcome is a comprehensive understanding of the plasticity and adaptations (mechanisms, structural changes) that develop in response to physical activity and how these adaptive phenomena underlie pathogenicity after injury and/or regeneration of spinal networks. The results are expected to have a substantial impact because they lay the groundwork for developing new, more effective targeted treatments for restoration of the spinal cord after injury," Konstantinos Ampatzis explains. The study involved a set of different methodologies in neuroscience, such as anatomy, electrophysiology, pharmacology, and behavior in the adult zebrafish. In their experiments, the researchers took advantage of the experimental amenability of the adult zebrafish. "This model animal is ideal for these studies. It has an anatomically simple nervous system yet possesses all vertebrate features. It offers unprecedented access to neuronal circuits in behaving animals, and it has a rare ability to regenerate after injury." What is your next step? "Our next step is to identify the type of neurons that are born under homeostasis, training and spinal cord injury. We need to identify if the new neurons replace the existing ones or if they act as add-ons on the spinal cord networks," says Konstantinos Ampatzis.
10.1038/s41467-021-25052-1
Medicine
How sleepless nights compromise the health of your gut
Light-entrained and brain-tuned circadian circuits regulate ILC3 and gut homeostasis, Nature (2019). DOI: 10.1038/s41586-019-1579-3 , nature.com/articles/s41586-019-1579-3 Journal information: Nature
http://dx.doi.org/10.1038/s41586-019-1579-3
https://medicalxpress.com/news/2019-09-sleepless-nights-compromise-health-gut.html
Abstract Group 3 innate lymphoid cells (ILC3s) are major regulators of inflammation, infection, microbiota composition and metabolism 1 . ILC3s and neuronal cells have been shown to interact at discrete mucosal locations to steer mucosal defence 2 , 3 . Nevertheless, it is unclear whether neuroimmune circuits operate at an organismal level, integrating extrinsic environmental signals to orchestrate ILC3 responses. Here we show that light-entrained and brain-tuned circadian circuits regulate enteric ILC3s, intestinal homeostasis, gut defence and host lipid metabolism in mice. We found that enteric ILC3s display circadian expression of clock genes and ILC3-related transcription factors. ILC3-autonomous ablation of the circadian regulator Arntl led to disrupted gut ILC3 homeostasis, impaired epithelial reactivity, a deregulated microbiome, increased susceptibility to bowel infection and disrupted lipid metabolism. Loss of ILC3-intrinsic Arntl shaped the gut ‘postcode receptors’ of ILC3s. Strikingly, light–dark cycles, feeding rhythms and microbial cues differentially regulated ILC3 clocks, with light signals being the major entraining cues of ILC3s. Accordingly, surgically or genetically induced deregulation of brain rhythmicity led to disrupted circadian ILC3 oscillations, a deregulated microbiome and altered lipid metabolism. Our work reveals a circadian circuitry that translates environmental light cues into enteric ILC3s, shaping intestinal health, metabolism and organismal homeostasis. Main ILC3s have been shown to be part of discrete mucosal neuroimmune cell units 2 , 3 , 4 , 5 , raising the hypothesis that ILC3s may also integrate systemic neuroimmune circuits to regulate tissue integrity and organismic homeostasis. Circadian rhythms rely on local and systemic cues to coordinate mammalian physiology and are genetically encoded by molecular clocks that allow organisms to anticipate and adapt to extrinsic environmental changes 6 , 7 . The circadian clock machinery consists of an autoregulatory network of feedback loops primarily driven by the activators ARNTL and CLOCK and the repressors PER1–PER3, CRY1 and CRY2, amongst others 6 , 7 . Analysis of subsets of intestinal ILCs and their bone marrow progenitors revealed that mature ILC3s express high levels of circadian clock genes (Fig. 1a–c , Extended Data Fig. 1a–d ). Notably, ILC3s displayed a circadian pattern of Per1 Venus expression (Fig. 1b ) and transcriptional analysis of ILC3 revealed circadian expression of master clock regulators and ILC3-related transcription factors (Fig. 1c ). To test whether ILC3s are regulated in a circadian manner, we investigated whether intestinal ILC3s require intrinsic clock signals. Thus, we interfered with the expression of the master circadian activator Arntl . Arntl fl mice were bred to Vav1 Cre mice, allowing conditional deletion of Arntl in all haematopoietic cells ( Arntl ΔVav1 mice). Although Arntl ΔVav1 mice displayed normal numbers of intestinal natural killer (NK) cells and enteric group 1 and 2 ILCs, gut ILC3s were severely and selectively reduced in these mice when compared to their wild-type littermate controls (Fig. 1d, e , Extended Data Fig. 2a, b ). To more precisely define ILC3-intrinsic effects, we generated mixed bone marrow chimaeras by transferring Arntl -competent ( Arntl fl ) or Arntl -deficient ( Arntl ΔVav1 ) bone marrow against a third-party wild-type competitor into alymphoid hosts (Fig. 1f ). 
Analysis of such chimaeras confirmed cell-autonomous circadian regulation of ILC3s, while their innate and adaptive counterparts were unperturbed (Fig. 1g , Extended Data Fig. 2c ). Fig. 1: Intestinal ILC3s are controlled in a circadian manner. a , Gene expression in CLPs, ILCPs and intestinal ILC3s. CLP and ILCP n = 4; ILC3 n = 6. b , PER1–VENUS mean fluorescence intensity (MFI). CLP and ILCP n = 6; ILC3 n = 4. c , Circadian gene expression in enteric ILC3s; n = 5. d , Intestinal ILC subsets in Arntl fl and Arntl ΔVav1 mice; n = 4. e , Cell numbers of intestinal ILC3s and IL-17- and IL-22-producing ILC3 subsets in Arntl fl and Arntl ΔVav1 mice; n = 4. f , Generation of mixed bone marrow chimaeras. g , Percentage of donor cells and cell numbers of ILC3s, IL-17 and IL-22-producing ILC3 subsets in the gut from mixed bone marrow chimaeras. Arntl fl n = 5, Arntl ΔVav1 n = 7. b , c , White and grey represent light and dark periods, respectively. Data are representative of three independent experiments. n represents biologically independent samples ( a , c ) or animals ( b , d – g ). Data shown as mean ± s.e.m. a , Two-way ANOVA and Tukey’s test; b , c , cosinor analysis; d , e , g , Two-tailed Mann–Whitney U test. * P < 0.05; ** P < 0.01; *** P < 0.001; NS, not significant. Source Data . Full size image To investigate the functional effect of ILC3-intrinsic circadian signals, we deleted Arntl in RORγt-expressing cells by breeding Rorgt Cre mice (also known as Rorc Cre ) to Arntl fl mice ( Arntl ΔRorgt mice). When compared to their wild-type littermate controls, Arntl ΔRorgt mice showed a selective reduction of ILC3 subsets and IL-17- and IL-22-producing ILC3s (Fig. 2a, b , Extended Data Fig. 3a–j ). Notably, independent deletion of Nr1d1 also perturbed subsets of enteric ILC3s, further supporting a role of the clock machinery in ILC3s (Extended Data Fig. 4a–e ). ILC3s have been shown to regulate the expression of genes related to epithelial reactivity and microbial composition 1 . Analysis of Arntl fl and Arntl ΔRorgt mice revealed a profound reduction in the expression of reactivity genes in the Arntl ΔRorgt intestinal epithelium; notably, Reg3b , Reg3g , Muc3 and Muc13 were consistently reduced in Arntl -deficient mice (Fig. 2c ). Furthermore, Arntl ΔRorgt mice displayed altered diurnal patterns of Proteobacteria and Bacteroidetes (Fig. 2d , Extended Data Fig. 3j ). To investigate whether disruption of ILC3-intrinsic ARNTL affected enteric defence, we tested how Arntl ΔRorgt mice responded to intestinal infection. To this end, we bred Arntl ΔRorgt mice to Rag1 −/− mice to exclude putative T cell effects (Extended Data Fig. 3g–i ). Rag1 −/− Arntl ΔRorgt mice were infected with the attaching and effacing bacteria Citrobacter rodentium 2 . When compared to their wild-type littermate controls, Rag1 −/− Arntl ΔRorgt mice had marked gut inflammation, fewer IL-22-producing ILC3s, increased C. rodentium infection and bacterial translocation, reduced expression of epithelial reactivity genes, increased weight loss and reduced survival (Fig. 2e–j , Extended Data Fig. 5a–j ). These results indicate that cell-intrinsic circadian signals selectively control intestinal ILC3s and shape gut epithelial reactivity, microbial communities and enteric defence. Previous studies indicated that ILC3s regulate host lipid metabolism 8 . 
When compared to their wild-type littermate controls, the epithelium of Arntl ΔRorgt mice revealed a marked increase in mRNA that codes for key lipid epithelial transporters, including Fabp1 , Fabp2 , Scd1, Cd36 and Apoe (Fig. 2k ). Accordingly, these changes were associated with increased gonadal and subcutaneous accumulation of fat in Arntl ΔRorgt mice when compared to their wild-type littermate controls (Fig. 2l , Extended Data Fig. 5k–n ). Thus, ILC3-intrinsic circadian signals shape epithelial lipid transport and body fat composition. Fig. 2: ILC3-intrinsic Arntl regulates gut homeostasis and defence. a , Enteric ILC3s and subtypes in Arntl fl and Arntl ΔRorgt mice; n = 4. b , Gut T helper cells in Arntl fl and Arntl ΔRorgt mice; n = 5. c , Expression of epithelial reactivity genes in Arntl ΔRorgt mice compared with Arntl fl mice; n = 5. d , qPCR analysis of Proteobacteria in stools from Arntl fl and Arntl ΔRorgt mice (see Methods ). Arntl fl n = 5; Arntl ΔRorgt n = 6. e–j , Data from C. rodentium -infected Rag1 −/− Arntl fl and Rag1 −/− Arntl ΔRorgt mice. e , Histopathology of colon sections; n = 5. f , Colitis score; n = 5. g , Colon length; n = 5. h , Infection burden; Rag1 −/− Arntl fl n = 6, Rag1 −/− Arntl ΔRorgt n = 7. i , Bacterial translocation to the spleen; Rag1 −/− Arntl fl n = 6, Rag1 −/− Arntl ΔRorgt n = 7. j , Survival; n = 5. k , Expression of epithelial lipid transporter genes in Arntl fl ( n = 4) and Arntl ΔRorgt ( n = 5) mice. l , Gonadal and subcutaneous adipose tissue in Arntl fl and Arntl ΔRorgt mice; n = 5. d , White and grey represent light and dark periods, respectively. Scale bars, 250 μm. Data are representative of at least three independent experiments; n represents biologically independent animals. Data shown as mean ± s.e.m. a , b , f , g , i , k , Two-tailed Mann–Whitney U test; d , cosinor analysis; h , two-way ANOVA and Sidak’s test; j , log-rank test; l , two-tailed unpaired Student’s t -test. * P < 0.05; ** P < 0.01; *** P < 0.001; NS, not significant. Source Data . Full size image To further investigate how cell-intrinsic Arntl controls intestinal ILC3 homeostasis, we initially studied the diurnal oscillations of the ILC3 clock machinery. When compared to their wild-type littermate controls, Arntl ΔRorgt ILC3s displayed a disrupted diurnal pattern of activator and repressor circadian genes (Fig. 3a ). Subsequently, we used genome-wide transcriptional profiling of Arntl- sufficient and -deficient ILC3s to interrogate the effect of a deregulated circadian machinery. Diurnal analysis of the genetic signature associated with ILC3 identity 1 demonstrated that the vast majority of those genes were unperturbed in Arntl -deficient ILC3s, suggesting that ARNTL is dispensable to ILC3 lineage commitment (Fig. 3b , Extended Data Fig. 6a–c ). To test this hypothesis, we first studied the effect of ablation of Arntl in ILC3 progenitors. Arntl ΔVav1 mice had unperturbed numbers of common lymphoid progenitors (CLPs) and innate lymphoid cell progenitors (ILCPs; Fig. 3c , Extended Data Fig. 6d ). Subsequently, we analysed the effects of Arntl ablation in ILC3s in other organs. Compared to their littermate controls, Arntl ΔRorgt mice had normal numbers of ILC3s in the spleen, lungs and blood, in contrast to their pronounced reduction in the intestine (Figs. 2a , 3d, e , Extended Data Fig. 6e ). Notably, enteric Arntl ΔRorgt ILC3s showed unperturbed proliferation and apoptosis-related genetic signatures (Extended Data Fig.
6b, c ), suggesting that Arntl ΔRorgt ILC3s may show altered migration to the intestinal mucosa 9 . When compared to their wild-type littermate controls, ILC3s in Arntl ΔRorgt mice showed a marked reduction in gut postcode molecules—which are essential receptors for intestinal lamina propria homing—and accumulated in mesenteric lymph nodes 9 (Extended Data Fig. 6f ). Notably, the expression of the integrin and chemokine receptors CCR9, α4β7 and CXCR4 was selectively and hierarchically reduced in Arntl ΔRorgt ILC3s (Fig. 3f–h , Extended Data Fig. 6g–m ). To investigate whether ARNTL could directly regulate expression of Ccr9 , we performed chromatin immunoprecipitation (ChIP). Binding of ARNTL to the Ccr9 locus in ILC3s followed a diurnal pattern, with increased binding at Zeitgeber time (ZT) 5 (Fig. 3i ). Thus, ARNTL can contribute directly to the expression of Ccr9 in ILC3s, although additional factors may also regulate this gene. In conclusion, while a fully operational ILC3-intrinsic circadian machinery is not required for lineage commitment and development of ILC3s, cell-intrinsic clock signals are required for a functional ILC3 gut receptor postcode. Fig. 3: ILC3-intrinsic circadian signals regulate an enteric receptor postcode. a , Relative expression of circadian genes in enteric ILC3s from Arntl fl and Arntl ΔRorgt mice; n = 3. b , RNA sequencing (RNA-seq) analysis of gut ILC3s from Arntl fl and Arntl ΔRorgt mice at ZT5 and ZT23; n = 3. c , Numbers of CLPs and ILCPs in Arntl fl and Arntl ΔVav1 mice; n = 4. d , e , ILC3s in spleen ( d , n = 3) and lung ( e , n = 6) of Arntl fl and Arntl ΔRorgt mice. f , g , Expression of α4β7 ( f ) and CCR9 ( g ) by gut ILC3s in Arntl fl and Arntl ΔRorgt mice; n = 4. h , Circadian variation in expression of CCR9 by intestinal ILC3s in Arntl fl and Arntl ΔRorgt mice; n = 4. i , ChIP analysis of binding of ARNTL to the Ccr9 locus in enteric ILC3s; n = 3. A–J denote putative ARNTL DNA-binding sites. Data are representative of three independent experiments. n represents biologically independent animals ( a , c – h ) or samples ( b , i ). a , h , White and grey represent light and dark periods, respectively. Data shown as mean ± s.e.m. a , Two-way ANOVA; c – g , two-tailed Mann–Whitney U test; h , cosinor analysis; i , two-tailed unpaired Student’s t -test. * P < 0.05; ** P < 0.01; *** P < 0.001; NS, not significant. Source Data . Full size image Circadian rhythms allow organisms to adapt to extrinsic environmental changes. Microbial cues can alter the rhythms of intestinal cells 10 , 11 , and feeding regimens are major circadian entraining cues for peripheral organs, such as the liver 12 . In order to define the environmental cues that entrain circadian oscillations of ILC3, we initially investigated whether microbial signals affect the oscillations of ILC3s. Treatment of Per1 Venus reporter mice with antibiotics did not alter the amplitude of circadian oscillations, but did induce a minute shift in the acrophase (timing of the peak of the cycle; Fig. 4a ). We then tested whether feeding regimens, which are major entraining cues of oscillations in the liver, pancreas, kidney, and heart 12 , could alter ILC3 rhythms. To this end, we restricted food access to a 12-h interval and compared Per1 Venus oscillations to those observed in mice with inverted feeding regimens 12 . Inverted feeding had a small effect on the amplitude of ILC3 oscillations but did not invert the acrophase of ILC3s (Fig. 4b , Extended Data Fig. 
7a ), in contrast to the full inversion of the acrophase of hepatocytes 12 (Extended Data Fig. 7b ). As these local intestinal cues could not invert the acrophase of ILC3s, we hypothesized that light–dark cycles are major regulators of enteric ILC3 oscillations 6 . To test this hypothesis, we placed Per1 Venus mice in light-tight cabinets on two opposing 12-h light–dark cycles. Inversion of light–dark cycles had a profound effect on the circadian oscillations of ILC3s (Fig. 4c ). Notably, and in contrast to microbiota and feeding regimens, light cycles fully inverted the acrophase of Per1 Venus oscillations in ILC3s (Fig. 4c , Extended Data Fig. 7c ). Furthermore, light–dark cycles entrained ILC3 oscillations, as revealed by their maintenance upon removal of light (constant darkness; Fig. 4d , Extended Data Fig. 7d ), confirming that light is a major environmental entraining signal for ILC3-intrinsic oscillations. Together, these data indicate that ILC3s integrate systemic and local cues hierarchically; while microbiota and feeding regimens locally adjust the ILC3 clock, light–dark cycles are major entraining cues of ILC3s, fully setting and entraining their intrinsic oscillatory clock. Fig. 4: Light-entrained and brain-tuned cues shape intestinal ILC3s. a – d , PER1–VENUS MFI in gut ILC3s from mice treated with or without antibiotics (Ab) ( a ; n = 3); with restricted or inverted feeding ( b ; n = 3); with opposing light–dark cycles ( c ; n = 3); and with opposing light–dark cycles followed by constant darkness ( d ; n = 3). e , Magnetic resonance imaging of sham- and SCN-ablated (xSCN) mice; n = 11. White arrows indicate location of lesion. f , PER1–VENUS MFI in enteric ILC3s from sham- or SCN-ablated mice; n = 3. g , Expression of circadian genes in enteric ILC3s from Arntl fl and Arntl ΔCamk2a mice; n = 3. h , CCR9 expression in gut ILC3s from Arntl fl and Arntl ΔCamk2a mice; n = 3. i , Expression of epithelial reactivity genes in the small intestine from Arntl fl and Arntl ΔCamk2a mice; n = 3. j , qPCR analysis of Proteobacteria in stools from Arntl fl and Arntl ΔCamk2a mice; n = 4. k , Expression of lipid transporter genes in the epithelium of the small intestine in Arntl fl and Arntl ΔCamk2a mice; n = 3. l , Gonadal and subcutaneous adipose tissue in Arntl fl mice ( n = 5) and Arntl ΔCamk2a mice ( n = 4). a , b , White and grey represent light and dark periods, respectively. Data shown as mean ± s.e.m. n represents biologically independent animals. a – d , f – k , Cosinor analysis; f – k , cosine fitted curves; amplitude (Amp) and acrophase (Acro) were extracted from the cosinor model. l , Two-tailed unpaired Student’s t -test. * P < 0.05; ** P < 0.01; *** P < 0.001; NS, not significant. Source Data . Full size image The suprachiasmatic nuclei (SCN) in the hypothalamus are the main integrators of light signals 6 , suggesting that brain cues may regulate ILC3s. To assess the influence of the master circadian pacemaker on ILC3s, while excluding confounding light-induced, SCN-independent effects 13 , 14 , we performed SCN ablation by electrolytic lesion in Per1 Venus mice using stereotaxic brain surgery 15 . Strikingly, whereas sham-operated mice displayed circadian Per1 Venus oscillations in ILC3s, ILC3s in SCN-ablated mice lost the circadian rhythmicity of Per1 Venus and other circadian genes (Fig. 4e, f , Extended Data Fig. 8a–d ).
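As an aside on methodology: the cosinor analysis invoked throughout the figure legends, with amplitude and acrophase extracted from the fitted model, has a compact least-squares form. The single-component, 24-h cosinor below is the textbook formulation; the authors' actual software and settings are not stated in this excerpt, so this Python sketch, including the synthetic PER1–VENUS values, is an illustration under that assumption.

```python
import numpy as np

def cosinor_fit(zt_hours, values, period_h=24.0):
    """Single-component cosinor fit: y(t) = M + A*cos(2*pi*t/P - phi).

    Linearized as y = M + b*cos(w t) + c*sin(w t) and solved by ordinary
    least squares; amplitude A = sqrt(b^2 + c^2) and acrophase phi =
    atan2(c, b). Textbook form, not the authors' exact implementation.
    """
    t = np.asarray(zt_hours, dtype=float)
    y = np.asarray(values, dtype=float)
    w = 2.0 * np.pi / period_h
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    (mesor, b, c), *_ = np.linalg.lstsq(X, y, rcond=None)
    amplitude = np.hypot(b, c)
    acrophase_h = (np.arctan2(c, b) % (2.0 * np.pi)) / w  # peak time, ZT hours
    return mesor, amplitude, acrophase_h

# Synthetic PER1-VENUS MFI sampled every 4 h, peaking near ZT5.
zt = np.array([1, 5, 9, 13, 17, 21])
mfi = 100 + 20 * np.cos(2 * np.pi * (zt - 5) / 24)
print(cosinor_fit(zt, mfi))  # -> (~100.0, ~20.0, ~5.0)
```

An inverted light–dark cycle would appear in this framework as an ~12-h shift in the fitted acrophase, which is the readout behind the acrophase-inversion claims above.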
Because electrolytic lesions of the SCN may cause scission of afferent and efferent fibres in the SCN, we further confirmed that brain SCN-derived cues control ILC3s by genetic ablation of Arntl in the SCN 14 . Arntl fl mice were bred to Camk2a Cre mice to allow forebrain- and SCN-specific deletion of Arntl ( Arntl ΔCamk2a ) 14 . When compared to their control counterparts, ILC3s from Arntl ΔCamk2a mice showed severe arrhythmicity of circadian regulatory genes and of the enteric postcode molecule CCR9 (Fig. 4g, h , Extended Data Fig. 9a–f ). In addition, Arntl ΔCamk2a mice showed alterations in epithelial reactivity genes and microbial communities, particularly Proteobacteria and Bacteroidetes (Fig. 4i, j , Extended Data Fig. 9g–i ). Finally, the intestinal epithelium of Arntl ΔCamk2a mice showed disrupted circadian expression of lipid epithelial transporters, and these changes were associated with increased gonadal and subcutaneous fat accumulation (Fig. 4k, l ). Together, these data indicate that light-entrained and brain-tuned circuits regulate enteric ILC3s, controlling microbial communities, lipid metabolism and body composition. Deciphering the mechanisms by which neuroimmune circuits operate to integrate extrinsic and systemic signals is essential for understanding tissue and organ homeostasis. We found that light cues are major extrinsic entraining cues of ILC3 circadian rhythms, and surgically or genetically induced deregulation of brain rhythmicity resulted in altered ILC3 regulation. In turn, the ILC3-intrinsic circadian machinery controlled the gut receptor postcode of ILC3s, shaping enteric ILC3s and host homeostasis. Our data reveal that ILC3s display diurnal oscillations that are genetically encoded, cell-autonomous and entrained by light cues. While microbiota and feeding regimens could locally induce small adjustments to ILC3 oscillations, light–dark cycles were major entraining cues of the ILC3 circadian clock. Whether the effects of photonic signals on ILC3s are immediate or rely on other peripheral clocks remains to be elucidated 16 , 17 . Nevertheless, cell-intrinsic ablation of important endocrine and peripheral neural signals in ILC3s did not affect gut ILC3 numbers (Extended Data Fig. 10a-i ). Our work indicates that ILC3s integrate local and systemic entraining cues in a distinct hierarchical manner, establishing an organismal circuitry that is an essential link between the extrinsic environment, enteric ILC3s, gut defence, lipid metabolism and host homeostasis (Extended Data Fig. 10j ). Previous studies demonstrated that ILCs integrate tissue microenvironmental signals, including cytokines, micronutrients and neuroregulators 3 , 4 , 18 , 19 . Here we show that ILC3s have a cell-intrinsic circadian clock that integrates extrinsic light-entrained and brain-tuned signals. Coupling light cues to ILC3 circadian regulation may have ensured efficient and integrated multi-system anticipatory responses to environmental changes. Notably, the regulation of ILC3 activity by systemic circadian circuits may have evolved to maximize metabolic homeostasis, gut defence and efficient symbiosis with commensal organisms that have been evolutionary partners of mammals. Finally, our current data may also contribute to a better understanding of how circadian disruptions in humans are associated with metabolic diseases, bowel inflammatory conditions and cancer 20 . Methods Mice Nod scid gamma (NSG) mice were purchased from Jackson Laboratories. 
C57BL/6J Ly5.1 mice were purchased from Jackson Laboratories and bred with C57BL/6J mice to obtain C57BL/6 Ly5.1/Ly5.2 (CD45.1/CD45.2). Mouse lines used were: Rag1 −/− (ref. 21 ), Rag2 −/− Il2rg −/− (ref. 22 , 23 ), Vav1 Cre (ref. 24 ), Rorgt Cre (ref. 25 ), Camk2a Cre (ref. 26 ), Il7ra Cre (ref. 27 ), Per1 Venus (ref. 28 ), Ret GFP (ref. 29 ), Rosa26 RFP (ref. 30 ), Nr1d1 −/− (ref. 31 ), Arntl fl (ref. 32 ), Nr3c1 fl (ref. 33 ) and Adrb2 fl (ref. 34 ). All mouse lines were on a full C57BL/6J background. All lines were bred and maintained at Champalimaud Centre for the Unknown (CCU) animal facility under specific pathogen-free conditions. Male and female mice were used at 8–14 weeks old, unless stated otherwise. Sex- and age-matched mice were used for analysis of small intestine epithelium lipid transporters and quantification of white adipose tissue. Mice were maintained in 12-h light–dark cycles, with ad libitum access to food and water, if not specified otherwise. For light inversion experiments mice were housed in ventilated, light-tight cabinets on defined 12-h light–dark cycles (Ternox). Camk2a Cre Arntl fl ( Arntl ΔCamk2a ) mice and wild-type littermate controls were maintained in constant darkness as previously described 14 . Mice were systematically compared with co-housed littermate controls unless stated otherwise. Power analysis was performed to estimate the number of experimental mice required. All animal experiments were approved by national and local institutional review boards (IRBs), Direção Geral de Veterinária and CCU ethical committees. Randomization and blinding were not used unless stated otherwise. Cell isolation Isolation of small intestine and colonic lamina propria cells was as previously described 2 . In brief, intestines and colons were thoroughly rinsed with cold PBS1×, Peyer patches were removed from the small intestine, and intestines and colons were cut into 1-cm pieces and shaken for 30 min in PBS containing 2% FBS, 1% HEPES and 5 mM EDTA to remove intraepithelial and epithelial cells. Intestines and colons were then digested with collagenase D (0.5 mg/ml; Roche) and DNase I (20 U/ml; Roche) in complete RPMI for 30 min at 37 °C, under gentle agitation. Cells were passed through a 100-μm cell strainer and purified by centrifugation for 30 min at 2,400 rpm in a 40/80 Percoll (GE Healthcare) gradient. Lungs were finely minced and digested in complete RPMI supplemented with collagenase D (0.1 mg/ml; Roche) and DNase I (20 U/ml; Roche) for 1 h at 37 °C under gentle agitation. Cells were passed through a 100-μm cell strainer and purified by centrifugation for 30 min at 2,400 rpm in a 40/80 Percoll (GE Healthcare) gradient. Spleen and mesenteric lymph node cell suspensions were obtained using 70-μm strainers. Bone marrow cells were collected by either flushing or crushing bones and filtered using 70-μm strainers. Erythrocytes from small intestine, colon, lung, spleen and bone marrow preparations were lysed with RBC lysis buffer (eBioscience). Leukocytes from blood were isolated by treatment with Ficoll (GE Healthcare). Flow cytometry analysis and cell sorting For cytokine analysis ex vivo, cells were incubated with PMA (phorbol 12-myristate 13-acetate; 50 ng/ml) and ionomycin (500 ng/ml) (Sigma-Aldrich) in the presence of brefeldin A (eBioscience) for 4 h before intracellular staining. Intracellular staining for cytokines and transcription factors analysis was performed using IC fixation and Staining Buffer Set (eBioscience). 
Cell sorting was performed using a FACSAria Fusion (BD Biosciences). Sorted populations were >95% pure. Flow cytometry analysis was performed on an LSRFortessa X-20 (BD Biosciences). Data were analysed using FlowJo 8.8.7 software (Tree Star). Cell populations were gated on live cells, both for sorting and for flow cytometry analysis. Cell populations Cell populations were defined as: bone marrow (BM) common lymphoid progenitor (CLP): Lin − CD127 + Flt3 + Sca1 int c-Kit int; BM innate lymphoid cell progenitor (ILCP): Lin − CD127 + Flt3 − CD25 − c-Kit + α4β7 high; BM ILC2 progenitor (ILC2P): Lin − CD127 + Flt3 − Sca1 + CD25 +; small intestine (SI) NK: CD45 + Lin − NK1.1 + NKp46 + CD27 + CD49b + CD127 − EOMES + or CD45 + Lin − NK1.1 + NKp46 + CD27 + CD49b + CD127 −; small intestine ILC1: CD45 + Lin − NK1.1 + NKp46 + CD27 + CD49b − CD127 + Tbet + or CD45 + Lin − NK1.1 + NKp46 + CD27 + CD49b − CD127 +; small intestine ILC2: CD45 + Lin − Thy1.2 + KLRG1 + GATA3 + or CD45 + Lin − Thy1.2 + KLRG1 + Sca-1 + CD25 +; lamina propria, spleen, mesenteric lymph node and lung ILC3: CD45 + Lin − Thy1.2 high RORγt + or CD45 + Lin − Thy1.2 high KLRG1 −; ILC3-IL-17 +: CD45 + Lin − Thy1.2 high RORγt + IL-17 +; ILC3-IL-22 +: CD45 + Lin − Thy1.2 high RORγt + IL-22 +; for ILC3 subsets, additional markers were used: ILC3-NCR − CD4 −: NKp46 − CD4 −; ILC3-LTi CD4 +: NKp46 − CD4 +; ILC3-CCR6 − NCR −: CCR6 − NKp46 −; ILC3-LTi-like: CCR6 + NKp46 −; ILC3-NCR +: NKp46 +; SI Th17 cells: CD45 + Lin + Thy1.2 + CD4 + RORγt +; colon Tregs: CD45 + CD3 + Thy1.2 + CD4 + CD25 + FOXP3 +; colon Tregs RORγt +: CD45 + CD3 + Thy1.2 + CD4 + CD25 + FOXP3 + RORγt +. The lineage cocktail for BM, lung, small intestine lamina propria, spleen and mesenteric lymph nodes included CD3ɛ, CD8α, CD19, B220, CD11c, CD11b, Ter119, Gr1, TCRβ, TCRγδ and NK1.1. For NK and ILC1 staining in the small intestine, NK1.1 and CD11b were not added to the lineage cocktail. Antibody list Cell suspensions were stained with: anti-CD45 (30-F11); anti-CD45.1 (A20); anti-CD45.2 (104); anti-CD11c (N418); anti-CD11b (M1/70); anti-CD127 (IL7Rα; A7R34); anti-CD27 (LG.7F9); anti-CD8α (53-6.7); anti-CD19 (eBio1D3); anti-CXCR4 (L276F12); anti-NK1.1 (PK136); anti-CD3ɛ (eBio500A2); anti-TER119 (TER-119); anti-Gr1 (RB6-8C5); anti-CD4 (RM4-5); anti-CD25 (PC61); anti-CD117 (c-Kit; 2B8); anti-CD90.2 (Thy1.2; 53-2.1); anti-TCRβ (H57-597); anti-TCRγδ (GL3); anti-B220 (RA3-6B2); anti-KLRG1 (2F1/KLRG1); anti-Ly-6A/E (Sca1; D7); anti-CCR9 (CW-1.2); anti-IL-17 (TC11-18H10.1); anti-rat IgG1k isotype control (RTK2071); and streptavidin fluorochrome conjugates from Biolegend; anti-α4β7 (DATK32); anti-Flt3 (A2F10); anti-NKp46 (29A1.4); anti-CD49b (DX5); anti-Ki67 (SolA15); anti-rat IgG2ak isotype control (eBR2a); anti-IL-22 (1H8PWSR); anti-rat IgG1k isotype control (eBRG1); anti-EOMES (Dan11mag); anti-Tbet (eBio4B10); anti-FOXP3 (FJK-16s); anti-GATA3 (TWAJ); anti-CD16/CD32 (93); and 7AAD viability dye from eBioscience; anti-CD196 (CCR6; 140706) from BD Biosciences; anti-RORγt (Q31-378) and anti-mouse IgG2ak isotype control (G155-178) from BD Pharmingen. The LIVE/DEAD Fixable Aqua Dead Cell Stain Kit was purchased from Invitrogen. Bone marrow transplantation Bone marrow CD3 − cells were FACS sorted from Arntl fl, Vav1 Cre Arntl fl, Rag1 −/− Arntl fl, Rag1 −/− Rorgt Cre Arntl fl, Nr1d1 +/+, Nr1d1 −/− and C57BL/6 Ly5.1/Ly5.2 mice. 
Sorted cells (2 × 10⁵) from Arntl- or Nr1d1-deficient mice and from competent wild-type littermate controls were intravenously injected in direct competition with a third-party wild-type competitor (CD45.1/CD45.2), at a 1:1 ratio, into non-lethally irradiated NSG (150 cGy) or Rag2 −/− Il2rg −/− (500 cGy) mice (CD45.1). Recipients were analysed 8 weeks after transplantation. Quantitative RT–PCR RNA from sorted cells was extracted using the RNeasy micro kit (Qiagen) according to the manufacturer's protocol. Liver, small intestine (ileum) and colon epithelium were collected for RNA extraction using Trizol (Invitrogen) and zirconia/silica beads (BioSpec) in a bead beater (MIDSCI). RNA concentration was determined using a Nanodrop Spectrophotometer (Nanodrop Technologies). For TaqMan assays (Applied Biosystems), RNA was reverse-transcribed using a High Capacity RNA-to-cDNA Kit (Applied Biosystems), followed by a pre-amplification PCR using TaqMan PreAmp Master Mix (Applied Biosystems). TaqMan Gene Expression Master Mix (Applied Biosystems) was used in real-time PCR. Real-time PCR analysis was performed using StepOne and QuantStudio 5 Real-Time PCR systems (Applied Biosystems). Hprt, Gapdh and Eef1a1 were used as housekeeping genes. When multiple endogenous controls were used, these were treated as a single population and the reference value was calculated as the arithmetic mean of their CT values. The mRNA analysis was performed as previously described 35. In brief, we used the comparative CT method (2^−ΔCT), in which ΔCT(gene of interest) = CT(gene of interest) − CT(housekeeping reference value). When fold-change comparison between samples was required, the comparative ΔΔCT method (2^−ΔΔCT) was applied. TaqMan gene expression assays TaqMan Gene Expression Assays (Applied Biosystems) were the following: Hprt Mm00446968_m1; Gapdh Mm99999915_g1; Eef1a1 Mm01973893_g1; Arntl Mm00500223_m1; Clock Mm00455950_m1; Nr1d1 Mm00520708_m1; Nr1d2 Mm01310356_g1; Per1 Mm00501813_m1; Per2 Mm00478113_m1; Cry1 Mm00500223_m1; Cry2 Mm01331539_m1; Runx1 Mm01213404_m1; Tox Mm00455231_m1; Rorgt Mm01261022_m1; Ahr Mm00478932_m1; Rora Mm01173766_m1; Ccr9 Mm02528165_s1; Reg3a Mm01181787_m1; Reg3b Mm00440616_g1; Reg3g Mm00441127_m1; Muc1 Mm00449604_m1; Muc2 Mm01276696_m1; Muc3 Mm01207064_m1; Muc13 Mm00495397_m1; S100a8 Mm01276696_m1; S100a9 Mm00656925_m1; Epcam Mm00493214_m1; Apoe Mm01307193_g1; Cd36 Mm01307193_g1; Fabp1 Mm00444340_m1; Fabp2 Mm00433188_m1; and Scd1 Mm00772290_m1. Quantitative PCR analysis of bacteria in stools at the phylum level DNA from faecal pellets of female mice was isolated with the ZR Fecal DNA MicroPrep (Zymo Research). Bacterial loads were quantified from standard curves established by qPCR as previously described 2. qPCRs were performed with NZY qPCR Green Master Mix (Nzytech) and different primer sets using a QuantStudio 5 Real-Time PCR System (Applied Biosystems) thermocycler. Samples were normalized to 16S rDNA and reported according to the 2^−ΔCT method. Primer sequences were: 16S rDNA, F-ACTCCTACGGGAGGCAGCAGT and R-ATTACCGCGGCTGCTGGC; Bacteroidetes, F-GAGAGGAAGGTCCCCCAC and R-CGCTACTTGGCTGGTTCAG; Proteobacteria, F-GGTTCTGAGAGGAGGTCCC and R-GCTGGCTCCCGTAGGAGT; Firmicutes, F-GGAGCATGTGGTTTAATTCGAAGCA and R-AGCTGACGACAACCATGCAC. C. rodentium infection Infection with C. rodentium ICC180 (derived from the DBS100 strain) 36 was performed at ZT6 by gavage inoculation with 10⁹ colony-forming units (CFUs) 36, 37. 
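The comparative CT quantifications used throughout the qPCR analyses above come down to simple arithmetic. The following Python sketch illustrates both the 2^−ΔCT and 2^−ΔΔCT calculations, pooling multiple housekeeping genes by the arithmetic mean of their CT values as stated in the Methods; all CT values shown are hypothetical.

```python
import numpy as np

def delta_ct(ct_gene, ct_housekeeping):
    """Comparative CT method (2^-dCT): expression relative to housekeeping genes.

    ct_housekeeping may contain several genes (e.g., Hprt, Gapdh, Eef1a1);
    they are pooled by the arithmetic mean of their CT values, as in the Methods.
    """
    d_ct = ct_gene - np.mean(ct_housekeeping)
    return 2.0 ** (-d_ct)

def fold_change(ct_gene_a, ct_hk_a, ct_gene_b, ct_hk_b):
    """Comparative ddCT method (2^-ddCT): fold change of sample A over sample B."""
    dd_ct = (ct_gene_a - np.mean(ct_hk_a)) - (ct_gene_b - np.mean(ct_hk_b))
    return 2.0 ** (-dd_ct)

# Hypothetical CT values for illustration only.
print(delta_ct(24.1, [20.0, 19.5, 20.5]))        # relative expression, 2^-dCT
print(fold_change(24.1, [20.0, 19.5, 20.5],
                  26.3, [20.1, 19.8, 20.4]))     # fold change, 2^-ddCT
```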
Acquisition and quantification of the luciferase signal were performed in an IVIS Lumina III System (Perkin Elmer). Throughout infection, weight loss, diarrhoea and bloody stools were monitored daily. CFU measurement Bacterial translocation was determined in the spleen, liver, and mesenteric lymph nodes, taking into account total bacteria and luciferase-positive C. rodentium. Organs were removed, weighed and brought into suspension. Bacterial CFUs from organ samples were determined via serial dilutions on Luria broth (LB) agar (Invitrogen) and MacConkey agar (Sigma-Aldrich). Colonies were counted after 2 days of culture at 37 °C. Luciferase-positive C. rodentium was quantified on MacConkey agar plates using an IVIS Lumina III System (Perkin Elmer). CFUs were determined per volume (ml) for each organ. Antibiotic and dexamethasone treatment Pregnant females and newborn mice were treated with streptomycin (5 g/l), ampicillin (1 g/l) and colistin (1 g/l) (Sigma-Aldrich) in drinking water with 3% sucrose. Control mice were given 3% sucrose in drinking water as previously described 38. Dexamethasone 21-phosphate disodium salt (200 μg) (Sigma) or PBS was injected intraperitoneally at ZT0. After 4, 8, 12 and 23 h (ZT4, ZT8, ZT12 and ZT23), mice were killed and analysed. ChIP assay Enteric ILC3s from adult C57BL/6J mice were isolated by flow cytometry. Cells were fixed, cross-linked and lysed, and chromosomal DNA–protein complexes were sonicated to generate DNA fragments ranging from 200 to 400 base pairs, as previously described 2. DNA–protein complexes were immunoprecipitated using the LowCell# ChIP kit (Diagenode), with 1 μg of antibody against ARNTL (Abcam) or an IgG isotype control (Abcam). Immunoprecipitates were uncrosslinked and analysed by qPCR using primer pairs flanking putative ARNTL-binding sites (E-boxes) in the Ccr9 locus (determined by computational analysis using TFBSTools and JASPAR 2018). Results were normalized to input intensity and to the control IgG. Primer sequences were: A: F-CATTTCATAGCTTAGGCTGGCATGG; R-CTAGCTAACTGGTCTCAAAGTCCTC; B: F-GCCTCCCTTGTACTACCTGAAGC; R-TCCCAACACCAGGCCGAGTA; C: F-AGGGTCAATTTCTTAGGGCGACA; R-GCCAAGTGTTCGGTCCCAC; D: F-TCTGGCTTCTCACCATGACCACT; R-TCTAAGGCGTCACCACTGTTCTC; E: F-TTTGGGGAATCATCTTACAGCAGAG; R-ATTCATCCTGGCCCTTTCCTTCTTA; F: F-GCTCCACCTCATAGTTGTCTGG; R-CCATGAGCACGTGGAGAGAAAG; G: F-GGTCGAATACCGCGTGGGTT; R-CCCGGTAGAGGCTGCAAGAAA; H: F-AGGCAAATCTGGGCCTATCC; R-GGCCCAGTACAGAGGGGTCT; I: F-GGCTCAGGCTAGCAGGTCTC; R-TGTTTGGCCAGCATCCTCCA; J: F-ACTCAGAGGTGCTGTGACTCC; R-AGCTTTAGGACCACAATGGGCA. Food restriction (inverted feeding) Per1 Venus mice fed during the night received food from 21:00 to 9:00 (control group), whereas mice fed during the day had access to food from 9:00 to 21:00 (inverted group). Food restriction was performed for nine consecutive days, as previously described 12. For food restriction in constant darkness, Per1 Venus mice were housed in constant darkness with ad libitum access to food and water for 2 weeks. Then, access to food was restricted to the subjective day or night, for 12 days, in constant darkness. Inverted light–dark cycles To induce changes in the light regime, Per1 Venus mice were placed in ventilated, light-tight cabinets on a 12-h light–dark cycle (Ternox). After acclimation, light cycles were changed for mice in the inverted group for 3 weeks to completely establish an inverse light cycle, while they remained the same for mice in the control group, as previously described 39. 
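The CFU determination described above is a standard plate-count back-calculation; a minimal sketch follows, with made-up colony counts and dilution factors (the specific values are not from the paper).

```python
def cfu_per_ml(colonies, dilution_factor, plated_volume_ml):
    """Back-calculate CFU/ml from a countable plate of a serial dilution.

    colonies: colony count on the plate after incubation
    dilution_factor: total dilution of the plated suspension (e.g., 1e-4)
    plated_volume_ml: volume spread on the plate, in ml
    """
    return colonies / (dilution_factor * plated_volume_ml)

# Hypothetical example: 85 colonies on a 10^-4 dilution plate, 0.1 ml plated.
print(cfu_per_ml(85, 1e-4, 0.1))  # 8.5e6 CFU/ml of organ suspension
```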
For inverted light–dark cycle experiments followed by constant darkness, after establishing an inverse light–dark cycle, mice were transferred into constant darkness for 3 weeks. SCN lesions Bilateral ablation of the SCN was performed in 9–12-week-old Per1 Venus males by electrolytic lesion using stereotaxic brain surgery, as described previously 15. Mice were kept under deep anaesthesia using a mixture of isoflurane and oxygen (1–3% isoflurane at 1 l/min). Surgeries were performed using a stereotaxic device (Kopf). After identification of the bregma, a hole was drilled through which the lesion electrode was inserted into the brain. Electrodes were made by insulating a 0.25-mm stainless steel insect pin with heat-shrink polyester tubing, except for 0.2 mm at the tip. The electrode tip was aimed at the SCN, 0.3 mm anterior to bregma, 0.20 mm lateral to the midline, and 5.8 mm ventral to the surface of the cortex, according to the Paxinos Mouse Brain Atlas (2001). Bilateral SCN lesions were made by passing a 1-mA current through the electrode for 6 s, in the left and right SCN separately. Sham-lesioned mice underwent the same procedure, but no current was passed through the electrode. After surgery, animals were housed individually under constant dark conditions with ad libitum food and water and were allowed to recover for 1 week before behavioural analysis. Successfully SCN-lesioned mice were identified by magnetic resonance imaging (MRI), arrhythmic behaviour and histopathology analysis. Magnetic resonance imaging Screening of SCN-ablated mice was performed using a Bruker ICON scanner (Bruker, Karlsruhe, Germany). A RARE (Rapid Acquisition with Refocused Echoes) sequence was used to acquire coronal, sagittal and axial slices (five slices in each orientation) with the following parameters: RARE factor = 8, TE = 85 ms, TR = 2,500 ms, resolution = 156 × 156 × 500 µm³ (30 averages). For high-quality images, a 9.4-T BioSpec scanner (Bruker) operating with Paravision 6.0.1 software and interfaced with an Avance IIIHD console was used. Anatomical images (16 axial and 13 sagittal slices) were acquired using a RARE sequence with RARE factor = 8, TE = 36 ms, TR = 2,200 ms and a resolution of 80 × 80 × 500 µm³ (12 averages). Behavioural analysis Sham-operated and SCN-ablated mice were individually housed and, after a 24-h acclimation period, their movement was recorded for 72 h, in constant darkness, using the automated CleverSys animal behaviour system. Data were auto-scored by the CleverSys software. Videos and scoring were visually validated. Circadian rhythmicity was evaluated using the cosinor regression model 40, 41. Histopathology analysis Mice infected with C. rodentium were killed by CO2 narcosis, the gastrointestinal tract was isolated, and the full length of the caecum and colon was collected and fixed in 10% neutral buffered formalin. The colon was trimmed in multiple transverse and cross-sections and the caecum in one cross-section 42, and all were processed for paraffin embedding. Sections (3–4 μm) were stained with haematoxylin and eosin, and lesions were scored by a pathologist blinded to experimental groups, according to previously published criteria 43, 44, 45. In brief, lesions were individually scored (0–4, increasing severity) for: mucosal loss; mucosal epithelial hyperplasia; degree of inflammation; extent of the section affected in any manner; and extent of the section affected in the most severe manner, as previously described 45. 
The score was derived by summing the individual lesion and extent scores. Mesenteric (mesocolic) inflammation was noted but not scored. Liver, gonadal and subcutaneous fat from Arntl ∆Rorgt mice was collected, fixed in 10% neutral buffered formalin, processed for paraffin embedding, sectioned into 3-μm-thick sections and stained with haematoxylin and eosin. The presence of inflammatory infiltrates was analysed by a pathologist blinded to experimental groups. For the SCN lesion experiments, mice were killed by CO2 narcosis, necropsy was performed, and the brain was harvested and fixed in 4% PFA. Coronal sections of 50-µm thickness were prepared with a vibratome (Leica VT1000 S), from 0.6 to −1.3 mm relative to bregma, collected on Superfrost Plus slides (Menzel-Gläser) and allowed to dry overnight before Nissl staining. Stained slides were hydrated in distilled water for a few seconds and incubated in Cresyl Violet staining solution (Sigma-Aldrich) for 30 min. Slides were dehydrated in graded ethanol and mounted with CV Mount (Leica). Coronal sections were analysed for the presence or absence of an SCN lesion (partial versus total ablation, unilateral versus bilateral) on a Leica DM200 microscope coupled to a Leica MC170HD camera (Leica Microsystems, Wetzlar, Germany). Microscopy Adult intestines from Ret GFP mice were flushed with cold PBS (Gibco) and opened longitudinally. Mucus and epithelium were removed, and intestines were fixed in 4% PFA (Sigma-Aldrich) at room temperature for 10 min and incubated in blocking/permeabilizing buffer solution (PBS containing 2% BSA, 2% goat serum, 0.6% Triton X-100). Samples were dehydrated in methanol and cleared with benzyl alcohol–benzyl benzoate (Sigma-Aldrich) 18, 46. Whole-mount samples were incubated overnight or for 2 days at 4 °C with the following antibodies: anti-tyrosine hydroxylase (TH) (Pel-Freez Biologicals) and anti-GFP (Aves Labs). Alexa Fluor 488 goat anti-chicken and Alexa Fluor 568 goat anti-rabbit (Invitrogen) were used as secondary antibodies overnight at room temperature. For SCN imaging, RFP ΔCamk2a and RFP ΔRorgt mice were anaesthetized and perfused intracardially with PBS followed by 4% paraformaldehyde (pH 7.4, Sigma-Aldrich). The brains were removed, post-fixed for 24 h in 4% paraformaldehyde and transferred to phosphate buffer. Coronal sections (50 µm) were collected through the entire SCN using a Leica vibratome (VT1000 S) into phosphate buffer and processed free-floating. Sections were incubated with NeuroTrace 500/525 (Invitrogen, N21480) diluted 1/200 and mounted using Mowiol. Samples were acquired on a Zeiss LSM710 confocal microscope using EC Plan-Neofluar 10×/0.30 M27, Plan-Apochromat 20×/0.8 M27 and EC Plan-Neofluar 40×/1.30 objectives. RNA sequencing and data analysis RNA was extracted and purified from sorted small intestinal lamina propria cells isolated at ZT5 and ZT23. RNA quality was assessed using an Agilent 2100 Bioanalyzer. SMART-Seq II (ultra-low input RNA) libraries were prepared using the Nextera XT DNA sample preparation kit (Illumina). Sequencing was performed on an Illumina HiSeq 4000 platform (PE100). The global quality of FASTQ files with raw RNA-seq reads was assessed using FastQC (version 0.11.5). The vast-tools 47 (version 2.0.0) alignment and read-processing software was used for quantification of gene expression in read counts from FASTQ files, using the VastDB 47 transcript annotation for mouse genome assembly mm9. 
Only the 8,443 genes with read count information in all 12 samples and an average greater than 1.25 reads per sample were considered informative enough for subsequent analyses. Preprocessing of the read count data, namely transforming them to log2(counts per million) (logCPM), was performed with voom 48, included in the Bioconductor 49 package limma 50 (version 3.38.3) for the statistical software environment R (version 3.5.1). Linear models and empirical Bayes statistics were used for differential gene expression analysis, using limma. For heat maps, normalized RNA-seq data were plotted using the pheatmap (v1.0.10) R package. Heat-map genes were clustered using Euclidean distance as the metric. Statistics Results are shown as mean ± s.e.m. Statistical analysis was performed using GraphPad Prism software (version 6.01). Comparisons between two samples were performed using the Mann–Whitney U test or unpaired Student's t-test. Two-way ANOVA was used for multiple group comparisons, followed by Tukey's post hoc test or Sidak's multiple comparisons test. Circadian rhythmicity was evaluated using the cosinor regression model 40, 41, 51, using the cosinor (v1.1) R package. A single-component cosinor fits one cosine curve by least squares to the data. The circadian period was assumed to be 24 h for all analyses, and the significance of the circadian fit was assessed by a zero-amplitude test with 95% confidence. A single-component cosinor yields estimates and defines standard errors with 95% confidence limits for amplitude and acrophase using Taylor's series expansion 51. The latter were compared using a two-tailed Student's t-test where indicated. Results were considered significant at *P < 0.05, **P < 0.01, ***P < 0.001. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability Source data for the quantifications shown in all graphs plotted in the Figures and Extended Data Figures are available in the online version of the paper. The datasets generated in this study are also available from the corresponding author upon reasonable request. The RNA-seq datasets analysed are publicly available in the Gene Expression Omnibus repository under accession number GSE135235. Change history 22 November 2019: An Amendment to this paper has been published and can be accessed via a link at the top of the paper.
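Two of the computational steps above can be made concrete with short sketches. First, the informative-gene filter and the voom-style logCPM transform: the actual analysis used voom/limma in R, so the Python code below is only an illustrative re-implementation of the published logCPM formula, log2((count + 0.5)/(library size + 1) × 10^6), and interprets "read count information in all samples" as a nonzero count in every sample (an assumption on our part).

```python
import numpy as np

def filter_and_logcpm(counts):
    """Sketch of the preprocessing in the Methods (the paper used voom/limma).

    counts: genes x samples matrix of raw read counts. Keeps genes with a
    nonzero count in every sample and a mean count > 1.25 reads per sample,
    then applies the voom-style logCPM transform.
    """
    counts = np.asarray(counts, dtype=float)
    keep = (counts > 0).all(axis=1) & (counts.mean(axis=1) > 1.25)
    filtered = counts[keep]
    lib_sizes = filtered.sum(axis=0)  # per-sample library sizes
    logcpm = np.log2((filtered + 0.5) / (lib_sizes + 1.0) * 1e6)
    return keep, logcpm

# Toy matrix: 4 genes x 3 samples.
mask, logcpm = filter_and_logcpm([[10, 12, 9], [0, 1, 2],
                                  [100, 90, 110], [2, 2, 2]])
print(mask)    # which genes pass the filter
```

Second, a minimal least-squares version of the single-component cosinor regression used for the rhythmicity analysis; the paper used the R cosinor package, and the zero-amplitude significance test is omitted here. The model y(t) = MESOR + A·cos(2πt/24 + φ) is linearized into sine and cosine regressors and solved directly.

```python
import numpy as np

def cosinor_fit(t_hours, y, period=24.0):
    """Single-component cosinor fit by least squares.

    Linearizes y = M + b1*cos(wt) + b2*sin(wt), so that amplitude
    A = hypot(b1, b2) and acrophase phi = atan2(-b2, b1).
    """
    w = 2.0 * np.pi / period
    t = np.asarray(t_hours, dtype=float)
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    (mesor, b1, b2), *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
    return mesor, np.hypot(b1, b2), np.arctan2(-b2, b1)

# Toy data: rhythmic signal sampled every 4 h over 2 days, with noise.
rng = np.random.default_rng(0)
t = np.arange(0, 48, 4.0)
y = 5 + 2 * np.cos(2 * np.pi * t / 24 + 1.0) + rng.normal(0, 0.2, t.size)
print(cosinor_fit(t, y))  # approximately (5, 2, 1.0)
```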
It is well known that individuals who work night shifts or often travel across different time zones have a higher tendency to become overweight and to suffer from gut inflammation. The underlying cause of this robust phenomenon has been the subject of many studies that tried to relate physiological processes to the activity of the brain's circadian clock, which is driven by the daylight cycle. Now, the group of Henrique Veiga-Fernandes, at the Champalimaud Centre for the Unknown in Lisbon, Portugal, has discovered that the function of a group of immune cells, which are known to be strong contributors to gut health, is directly controlled by the brain's circadian clock. Their findings were published today in the scientific journal Nature. "Sleep deprivation, or altered sleep habits, can have dramatic health consequences, resulting in a range of diseases that frequently have an immune component, such as inflammatory bowel conditions," says Veiga-Fernandes, the principal investigator. "To understand why this happens, we started by asking whether immune cells in the gut are influenced by the circadian clock." The big clock and the little clock Almost all cells in the body have an internal genetic machinery that follows the circadian rhythm through the expression of what are commonly known as "clock genes." The clock genes work like little clocks that inform cells of the time of day and thereby help the organs and systems that the cells make up anticipate what is going to happen, for instance whether it is time to eat or sleep. Even though these cell clocks are autonomous, they still need to be synchronized in order to make sure that "everyone is on the same page." "The cells inside the body don't have direct information about external light, which means that individual cell clocks can be off," Veiga-Fernandes explains. "The job of the brain's clock, which receives direct information about daylight, is to synchronize all of these little clocks inside the body so that all systems are in sync, which is absolutely crucial for our wellbeing." Among the variety of immune cells that are present in the intestine, the team discovered that type 3 innate lymphoid cells (ILC3s) were particularly susceptible to perturbations of their clock genes. "These cells fulfill important functions in the gut: they fight infection, control the integrity of the gut epithelium and instruct lipid absorption," explains Veiga-Fernandes. "When we disrupted their clocks, we found that the number of ILC3s in the gut was significantly reduced. This resulted in severe inflammation, breaching of the gut barrier, and increased fat accumulation." These robust results drove the team to investigate why the number of ILC3s in the gut is so strongly affected by the brain's circadian clock. The answer to this question ended up being the missing link they were searching for. It's all about being in the right place at the right time When the team analyzed how disrupting the brain's circadian clock influenced the expression of different genes in ILC3s, they found that it resulted in a very specific problem: the molecular zip code was missing. It so happens that in order to localize to the intestine, ILC3s need to express a protein on their membrane that works as a molecular zip code. This 'tag' instructs ILC3s, which are transient residents in the gut, where to migrate. In the absence of the brain's circadian inputs, ILC3s failed to express this tag, which meant they were unable to reach their destination. 
According to Veiga-Fernandes, these results are very exciting because they clarify why gut health becomes compromised in individuals who are routinely active during the night. "This mechanism is a beautiful example of evolutionary adaptation," says Veiga-Fernandes. "During the day's active period, which is when you feed, the brain's circadian clock reduces the activity of ILC3s in order to promote healthy lipid metabolism. But the gut can be damaged during feeding. So after the feeding period is over, the brain's circadian clock instructs ILC3s to come back into the gut, where they are now needed to fight against invaders and promote regeneration of the epithelium." "It comes as no surprise, then," he continues, "that people who work at night can suffer from inflammatory intestinal disorders. It all has to do with the fact that this specific neuro-immune axis is so well regulated by the brain's clock that any changes in our habits have an immediate impact on these important, ancient immune cells." This study joins a series of groundbreaking discoveries produced by Veiga-Fernandes and his team, all drawing new links between the immune and nervous systems. "The concept that the nervous system can coordinate the function of the immune system is entirely novel. It has been a very inspiring journey; the more we learn about this link, the more we understand how important it is for our wellbeing, and we are looking forward to seeing what we will find next," he concludes.
10.1038/s41586-019-1579-3
Biology
Amber researcher finds new species of cockroach, first fossilized roach sperm
George Poinar, Supella dominicana, a new species of cockroach (Blattida: Ectobiidae) with developed spermatids in Dominican amber, Biologia (2022). DOI: 10.1007/s11756-022-01271-9
https://dx.doi.org/10.1007/s11756-022-01271-9
https://phys.org/news/2022-12-amber-species-cockroach-fossilized-roach.html
Abstract A small, male cockroach (7 mm in length) in Dominican amber is described as Supella dominicana sp. n. (Blattida: Ectobiidae = Blattellidae). The dark tegmina, which are equal in length to the abdomen, have a yellow cross bar and a central stripe, giving the illusion that the body is divided into two halves. The pronotum is partially triangular in outline, with rounded edges and an unusually flat surface. The fore femora bear two short apical terminal spines and a series of short, widely spaced marginal spines. The fore tarsus has the first article surpassing the others combined. The 7-segmented cerci are longer than wide. The arolia are well developed and the tarsal claws are symmetrical, of equal length, each with a blunt tooth. The two styles are small, equal in shape and bear a branched seta. Developing spermatids are present at the tip of the abdomen. This fossil, which is the first ectobiid cockroach described from Dominican amber, provides some new features of the genus Supella Shelford, 1911. Introduction Dominican amber contains a wealth of information on the biodiversity, ecology, biogeography, speciation and extinction of plants and animals in the mid-Cenozoic (Poinar 1999, 2010). It contains valuable information on the parasites and pathogens of past organisms, including malaria-carrying mosquitoes and nematomorph-infected cockroaches. Regarding cockroaches, we tend to remember those maintained in laboratory cultures at schools and universities for various physiological and behavioral experiments. Yet these represent only a small fraction of the group, with the majority of species occurring throughout tropical and subtropical regions around the world (Hinkelman 2021a, b, 2022; Wappler and Vršanský 2021). The habits of cockroaches vary considerably, and not all live around human habitations. Some live in caves, others prefer stumps or the underside of bark, and some genera are soil-burrowing (see Sendi 2021; Vršanský et al. 2019a; Song et al. 2021). Some are active during the day, while others come out at night. Body size can also vary greatly, ranging from 2 mm long in myrmecophilic cockroaches (Bohn et al. 2021) to 66 mm in the Neotropical species Megaloblatta blaberoides (Walker, 1871) (Bell et al. 2007), which can have an overall length of over 120 mm. Cockroaches are considered medically important insects since they are carriers of human pathogens, including bacteria such as Salmonella, Staphylococcus and Streptococcus that cause gastroenteritis, resulting in diarrhea, fever, and vomiting. They can also carry viruses (Vršanský et al. 2019b). Among those closely associated with human habitations around the world are the German cockroach (Blattella germanica Linnaeus, 1767), the oriental cockroach (Blatta orientalis Linnaeus, 1758), the brown-banded cockroach (Supella longipalpa (Fabricius, 1798)) and the American cockroach (Periplaneta americana Linnaeus, 1758). Aside from these insects spreading pathogens and causing allergic reactions, their mere presence and the difficulty of eliminating them from homes can result in psychological stress and low morale (Bowles et al. 2018). Currently, the genus Supella contains three subgenera (Rehn 1947) and ten species. 
Nine of them occur in the Ethiopian zoogeographical region (Princis 1969; Roth 1985, 1991) and one species in the Saharo-Sindian regional zone (specifically the Arabian Peninsula; Grandcolas 1994). The present paper describes a cockroach of the genus Supella Shelford, 1911 from Dominican amber, thus revealing another insect taxon with no native species remaining in Hispaniola (see also Lewis et al. 1990) or even in the entire Nearctic and Neotropics. Materials and methods The fossil originated from the La Toca amber mine in the northern mountain range (Cordillera Septentrional) of the Dominican Republic, between Puerto Plata and Santiago (Poinar 1991; Donnelly 1988). Amber from mines in this region was produced by Hymenaea protera Poinar, 1991 (Poinar 1991) (Fabaceae). Dating of Dominican amber is controversial, with the youngest proposed age of 20–15 Mya based on Foraminifera (Iturralde-Vinent and MacPhee 1996) and the oldest of 45–30 Mya based on coccoliths (Cepek in Schlee 1990). These dates are based on microfossils in the strata containing the amber, which is secondarily deposited in turbiditic sandstones of the Upper Eocene to Lower Miocene Mamey Group (Draper et al. 1994). Dilcher et al. (1992) stated that “…the amber clasts, from all physical characteristics, were already matured amber at the time of re-deposition into marine basins. Therefore, the age of the amber is greater than Miocene and quite likely is as early as late Eocene”. The issue is further complicated by the discovery of Early Oligocene amber in Puerto Rico and Maastrichtian–Paleocene amber in Jamaica (Iturralde-Vinent 2001), showing that amber from a range of deposits occurs in the Greater Antilles. Observations and photographs were made with a Nikon SMZ-10 R stereoscopic microscope and a Nikon Optiphot compound microscope with magnifications up to 600×. Helicon Focus Pro X64 was used to stack photos for better clarity and depth of field. Systematic paleontology Order: Blattida Latreille, 1810 (typified Blattariae Latreille, 1810) Family: Ectobiidae Brunner von Wattenwyl, 1865 = Blattellidae Karny, 1908 Subfamily: Pseudophyllodromiinae Hebard, 1929 Genus Supella Shelford, 1911 Supella dominicana sp. n. ZooBank code: E64B2B5E-CC3C-423A-9813-00B4D238724A. Diagnosis (based on a complete adult male holotype) Small body (7 mm long); tegmina dark, with yellow cross bar and central stripe giving the illusion that the body is divided into two halves, external border of tegmina pale; pronotum triangular to trapezoidal in outline, with rounded edges and flat dorsum; tegmina short, not surpassing abdomen, twice as long as wide, apices rounded; marginal field of tegmina extends to 43% of wing length; forewing costo-radial field with numerous (16) oblique veins. Fore femora with two apical spines and a series of short marginal spines. Fore tarsus with first article surpassing the other articles combined; pulvillus only on fourth tarsomere; arolia well developed; tarsal claws symmetrical, blunt-toothed; styles small, equal in shape, with branched setae. Description (Figs. 1, 2, 3, 4, 5 and 6). Fig. 1 Dorsal view of holotype of Supella dominicana in Dominican amber. Scale bar = 1.5 mm. Fig. 2 Ventral view of holotype of Supella dominicana in Dominican amber. Scale bar = 1.4 mm. Fig. 3 Lateral view of holotype of Supella dominicana in Dominican amber. Scale bar = 1.4 mm. Fig. 4 Holotype of Supella dominicana in Dominican amber. a Fields of tegmen. 
A = Anal field; D = Discoidal field; S = Scapular field; M = Marginal field. Scale bar = 1.4 mm. b Face. A = antenna; Ai = antennal insertion; C = clypeus; G = gena; L = labrum; M = maxillary palp; O = ocellar spot. Scale bar = 90 μm. Fig. 5 Holotype of Supella dominicana in Dominican amber. a Terminal femoral spines. Scale bar = 210 μm. b Fore tarsus. Scale bar = 240 μm. Fig. 6 Holotype of Supella dominicana in Dominican amber. a Arolium (arrow) on hind leg. Note the small basal tooth on the claws. Scale bar = 75 μm. b Style. Scale bar = 90 μm. c Cercus. Scale bar = 210 μm. d Male genital hook (sclerite HLA). Scale bar = 210 μm. e Acrosomes (dark spots) on sperm cells. Scale bar = 200 μm. Well-preserved male with body complete except for the left antenna, which was distorted during burial (Figs. 1, 2 and 3). Small robust species (L = 7 mm; width = 2.8 mm). Body elliptical in outline; tegmina dark (L = 5.0 mm), short, not surpassing abdomen, with yellow cross bar and central stripe giving the illusion that the body is divided into two halves; marginal field extends to 43% of wing length; costal veins numerous (16), oblique (Fig. 4a). No vein deformity (sensu Vršanský 2005) recorded. Head small, triangular in shape (Fig. 4b), with very fine antennae almost as long as the body; two lateral ocelli (30 μm in diameter) present; pronotum broadly triangular in outline, with rounded edges, dorsum flat, pale yellow with an incomplete wide, dark reddish border, length 1.4 mm, width 1.8 mm; legs slender, yellowish brown, cursorial; lower front margin of fore femur armed with 2 long apical spines (210 μm in length) (Fig. 5a), followed by 4 medium-long spines with a row of very short spines (setae) between them; mid and hind tibiae armed with a series of widely separated long, robust spines; all tarsi 5-segmented and terminated with claws and well-developed arolia; fore tarsus with first tarsomere surpassing the other articles combined (Fig. 5b); tarsal claws of equal length (105 μm), each bearing a blunt basal tooth (Fig. 6a); pulvilli reduced, only noticeable on fourth tarsomere; abdominal sternites distinct, yellowish brown; styles arising from sockets (105 μm wide at base) (Fig. 6b); style seta 5-branched, central seta 315 μm in length; cerci of medium length (L = 1.3 mm), with 7 segments, all longer than wide (Fig. 6c). Male genital hook yellowish, 0.7 mm in length (Fig. 6d); tip of abdomen shows a sperm bundle (spermatodesm) containing spermatozoa with dark acrosomes (Fig. 6e) that are surrounded by mucopolysaccharides (gelatinous material). Derivation of name The specific epithet indicates the place of origin of the fossil. Type material Holotype (O–2–43) deposited in the Poinar amber collection maintained at Oregon State University. Type locality La Toca mine in the northern mountain ranges of the Dominican Republic. Systematic remarks Placement in Supella is based on body shape, coloration and features of the tegmina as outlined by Rehn (1947). While the new species shares several features with Supella miocenica Vršanský, Cifuentes-Ruiz, Vidlička, Čampor, Jr. et Vega, 2011 from Mexican amber (Vršanský et al. 2011), the two species can be easily separated by color patterns alone. S. miocenica has two distinct triangular patches on the pronotum, while the pronotum of S. dominicana is basally yellow except for a reddish brown partial margin (these color differences could represent sexual dimorphisms). Also, the 13 cercomeres of S. 
miocenica are wider than long, while the 7 cercomeres of S. dominicana are longer than wide. In addition, the pronotum is significantly vaulted in S. miocenica but essentially flat in S. dominicana. Another Mexican amber cockroach is Anaplecta vega Barna, Šmídová et José, 2019 (Barna et al. 2019). This species, which is assigned to the (sub)family Anaplectinae, is under 5 mm in length, has prolonged mouthparts, long maxillary palps with a flattened terminal palpomere, and large eyes, which separate it from S. dominicana. S. dominicana is homoplasically very similar to Anaplecta calosoma Shelford, 1912, but Anaplecta Burmeister, 1838 has different cerci with long setae. Other ectobiid genera in the Pseudophyllodromiinae with a similar size range have various characters that separate them from the fossil (Blatchley 1920; Li et al. 2017, 2020; Gutiérrez 2002a, b, 2005, 2009; Gutiérrez and Pérez-Gelabert 2000, 2001; Greenwalt and Vidlička 2015; Vršanský et al. 2021). There are no extant native members of Supella in Hispaniola (Gutiérrez 2002a, b, 2005, 2006; Gutiérrez and Pérez-Gelabert 2001). Discussion This is the first ectobiid cockroach described from Dominican amber. While Gutiérrez and Pérez-Gelabert (2000) provided a list of the genera of fossil cockroaches in Dominican amber, which included Anaplecta, Cariblatta Hebard, 1916, Euthlastoblatta Hebard, 1917, Holocompsa Burmeister, 1838, Plectoptera Saussure, 1864 and Pseudosymploce Rehn et Hebard, 1927, none had been described at that time (Arillo and Ortuño 2005). Later, Gorochov (2007) described Erucoblatta semicaeca, Holocompsa nigra and H. abbreviata. Poinar (1999) described hairworms (Nematomorpha) emerging from a cockroach in or near the genus Supella in Dominican amber (Bell et al. 2007). Aside from S. longipalpa (Fabricius, 1798), which is cosmopolitan in distribution, S. miocenica and S. dominicana are the only known Neotropical members of the genus, while a number of living species occur in Africa (Rehn 1947; Vršanský et al. 2011). It is well known that many insect lineages in Dominican amber, such as Mastotermes Froggatt, 1897 termites (Isoptera: Mastotermitidae) and Leptomyrmex Mayr, 1862 ants (Hymenoptera: Formicidae), are now absent from Hispaniola. Also, the closest living relative of the tree that produced Dominican amber (Hymenaea protera Poinar, 1991) is Hymenaea verrucosa Gaertner, 1830 of East Africa (Poinar 1991, 1992, 1999, 2010). Vršanský et al. (2011) discussed the same phenomenon with Supella miocenica in Mexican amber and listed its extant African and Asian con-subgeners, postulating that general extinctions in the Americas probably took place in the interval between the deposition of the Dominican and Chiapas ambers. Nevertheless, the present discovery of cosmopolitan fauna in American (Dominican) amber might support the later hypothesis of much more recent American extinctions, approaching 3.92 Ma (Vršanský 2005; Vršanský et al. 2017). Another explanation would be that the age of Dominican amber is comparable to, or even greater than, that of Chiapas amber (see Materials and methods). Differences separating S. dominicana from the African and Asian members of the subgenus are color patterns and size (lengths vary from 16.0 to 25.5 mm in S. mirabilis (Shelford, 1908), S. gemma Rehn, 1947, S. tchadiana Roth, 1987 and S. occidentalis Princis, 1963, but reach only 7.0 mm in S. dominicana). 
At the very tip of the abdomen is a portion of a sperm bundle (spermatodesm) containing spermatozoa with dark acrosomes. The spermatodesm is surrounded by mucopolysaccharides (gelatinous material). The sperm would have been set free (become motile) in the seminal receptacle of a female. This appears to be the first record of fossilized cockroach sperm.
The cockroach, reviled around the world for its sickness-causing potential and general creepiness, now occupies an important position in the study of amber fossils thanks to research by an Oregon State University scientist. George Poinar Jr., professor emeritus in the OSU College of Science, has identified a new cockroach species. The male specimen, which Poinar named Supella dominicana, is encased in Dominican amber and is the first fossil cockroach to be found with sperm cells. "It is well preserved, with a yellow cross bar across the wings and a central, vertical, yellow stripe that appears to divide the body into two parts," he said. "It has long spines, used for defense, on its legs, especially the hind legs. Also of interest is the sperm bundle containing spermatozoa with dark acrosomes, structures covering the head of the sperm, since fossil sperm are rare." The specimen, about 30 million years old, is also the only cockroach of its variety, ectobiid, to be discovered in amber from the Dominican Republic, though it has no living descendants in the Dominican Republic or anywhere else in the West Indies. As is the case with another Supella cockroach described earlier from Mexican amber, S. dominicana's closest living relatives are in Africa and Asia. "So what caused these cockroaches to become extinct when it is so difficult to get rid of them today?" wondered Poinar, an international expert in using plant and animal life forms preserved in amber to learn about the biology and ecology of the distant past. There are more than 4,000 species of cockroaches crawling around multiple habitats all over the Earth, but only about 30 types of roaches share habitat with humans, and just a handful of those are regarded as pests. But they are highly regarded as such, Poinar notes. Ancient, primitive and extraordinarily resilient, cockroaches can survive in temperatures well below freezing and can withstand pressures of up to 900 times their body weight, he said, which means that if you try to kill one by stepping on it, you probably won't succeed. Cockroaches are so tough that they can live for a week after being decapitated, he added, and they can scuttle at a lightning pace: their speed-to-body-length ratio is equivalent to a human running at about 200 mph. Since it doesn't bother cockroaches to walk through sewage or decaying matter, they'll potentially contaminate whatever surface they touch in your home as they search for food in the form of grease, crumbs, pantry items, even book bindings and cardboard. "They are considered medically important insects since they are carriers of human pathogens, including bacteria that cause salmonella, staphylococcus and streptococcus," Poinar said. "They also harbor viruses. And in addition to spreading pathogens and causing allergic reactions, just their presence is very unsettling." Prodigiously reproductive, able to squeeze into tiny hiding places and equipped with enzymes that protect them from toxic substances, cockroaches are not easily evicted once they show up somewhere, he said. There's also growing evidence that they're developing resistance to many insecticides. "The difficulty in eliminating them from homes once they've taken up residence can cause a lot of stress," Poinar said. "Many might say that the best place for a cockroach is entombed in amber." Poinar's identification of the new species was published in the journal Biologia.
10.1007/s11756-022-01271-9
Medicine
An early neuronal dysfunction in Parkinson's that could help early diagnosis
G. Carola et al., Parkinson's disease patient-specific neuronal networks carrying the LRRK2 G2019S mutation unveil early functional alterations that predate neurodegeneration, npj Parkinson's Disease (2021). DOI: 10.1038/s41531-021-00198-3
http://dx.doi.org/10.1038/s41531-021-00198-3
https://medicalxpress.com/news/2021-07-early-neuronal-dysfunction-parkinson-diagnosis.html
Abstract A deeper understanding of the early disease mechanisms occurring in Parkinson's disease (PD) is needed to reveal restorative targets. Here we report that human induced pluripotent stem cell (iPSC)-derived dopaminergic neurons (DAn) obtained from healthy individuals or from patients harboring a PD-causing LRRK2 mutation can create highly complex networks with evident signs of functional maturation over time. Compared to control neuronal networks, the networks of LRRK2 PD patients displayed elevated bursting behavior, in the absence of neurodegeneration. By combining functional calcium imaging, biophysical modeling, and DAn-lineage tracing, we found a decrease in DAn neurite density that triggered overall functional alterations in PD neuronal networks. Our data implicate early dysfunction as a prime focus that may contribute to the initiation of downstream degenerative pathways preceding DAn loss in PD, highlighting a potential window of opportunity for pre-symptomatic assessment of chronic degenerative diseases. Introduction Parkinson's disease (PD) is the most common neurodegenerative movement disorder, with an estimated prevalence in industrialized countries of 0.3% in the general population, which increases to 1.0% in people older than 60 years and to 3.0% in people older than 80 years 1. Clinically, PD is characterized by a classical motor syndrome linked to a progressive loss of dopamine-containing neurons (DAn) in the substantia nigra pars compacta, and by disabling non-motor symptoms related to extranigral lesions. Current therapies for PD are symptomatic and do not limit the progression of disability with time. It has been proposed that early intervention might slow down or even stop disease progression, by preserving neurons from undergoing irreversible neurodegeneration 1, 2. However, early treatment relies on early diagnosis, which unfortunately is especially complicated in the case of PD. Current diagnostic modalities in PD are based on the presence of motor symptoms, a stage at which up to 70% of DAn have already been lost 3. Even though pre-motor symptoms are known to precede the clinical diagnosis of PD by as much as a decade, they are rather unspecific and unsuitable as stand-alone biomarkers of the disease 4. Therefore, the identification of early diagnostic or progression markers of PD represents an urgent medical need. Although the majority of PD cases are of unknown cause, so-called idiopathic PD, around 5% have been shown to have a genetic basis, with mutations in the LRRK2 gene accounting for the largest number of cases of familial PD 5. Interestingly, LRRK2 polymorphisms are also considered a relevant genetic determinant of sporadic PD 6, and LRRK2 function appears dysregulated in sporadic cases of PD 7, even in the absence of LRRK2 mutations/polymorphisms. These findings, together with the fact that PD associated with mutations in LRRK2 (L2-PD) is clinically indistinguishable from sporadic PD, position LRRK2 as an essential player for understanding both genetic and idiopathic PD 8. LRRK2 is a highly complex protein with multiple enzymatic domains, involved in a variety of intracellular signaling pathways and cellular processes such as cytoskeleton dynamics, vesicle trafficking and endocytosis, autophagy, reactive oxygen species handling, mitochondrial metabolism, and the function of immune cells. However, the exact physiological role of LRRK2 and its implication for PD pathogenesis remain unknown 8. 
Of special relevance for the investigation of early disease markers, transgenic mouse models of L2-PD display, before any events of neurodegeneration, abnormally elevated excitatory activity and altered spine morphology in dorsal striatal spiny projection neurons 9. Moreover, experimental models of other neurodegenerative conditions, such as Alzheimer's disease 10, 11 and amyotrophic lateral sclerosis 12, have been shown to exhibit neuronal hyperexcitability before disease onset. It has also been demonstrated that the combination of PD with dementia often correlates with a disruption of both functional and effective connectivity in the cortex 13. In contrast, the association of PD with depression correlates with disrupted functional connectivity between the median cingulate cortex and the prefrontal cortex and cerebellum 14, 15. The development of induced pluripotent stem cell (iPSC) technologies enables the generation of patient-specific, disease-relevant, cell-based experimental models of human diseases. Importantly, iPSC-based models can recapitulate some of the earliest signs of disease, even at pre-symptomatic stages 12, 16. In this study, we used an experimental platform based on DAn-enriched neuronal cultures derived from L2-PD patients, their gene-edited isogenic counterparts, or healthy individuals. Such cultures formed active neuronal networks, the functionality of which was analyzed by calcium imaging. After multiple iterations of experimental characterization and biophysical modeling of neuronal network behavior, we could identify early alterations in PD neuronal function that were not present in control networks and that predated the onset of neuronal degeneration. Results Generation and characterization of iPSC-derived DA neurons A total of seven iPSC lines representing L2-PD patients and healthy age-matched controls, along with gene-edited counterparts and fluorescent TH reporters, were used for the current studies (see Table 1 and "Methods" for further details). Some of these iPSC lines had been previously generated and fully characterized in our laboratories 17, 18, 19, whereas two additional TH reporter lines were generated for this study (Supplementary Fig. 1). Table 1 Summary of the healthy controls and patients used in this study. iPSC differentiation toward the DAn fate was performed using a modified version of the previously established midbrain floor-plate protocol 20, which enabled the maintenance of differentiated cells for up to 10 weeks 21. Briefly, we first cultured iPSCs on Matrigel with mTeSR medium until they reached 80% confluence, and then induced specification toward the ventral midbrain (VM) fate using a combination of knockout serum medium and neural induction medium (Fig. 1a). At day 12 post-plating (D12), the cells exhibited a homogeneous morphology and marker profile of VM floor-plate progenitors, co-expressing FOXA2 and LMX1A together with VM NPC markers such as OTX2 and EN1, paired with the neuroectodermal stem cell marker Nestin (Fig. 1b). These progenitors were then cultured in neuronal differentiation medium supplemented with factors including BDNF, GDNF, TGF-β, and DAPT, with the aim of fostering neuronal differentiation and survival (Fig. 1a). At D50, the majority of the cells were positive for the dendritic marker MAP2, and quantitative immunolabeling for tyrosine hydroxylase (TH) and FOXA2 revealed that 30–40% of them were also committed to the DA neuronal fate (Fig. 1c, d). Fig. 
1: Generation of ventral midbrain (VM) dopaminergic neurons from human iPSCs. a Scheme of the VM dopaminergic neuron differentiation protocol. b Representative immunofluorescence (IF) images of CTR (SP11) and PD1 (SP12) NPCs at day 12 of the differentiation process. iPSC-derived neural cultures express floor-plate progenitor markers, such as Lmx1A, FoxA2, Otx2, Nestin, and EN1. c Representative IF images of CTR (SP11) and PD1 (SP12) differentiated neuronal cultures expressing neuronal markers (MAP2, TH) and midbrain-type DA markers (FoxA2 and Girk2) at day 50. Scale bar is 50 μm. d Percentage of TH/FoxA2 and TH/Girk2 vmDAn at day 50 for all the lines [CTR (SP11), CTR TH (SP11 TH), gene-edited isoPD1 (SP12 wt/wt), PD1 (SP12), PD1 TH (SP12 TH), PD2 (SP13)]. Number of independent experiments: n = 3. e, f Representative images of neuronal cultures at D50 expressing specific markers of maturation: e DAT (dopamine transporter) (scale bar, 50 μm); and f PSD95 (post-synaptic marker, red) and synapsin (synaptic marker, green). Orthogonal views show colocalization (scale bar, 10 μm). g Representative images of CTR (SP11) and PD1 (SP12) neuronal cultures at D35, D50 and D80 showing the expression of TH and MAP2 markers at the different timepoints. h Percentage of TH+ cells over DAPI at the three different timepoints (D35, D50 and D80). i Heatmap of gene expression profiles of neuronal cultures of CTR (SP11), PD1 (SP12), PD2 (SP13), and the gene-edited isogenic PD1 line (isoPD1) at D50, with dendrograms showing the strong similarities between independent experiments and the absence of clustering between control (CTR and isoPD lines, light brown) and PD (PD1 and PD2 lines, blue) conditions. Shown are transcripts with ≥2-fold change in expression, grouped as upregulated (50 transcripts, green bar) or downregulated (107 transcripts, purple bar). No statistically significant differences were found at p-Adj ≤ 0.1 when comparing control and PD conditions. Each type of culture was analyzed a minimum of 2 independent times, except for the gene-edited isogenic PD1 line, for which only one RNA preparation passed the quality control and could be analyzed. By the same day of differentiation (D50), about 35% of TH+ neurons also expressed the A9 domain-specific marker G protein-activated inward rectifier potassium channel 2 (GIRK2) (Fig. 1c, d), and displayed the dopamine transporter (DAT), an essential marker of mature DAn (Fig. 1e). TH+ neurons also showed expression of the pre-synaptic marker synapsin and the post-synaptic marker PSD95, indicating their capability to form synapses (Fig. 1f). Under these conditions, control (CTR) and PD iPSCs gave rise to VM DAn that were morphologically homogeneous and showed the expected features of mature VM DAn, including complex dendritic arborization (Fig. 1g). Although the protocol used was comparably effective in all iPSC lines analyzed, we found some variability in the number of VM DAn across lines at D35, ranging from 10 to 20% of all differentiated cells. This ratio did not depend on the presence or type of disease (Fig. 1h). Instead, it seemed related to the specific iPSC clone used and to its evolution at the early stages of differentiation. After establishing equivalent DAn-enriched cultures from both control and patient iPSC lines, we evaluated whether there were any differences in cell viability between them. 
We first noted that control and L2-PD iPSC-derived DAn were indistinguishable, based on immunofluorescence (IF) images (Fig. 1g), up to 80 days in culture, the latest timepoint investigated. By counting the number of TH-expressing cells (Fig. 1h), we found no decline when cultured over time up to D80, strongly suggesting that DAn do not degenerate under these conditions (Fig. 1g, h). We also found no differences in the percentage of cells with pyknotic nuclei in patient lines compared to control lines (data not shown; D50: 12–15% in all the lines; D80: 15–20% in all the lines). We next analyzed whether changes in transcriptome profiles suggestive of neurodegeneration appeared under these conditions. For this purpose, we measured the expression of a panel of 770 genes relevant to this process (NanoString Human Neuropathology Panel) in neuronal cultures differentiated from control and L2-PD iPSCs at D50. Independent differentiation experiments from the same iPSC lines displayed highly similar gene expression profiles (Fig. 1i, Supplementary Fig. 2a), highlighting the robustness of the differentiation protocol. A comparison of gene expression levels between control and L2-PD cultures identified 157 differentially expressed transcripts with ≥2-fold change (50 upregulated and 107 downregulated), but no differences reached statistical significance at p-Adj ≤ 0.1 (Supplementary Dataset), nor did they correlate with PD-related hallmarks (Supplementary Fig. 2b–d). Thus, even though previous studies have found overt signs of neurodegeneration upon long-term culture of DAn differentiated from L2-PD iPSCs 17, 18, 22, 23, under the culture conditions used in the present work (which include neurotrophic factors) they appeared to form overall healthy neuronal cultures, similar to the ones generated by control iPSCs. Characterization of neuronal activity Calcium activity recordings were then performed across all seven independent lines to examine the functional maturity of the iPSC-derived neuronal networks after 35, 50, and 80 days of differentiation (D35, D50, D80) (Fig. 2a). We noted that our calcium fluorescence assay enabled tracking the behavior of ~500 neurons in the field of view with high spatial and temporal resolution, allowing us to resolve single cells and their dynamic interactions. Fig. 2: Network dynamics of vmDA neurons. a Representative image of a bright-field recording at D80 (isoPD1) using the calcium imaging assay. Scale bar is 100 μm. Regions of interest were manually selected for each neuron (diameter 10 μm; colored circles) to obtain the normalized calcium fluorescence time series of spontaneous activity, DFF (%) ≡ 100 × (F − F0)/F0, with F0 the fluorescence signal of the neuron at rest. The green box illustrates the fast rise of fluorescence upon activation that provides the spiking onset time (arrowhead). The dashed black boxes illustrate coordinated neuronal activity that shapes network bursts when several neurons are involved. b Average neuronal activity along maturation for the different cell lines. Data points are expressed as mean ± SD. Trend lines are linear regressions. The numbers of cultures used in each condition and timepoint were: D35 (CTR: n = 3; isoPD1: 3; PD1: 4; PD2: 4); D50 (3; 9; 9; 5); D80 (3; 9; 8; 5). c Representative raster plots (top) and global network activity (GNA, bottom) for CTR (SP11), isoPD1, and PD1 (SP12) neuronal cultures at D80. Each plot shows 5 min of recording. Peaks in the GNA reveal network bursts (blue dots). 
Extreme bursting events (red dots) are those above a threshold (red dotted line) set at the 95% confidence interval of the CTR burst amplitude distribution. CTR and isoPD1 networks show a relatively low percentage of extreme events, in contrast with their abundance in PD1 networks. d Ratio of extreme events for all studied cell lines at D50 and D80. The colored boxes are a guide to visualize the distributions, which show the mean ± SD and the individual realizations (dots). For panels ( b ) and ( d ), the number of independent experiments for each condition and timepoint is: D35 (CTR: n = 3, isoPD1: 3, PD1: 4, PD2: 4); D50 (3, 9, 9, 5); D80 (3, 9, 8, 5). *** p < 0.001 (ANOVA with multiple comparison analysis). Sharp increases in the fluorescence traces (Fig. 2a ) revealed spontaneous neuronal activations, which were analyzed to extract the onset times of elicited action potentials. With these data we first computed the average neuronal activity along culture time (Fig. 2b ) and observed that all lines evolved similarly, indicating that any anomalies in PD networks would depend on the structure of the activity patterns, rather than on the strength of activity itself. Thus, we next computed the Global Network Activity (GNA), defined as the fraction of neurons in the network that coactivated in a time window of 1 s. As shown in Fig. 2c , the GNA captures the level of neuronal synchronization present in the raster plot. For control cultures and also for the rescued isogenic PD line (left and central panels), collective activity encompassed synchronous activations of moderate size, between 10 and 40% of the network. In contrast, PD neuronal cultures (Fig. 2c , right panels) displayed two-state dynamics, with strong whole-network synchronous events combined with quiescent intervals. Importantly, the distinct GNA patterns of control and PD neuronal cultures hint at the existence of intrinsically different network mechanisms in the two systems, which orchestrate a markedly different collective behavior. The GNA analysis also showed that average neuronal activity by itself was not sufficient to reveal disease-related alterations. To quantify the differences between cell lines exposed by the GNA analysis, we next analyzed the amplitude of the GNA events and extracted those that exceeded a given threshold (Fig. 2c , red dotted line), termed ‘extreme events’. We observed that the ‘ratio of extreme events’, i.e., the number of large bursting episodes relative to all detected episodes, was much higher in PD lines than in CTR or in genetically corrected isogenic control (isoPD) lines, particularly at late stages of maturation (D80) (Fig. 2d ), which could render them more susceptible to stress by creating an overly rigid, synchrony-locked network despite their continued viability in culture. Functional connectivity of control and PD neuronal networks The different structure of global activity of CTR, isoPD, and PD lines hints at the existence of distinct functional connectivity traits between them. To shed light on network functionality, we used transfer entropy (see “Methods” for details) to compute the functional connectivity among all pairs of active neurons in a given network. As shown in Fig. 3a (top panels), the functional networks displayed abundant connections, with a combination of short-range and long-range links that extended across the network.
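Conceptually, a pairwise estimate of this kind can be sketched with a didactic order-1 transfer-entropy computation on binarized spike trains. This is only a sketch under simplifying assumptions: the study used the Generalized Transfer Entropy procedure with Markov order 2 and the significance testing described in Methods, and the function below is illustrative, not the actual implementation.

```python
import numpy as np

def transfer_entropy(x, y):
    """Order-1 transfer entropy TE(X -> Y), in bits, for binary NumPy
    spike trains x, y (1 = spike, 0 = silence). A didactic simplification
    of the Generalized Transfer Entropy used in the study."""
    x_past, y_past, y_now = x[:-1], y[:-1], y[1:]
    te = 0.0
    for xp in (0, 1):
        for yp in (0, 1):
            for yn in (0, 1):
                p_xyy = np.mean((x_past == xp) & (y_past == yp) & (y_now == yn))
                p_xy = np.mean((x_past == xp) & (y_past == yp))
                p_y = np.mean(y_past == yp)
                p_yy = np.mean((y_past == yp) & (y_now == yn))
                if min(p_xyy, p_xy, p_y, p_yy) > 0:
                    # Term: p(x-, y-, y+) * log2[ p(y+ | y-, x-) / p(y+ | y-) ]
                    te += p_xyy * np.log2(p_xyy * p_y / (p_xy * p_yy))
    return te
```

In practice, an estimate like this is only meaningful when compared against a surrogate (joint) distribution, with a link retained when the score exceeds a significance threshold, as done in the Methods.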
Even though the functional networks seemed similar in spatial organization, we observed significant differences between control and PD networks. The first difference was a lower density of connections in the PD line, suggesting an overall degradation of functionality. A second difference concerned the structure of functional communities, i.e., groups of neurons that tend to connect among themselves more strongly than with the rest of the network. As shown in the bottom panels of Fig. 3a , functional communities were abundant in CTR and isoPD cultures (highlighted blue boxes), indicating cross-talking of neurons in small groups, in agreement with the moderate size of collective activations. For PD cultures, however, the communities were much larger (orange boxes), indicating not only a failure to form functional microcircuits, but also a tendency toward excessively strong network synchronicity, as also seen in the raster plots. Fig. 3: Functional connectivity of CTR, isoPD, and PD lines. a Top, Representative functional connectivity maps at D80 for CTR, isoPD1, and PD1 cultures. Dots are neurons and lines functional connections. The diameter of a dot and its opacity are proportional to the connectivity of the neuron it represents. Bottom, corresponding functional connectivity matrices. Black points are functional connections and colored boxes are functional communities. b Comparison of the number of communities and community statistic Q for the three lines at D50 and D80. PD1 networks are excessively integrated, with a relatively small number of strongly interconnected communities (low Q) as compared to CTR and isoPD1 networks. The colored boxes are a guide to visualize the distributions, which show the mean ± SD and the individual realizations (dots). Number of independent experiments: D50 (CTR: n = 3, isoPD1: 4, PD1: 6); D80 (3, 5, 6). *** p < 0.001; ** p < 0.01; * p < 0.05 (ANOVA with multiple comparison analysis). c Sketch of functional differences between normal and PD networks according to the data, with PD networks displaying on average a lower connectivity combined with fewer and excessively connected communities. d Cumulative distribution functions (CDF) of functional connections (in % of network) at D50 for CTR, PD1, PD2, and the gene-edited isogenic PD1 line (isoPD1). The CDFs of the two PD lines exhibit a trend toward a lower connectivity as compared to the CTR and isoPD1 ones, which are similar. e Corresponding distributions at D80. The two PD lines maintain their low connectivity profiles and strengthen their differences relative to CTR and isoPD1, which remain similar. The number of independent experiments used in panels ( d ) and ( e ) is: D50 (CTR: n = 3, isoPD1: 4, PD1: 6, PD2: 5); D80 (3, 5, 6, 5). These results were reproducible across realizations. As shown in Fig. 3b , PD networks exhibited a clear trend toward a small number of communities, which was accompanied by a tendency toward a stronger bond between these communities, i.e., abnormally excessive integration. The latter was captured by the ‘community statistic Q’. The lower the Q value, the stronger the integration in the network. These differences appeared at D50 and substantially strengthened at D80. While CTR and isoPD cultures showed a moderate integration, PD cultures exhibited an abnormally strong integration. The combination of these results is summarized in Fig. 3c , which depicts two toy networks with the same number of neurons but with different functional organization.
The normal, healthy network is characterized by a high average connectivity and well-defined small communities, while the PD network is characterized by a lower average connectivity and large, strongly linked communities that effectively merge into an almost single structure. To complete the functional analysis, we compared in more detail the statistics of functional connections. Figure 3d, e shows, for the D50 and D80 stages of development, the cumulative distribution function of connections, CDF(k), for control, isoPD, and PD lines. This distribution portrays the probability that a neuron in the network has a number of connections less than or equal to k. For D50, all distributions were similar and relatively close to one another, illustrating that they originated from similar dynamics, namely a combination of individual activity and network bursting. The distributions also rose gradually, indicating that weakly connected neurons were rare. However, the PD distributions at D50 revealed a tendency toward a lower connectivity. Indeed, although the distributions were similar in shape, an analysis of the distance between distributions (Kullback–Leibler divergence and Kolmogorov–Smirnov test, Supplementary Fig. 3 ) showed statistical differences among them, indicating that D50 could be the characteristic timepoint at which functional alterations in PD lines start to be detectable. The differences among distributions became accentuated at D80, with PD cultures exhibiting a more pronounced trend toward a lower connectivity (a sharp increase of CDF(k) at low k values) that markedly departed from CTR and isoPD cultures. Interestingly, there were no statistically significant differences between CTR and isoPD cultures at this timepoint, a result that suggests the successful rescue of affected cell lines through correction of the LRRK2 mutation by gene editing. In light of these results, we considered the control network as the reference for healthy development and function, and the departure from it as a signature of the pathology. Thus, we hypothesize that the LRRK2 mutation undermines the development of neuronal circuitry to such an extent that it alters collective network activity and functional organization. Contrasting dynamics of TH and non-TH neurons in control and PD lines To investigate the origin of the functional impairment found in PD DAn, we took advantage of the genetic TH-reporter tool created in our lab 19 . This reporter allows us to identify TH and non-TH neuronal populations in the networks and analyze their functional characteristics separately. Figure 4a, b exemplifies such a construction for representative isoPD and PD networks at D80, in which two neuronal layers, one corresponding to TH neurons (red) and another corresponding to non-TH neurons (blue), interact functionally. The spontaneous activity of each subpopulation of neurons at D80 is shown in the accompanying raster plots. A comparison of the activity patterns between isoPD and PD cultures reveals the strong contrast in their dynamics. While the non-TH population in isoPD cultures shows sustained activity with weak collective events, PD cultures show strong synchrony episodes that extend to both subpopulations. Fig. 4: Functional connectivity and dynamics of TH+ and non-TH+ neurons. a Left, representative functional network of an isoPD1 culture at D80, indicating the location of TH+ neurons (red) and non-TH+ neurons (blue). The diameter of a dot is proportional to the connectivity of the neuron it represents.
Only 10% of the connections are shown for clarity. Right, corresponding raster plots of the two populations. b Corresponding analysis for a PD1 culture, in which the non-TH+ subpopulation exhibits a much stronger synchronous behavior as compared to controls. c Ratio of extreme events for each subpopulation of neurons at two maturation stages, D50 and D80. PD cultures show in general a higher ratio of extreme events as compared to controls. The non-TH+ population in the PD network at D50 shows strong variability in the ratio of extreme events across realizations, indicating the onset of malfunction. The same population at D80 is dominated by extreme events that reflect the strong synchronous behavior. The colored boxes are a guide to visualize the distributions, which show the mean ± SD and the individual realizations (dots). The number of independent experiments in panel ( c ) is: D50 (CTR: n = 3, isoPD1: 5, PD1: 5); D80 (3, 5, 5). *** p < 0.001; ** p < 0.01; * p < 0.05 (ANOVA with multiple comparison analysis). In addition, as shown in Fig. 4c , a comparison of the ratio of extreme events for TH and non-TH subpopulations indicates that, on average, PD cultures exhibited a higher number of extreme events in both subpopulations when compared to controls. However, along development, TH and non-TH ratios for controls were similar at D50 and D80, whereas the ratios for the PD line switched from mostly TH at D50 to mostly non-TH at D80. Altogether, these results reveal abnormal subpopulation dynamics in PD networks, with overall excessive bursting and a reversal of leadership between TH and non-TH subpopulations over the course of development. We also noted that PD cultures at D50 showed important variability among realizations, with extreme events in the non-TH population being absent in some experiments and abundant in others. Given this variability, we hypothesize that the D50 timepoint might mark the onset of structural alterations that later translate into dynamic and functional deficits. Since about 70% of the neurons in our networks are non-TH (Fig. 1g, h ), the above results suggest that, in healthy control cultures, non-TH neurons drive spontaneous activity in the network, while TH neurons play a regulatory yet essential role by facilitating the coexistence of small neuronal coactivations and whole-network bursting events. This regulatory role appears to be lost in PD cultures, in which TH neurons shape spontaneous activity into patterns of excessive synchrony that translate into an abundance of extreme bursting episodes. We, therefore, hypothesize that the LRRK2 mutation alters the balance between neuronal subpopulations by degrading the physical coupling among neuronal types. An in silico model captures the dynamic alterations in PD networks Next, we carried out numerical simulations to better understand the impact of structural failure of TH+ cells on PD network dynamics. Specifically, we evaluated whether a reduction in TH cell connectivity was sufficient to switch global network dynamics from balanced to excessively synchronous. We first reproduced the behavior of control networks. As shown in Fig. 5a , we used the same neuronal spatial arrangement as the cultured networks and considered a mixed population of 55% excitatory non-TH neurons, 25% excitatory TH+ neurons, and 20% inhibitory neurons. These values were selected to match both the typical 80% excitation of cortical circuits in vitro 24 and the TH/DAPI ratio found in our cultures (Fig. 1 ).
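To make this construction concrete, the population allocation and the pruning step used later to model PD damage can be sketched as follows. This is a minimal illustration under stated assumptions: the distance-based wiring rule and the 0.5 mm axonal reach are simplifications of this sketch, standing in for the axon-growth rules described next, and the uniform positions replace the experimentally observed ROI coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 300  # network size used in the simulations

# Type fractions from the model: 55% excitatory non-TH, 25% excitatory
# TH+ (dopaminergic), and 20% inhibitory neurons.
kind = rng.choice(["exc", "TH", "inh"], size=N, p=[0.55, 0.25, 0.20])

# Stand-in positions over the imaged field of view (in mm); the actual
# simulations reuse the experimentally observed ROI positions.
pos = rng.uniform(low=[0.0, 0.0], high=[2.8, 2.1], size=(N, 2))

# Simplified wiring: connect i -> j whenever j lies within i's axonal reach.
reach = 0.5  # mm; an assumed scale, not a value from the paper
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
adj = (dist < reach) & ~np.eye(N, dtype=bool)

# PD-like damage: 10% of TH+ neurons lose ~80% of their outgoing links,
# mimicking the neurite pruning of Fig. 5b.
th_idx = np.flatnonzero(kind == "TH")
pruned = rng.choice(th_idx, size=int(0.10 * th_idx.size), replace=False)
for i in pruned:
    targets = np.flatnonzero(adj[i])
    if targets.size:
        drop = rng.choice(targets, size=int(0.8 * targets.size), replace=False)
        adj[i, drop] = False
```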
Neuronal dendritic trees and axons were then incorporated according to realistic biological rules, so that a connection was established whenever an axon crossed the dendritic tree of a neuron (Fig. 5b , top). Once the network structure was set, dynamics was incorporated through an extended Hodgkin–Huxley model with parameters adjusted to replicate the activity in controls. As shown in the top panels of Fig. 5c , normal network dynamics was qualitatively similar to the experiments shown in Fig. 2c (left and center) and characterized by an intense background activity in combination with coordinated activity episodes. The inspection of the different populations (Fig. 5c , top, right panels) shows that non-TH neurons were also the drivers of network dynamics. It should be noted that inhibition was necessary in the simulations to ensure a sufficiently high spontaneous activity. Fig. 5: Numerical simulations of normal and PD networks. a Representative CTR culture at day 80, used as a reference for the spatial allocation of neurons in the numerical model. The enlarged area depicts a detail of the neuronal spatial arrangement. 300 neurons are used in the simulation, and are randomly assigned as dopaminergic (DA, red, 25% of network), inhibitory (green, 20%), and excitatory (blue, 55%). b Random pruning algorithm, in which PD damage is simulated by a shortening of either axons (red curve) or the dendritic arbor (dotted red circle) in DA neurons (red hexagons). In the sketch, normal connectivity is established when the dendritic arbor of any neuron (dotted circles) intersects an axon. Neurite pruning disconnects neurons either because of axonal shortening (neuron #1) or dendritic loss (neuron #2) in DAn. c Raster plots of simulated normal and PD networks, with the latter corresponding to axonal pruning of 10% of DA neurons, each one losing about 80% of connections. In the raster plots, the left panels show the dynamics of the entire network, whereas the right panels show the dynamics in each subpopulation. All raster plots range from 0 to 300, but the number of active neurons in each plot varies according to the population monitored. PD simulations show a markedly synchronous behavior of the excitatory population that translates onto the entire network. d Ratio of extreme events for different percentages of the pruned DAn subpopulation, showing that the presence of extreme bursting events increases as network damage progresses. The colored boxes are a guide to visualize the distributions, which show the mean ± SD and the individual realizations (dots). The number of individual simulations is n = 4 for each condition. *** p < 0.001; * p < 0.05 (ANOVA with multiple comparison analysis). e Cumulative distribution of functional connections (CDF) for two PD networks (at 10 and 30% pruning) and a normal, non-pruned network. The distributions show the same trend as in the experiments, with PD departing from normal toward a low-connectivity scenario. To explore PD dysfunction, we used the same control networks and modeled the shortening of neurites in TH neurons, effectively reducing the connectivity probability in the TH population (Fig. 5b , bottom). In this construction, a single affected TH cell lost about 80% of its connections. We then explored the minimum fraction of TH cells that had to be affected to observe an impact on network dynamics.
As shown in the bottom panels of Fig. 5c , PD simulations were characterized by an exceedingly synchronous behavior that captured well the experimental observations of PD cultures (Fig. 2c , right), with a much lower background activity and a similar dynamic in both TH and non-TH populations. Simulations also demonstrated that damaging ~10% of TH cells sufficed to drive the networks toward a chronic bursting behavior with an abundance of extreme, whole-network synchronous events (Fig. 5c ). As shown in Fig. 5d , we compared three different ratios of affected TH neurons. An increase in this ratio mimics disease progression, i.e., developmental time in the experiments. The results show that the ratio and occurrence of extreme events are similar for different damage rates, possibly because the remaining excitatory and inhibitory neurons are sufficiently numerous to maintain activity. Interestingly, these results also indicate that a small amount of damage suffices to drive the network toward an excessively synchronous scenario. Finally, an analysis of functional connectivity showed that the model also reproduces well the experimental observations, with a tendency for simulated PD cultures to shift toward lower connectivity states (Fig. 5e ). TH+ neurons carrying the LRRK2 mutation feature a reduced number of neurites To verify the prediction of the in silico model above and expose physical alterations in the circuitry of PD lines, we investigated the morphology of TH neurons in CTR, isoPD, and PD neuronal cultures at D50 and D80 (Fig. 6a and Supplementary Fig. 4a ). We found that DAn differentiated from PD iPSC lines showed a lower number of TH neurites compared to those derived from CTR or isoPD lines (1.2 ± 0.1 neurites for PD vs. 4.5 ± 0.2 for CTR and 4.1 ± 0.3 for isoPD, Fig. 6b ). In addition, the number of neurites in CTR and isoPD cultures increased along development, while it decreased in PD cultures, indicating a progressive deterioration of network structure for the latter (Fig. 6b ). In contrast, the number of neurites in MAP2+ neurons from control, isoPD, and PD lines did not show any significant differences (Fig. 6c ), confirming that the dynamic and functional deficits of PD lines are localized to the TH subpopulation, and that these cells experience a gradual structural failure in the form of neurite loss. Fig. 6: Biological confirmation of the in silico hypothesis. a Gene-edited isoPD1 (SP12 wt/wt) and PD1 (SP12) TH+ neuronal processes are traced (orange) to determine the number and the structure of neurites at D50 (left panels) and D80 (right panels). The top panels for each cell line show the original image of TH+ neurons. A blue triangle highlights a representative traced neuron shown in the middle panels. The bottom panels for each cell line show MAP2+ neuronal processes. Scale bar is 50 μm in all images. b The quantification of the number of TH neurites shows significant differences between CTR and PD1 both at D50 and D80. c No differences are detected in the number of MAP2 neurites. d Levels of released DA measured in culture media from CTR (SP11), PD1 (SP12), PD2 (SP13), and isoPD1 (SP12 wt/wt) at D50 and D80 of differentiation, relative to CTR levels. * p < 0.05; ** p < 0.01; *** p < 0.001 (ANOVA with multiple comparison analysis). Number of independent experiments: n = 3. Next, we evaluated DA release in the media collected from CTR, isoPD, and PD neuronal cultures.
Supernatants of PD cultures revealed decreased dopamine levels at D50 and D80 compared with those of CTR and isoPD cultures (Fig. 6d ), indicating that the reduction in TH+ neurites has functional consequences on DA release. Finally, to investigate whether the early signs of alteration in network structure and dynamics could contribute to neurodegeneration, we maintained iPSC-derived DAn over culture times longer than D80. Interestingly, in aged cultures (110 days, the latest timepoint analyzed), PD DAn showed morphological alterations, including a reduced number and length of neurites, and significantly decreased cell survival compared with isoPD DAn (Supplementary Fig. 5a–d ). In contrast, neuronal degeneration was not evident in non-TH+ cells, as judged by the percentage of MAP2+/TH− neurons in the neuronal cultures (Supplementary Fig. 5e ). Discussion Since early diagnosis of PD is expected to dramatically improve the outcome of therapies under current development, in the present study we interrogated a human neuronal cell-based model of PD for the earliest detectable functional alterations. We found that DA neurons derived from iPSC representing healthy individuals or PD patients harboring the LRRK2 mutation developed appropriate physiological characteristics, forming complex and mature networks during the differentiation process. However, PD neuronal networks developed abnormal hypersynchrony at the latest timepoint analyzed (D80), in contrast with healthy or gene-edited isogenic PD networks. These new data from PD-affected human DAn indicate that early dysfunction may contribute to the initiation of downstream degenerative pathways that ultimately lead to DAn loss in PD. A general limitation of human iPSC-based disease modeling strategies that should be taken into account when interpreting the results of our studies is the well-documented variability among iPSC lines and clones 25 . To rule out the impact of interline/interclone variability in our findings, we used several iPSC lines and iPSC clones to represent each experimental condition. Specifically, controls included two independent iPSC lines generated from two independent individuals, and a total of four iPSC clones, whereas the PD condition was similarly represented by two independent iPSC lines generated from two independent individuals, and a total of three iPSC clones (Table 1 ). Moreover, the confirmation of our findings in neuronal cultures differentiated from gene-edited isogenic controls of PD iPSC provides further reassurance that the alterations identified in PD samples in our studies are indeed related to the disease condition, rather than reflecting the specific iPSC line/clone utilized. Our previous work showed that PD patient-specific iPSC-derived DAn are particularly susceptible to neurodegeneration upon long-term culture 17 , 18 , 23 , 26 . In those studies, DAn were differentiated from iPSC and then maintained in culture in the absence of neurotrophic factors, which resulted in evident signs of DAn neurodegeneration only in PD samples, after 75 days in culture. For the present study, we used a different protocol for the generation and maintenance of iPSC-derived DAn in culture, which included neurotrophic factors and allowed us to compare the function of mature DAn from PD and control iPSC in the absence of neurodegeneration for up to 80 days.
Under these conditions, we used a calcium fluorescence imaging assay to monitor the functional neuronal activity of control and L2-PD iPSC-derived DAn at different timepoints of differentiation. We found that DAn derived from control iPSC exhibited an increase in both the number of spontaneous activity events and the number of bursting episodes, i.e., network-spanning synchronous activations. Such a trend is what would be expected from maturing healthy iPSC-derived neurons in vitro 27 . Conversely, L2-PD iPSC-derived DAn showed a two-state dynamic completely different from controls, characterized by strong synchronous events combined with quiescent intervals. The dynamics of PD cultures suggests that the structure of collective activation, but not the average individual neuronal activity, was the critical feature of malfunctioning PD neuronal behavior. This apparent sign of functional alterations was only displayed by the diseased neurons at a late timepoint. Early in development, both control and PD neuronal cultures showed similar functional behavior, indicating that there is a defined period of time in which PD network development starts to degrade and functional deficiencies emerge. The proportion of control iPSC-derived DAn in each of the cell lines remained unchanged along the ten weeks of culturing. Thus, despite remaining viable in culture, PD-affected DAn failed to provide the necessary structural support for rich collective dynamics and brought the network toward an excessively synchronous state. To address the role of neurite connectivity, we incorporated the connectivity probability of the TH population into a biophysical model. Numerical simulations of the model revealed that neurite density loss in DAn was among the first causes of dynamic and functional alterations. Random neurite loss in 10% of DAn sufficed to produce bursting-dominated dynamics and an excessively connected functional network. Our experimental data confirmed the reduction of neurites specifically in DAn of PD neuronal cultures. Thus, even though the culture conditions used in the present studies prevented the appearance of the overt signs of PD DAn neurodegeneration observed in previous works using more standard culture conditions 17 , 22 , 28 , they could not prevent a slight reduction in PD DAn neurites. Therefore, it is important to note that this phenotype appears to be independent of differentiation and culture methods, depends on the presence of the LRRK2 G2019S mutation (inasmuch as it is absent in gene-corrected isogenic PD DAn), and underlies the functional alterations in overall network dynamics. It is tempting to speculate that this observation might be related to the recent finding that the LRRK2-G2019S mutated protein interferes with microtubule-based motors 29 . Previous studies of PD patients 15 and animal models of the disease 9 have reported early hyperexcitability of DAn and corticospinal neurons of the motor cortex. In our study, we demonstrate that DAn derived from patient iPSC harboring LRRK2 mutations exhibit hyperexcitability in culture. This increased activity might contribute to triggering a cascade of excitotoxic disease mechanisms involving pathological changes in Ca 2+ handling and the eventual activation of cell death pathways.
In contrast, recent work supports a link between hyperexcitability and neuroprotection 30 , 31 , in particular when hyperexcitability is induced via reduction of neurite density and a consequent lack of connections and communication between neurons. This connectivity loss may trigger compensatory mechanisms in which the neurons create an aberrant structural and functional connectivity that can be regarded as an early marker of pathology 32 , 33 . To determine the actual role of hyperexcitability in PD, it will be necessary for future studies to examine the effects of finely controlled manipulations of excitability on human DAn. Thus, dysfunction of DAn physiology appears to precede the functional alterations that then spread throughout the overall culture, placing DAn dysfunction as an early sign of overall alteration and neurodegeneration. Moreover, we directly connected this alteration to a reduction in neurite arborization using both experimental data and in silico analysis. Taken together, these findings highlight the importance of addressing early changes in the mechanisms underlying spike generation at the DAn soma when considering disease pathogenesis and potential treatment strategies for PD. Furthermore, our findings show the usefulness of sensitive physiological studies of human iPSC-derived DAn for future work aiming to develop new diagnostic tools and therapeutics for PD. Methods iPSC lines and gene editing The parental iPSC lines used in our studies were previously generated and fully characterized 17 , 18 , 19 . The generation and use of human iPSCs in this work were approved by the Spanish competent authorities (Commission on Guarantees concerning the Donation and Use of Human Tissues and Cells of the Carlos III National Institute of Health). All procedures adhered to internal and EU guidelines for research involving the derivation of pluripotent cell lines. All subjects gave informed consent for the study using forms approved by the Ethical Committee on the Use of Human Subjects in Research at Hospital Clinic in Barcelona. The iPSC lines used in this study include one iPSC line obtained from a healthy donor (SP11) and two lines obtained from PD patients carrying the LRRK2 G2019S mutation (SP12 and SP13). From these original lines, isogenic controls solely differing in the presence of the LRRK2 G2019S mutation were obtained by correcting the LRRK2 mutation in the SP12 iPSC line. Expanded subject information, cell characterization, and technical details of the original iPSCs are described in Table 1 . Generation of CRISPR/Cas9 plasmids and donor template for homology-directed repair To correct the LRRK2 mutation, LRRK2 G2019S mutant SP12 iPSC were edited using CRISPR/Cas9. The CRISPR/Cas9 plasmid pSpCas9(BB)-2A-GFP (PX458) was a gift from Dr. Feng Zhang (Broad Institute, MIT; Addgene plasmid #48138) 34 . The original pCbh promoter was exchanged for the full-length pCAGGS promoter to achieve higher expression levels in hiPSCs. Custom guide RNAs were cloned into the BbsI sites as annealed oligonucleotides. The donor template for HDR was generated using standard molecular cloning procedures. Briefly, for the TH donor template, homology arms were amplified from genomic DNA and verified by Sanger sequencing. Resulting sequences matched those of the reference genome GRCh38. The homology arms were inserted into the KpnI-ApaI (5′HA) and SpeI-XbaI (3′HA) sites of pBS-SK(−).
The sequence coding for the P2A peptide was added to mOrange with the primers used to amplify the gene, and the PCR product was inserted into the ApaI-XhoI sites of the pBS-5′HA-3′HA plasmid. Finally, LoxP-pRex1-Neo-SV40-LoxP was amplified from the aMHC-eGFP-Rex-Neo plasmid (gift from Dr. Mark Mercola; Addgene plasmid #21229) 35 and inserted between the XhoI-SpeI sites of the plasmid. For the LRRK2 donor template, homology arms were amplified from genomic DNA of a wild-type donor and verified by Sanger sequencing. The homology arms were inserted into the KpnI-XhoI (5′HA) and SpeI-NotI (3′HA) sites of pBS-SK(−). LoxP-pRex1-Neo-SV40-LoxP was inserted in a second cloning step between the SalI-BamHI sites. Gene editing in iPSC To generate the TH-mOrange hiPSC reporter cell line, cells were transfected with the HDR template and a Cas9- and gRNA-encoding plasmid, the latter overlapping the TH gene stop codon. In total, 800,000 iPSCs were seeded in 10-cm plates the day before transfection. iPSCs were co-transfected with 6 µg of CRISPR/Cas9 plasmid and 9 µg of HDR template using FuGENE HD (Promega) at a 1:3 DNA-to-reagent ratio. Cells were plated in selection medium containing 50 µg/mL G418 (Melford Laboratories Ltd., Ipswich, UK) and maintained for 2 weeks until resistant colonies could be screened. At that time, one-half of each resistant colony was manually picked and site-specific integration was verified by PCR. To correct the LRRK2 G2019S mutation in the SP12 iPSC line, cells were transfected with the wild-type HDR template and a Cas9- and gRNA-encoding plasmid whose gRNA overlapped the insertion site for the selection cassette. Transfection, clone selection, and subsequent screening were conducted as described above. To excise the selection cassette, edited iPSCs were transfected with a CRE recombinase-expressing plasmid, a gift from Dr. Michel Sadelain (Sloan Kettering Institute; Addgene plasmid #27546) 36 . At 48 h post-transfection, cells were dissociated and seeded at clonal density on a feeder layer of irradiated human fibroblasts. When colonies attained sufficient size, they were picked and subcultured in independent Matrigel-coated wells. Cells were sampled and checked for cassette excision by PCR and Sanger sequencing. Those clones in which the cassette was excised were expanded, cryopreserved, and karyotyped. Expanded information regarding the oligonucleotides used during gene editing procedures is listed in Supplementary Table 3 in Supplementary Information . iPSC differentiation into the VM DA lineage Directed differentiation of iPSC into ventral dopaminergic neurons (DAn) was carried out following a previously published protocol 20 , with minor modifications. Briefly, iPSCs were cultured in mTeSR commercial medium until they reached 80% confluence. Ventral midbrain induction was then forced by switching to SRM medium (KO-DMEM, 15% KO serum, 1% P/S, 1% glutamine, 1% NEAA, 0.1% beta-mercaptoethanol) with SB431542 (Tocris 1614), a selective inhibitor of TGF-β signaling, and LDN193189 (Stemgent 04-0074), a BMP inhibitor, to achieve dual SMAD inhibition, plus SAG and purmorphamine (Calbiochem 540220), SHH pathway activators, to induce neuroepithelial stem cell formation and proliferation. The medium was next changed to Neurobasal with 1% P/S, 1% N2, and 2% B27-VitA, supplemented with CHIR99021 (Stemgent 04-0004; CHIR), a potent GSK3B inhibitor known to strongly activate WNT signaling and to induce LMX1A in FOXA2 ventral midbrain (VM) dopaminergic neuron precursors.
The best co-expression of LMX1A and FOXA2, crucial factors for inducing ventral midbrain fate, was obtained at day 12 of differentiation. After generating and characterizing the VM precursors, these cells were cultured in Neurobasal medium with 1% P/S and 2% B27-VitA, supplemented with neurotrophic factors: 20 ng/ml of BDNF (450-02, Peprotech), 20 ng/ml of GDNF (450-10, Peprotech), 1 ng/ml of TGF-β3 (R&D Systems 243-B3), ascorbic acid (Sigma A-4034), 0.5 mM of dbcAMP (D0627-25MG, Sigma), and 5 µM of DAPT (565770; Calbiochem). On day 20, the precursors were split into wells previously coated with poly-ornithine (15 µg/mL)/human laminin (1 µg/mL) and fibronectin (2 µg/mL). Cells were differentiated for an additional 15, 30, and 60 days, finally providing the timepoints stated in the main text (D35, D50, and D80). Studied cultures were fixed with 4% PFA and characterized for VM specificity. Immunocytochemistry The differentiated cultures were fixed with 4% PFA (15 min), washed three times with DPBS (15 min), then washed either with TBS 1x (low-triton protocol for vesicle-specific antibodies) or with TBS1+ (for standard protein immunocytochemistry) three times for 15 min, and then blocked for 2 h with TBS++ with or without low triton. Primary antibodies were incubated for 48 h at 4 °C. Samples were then washed with TBS 1x/TBS+ (15 min) three times. The blocking was next repeated for 1 h at room temperature, followed by a 2-h incubation with the secondary antibodies (all at 1:200 dilution). The antibodies used are listed in Supplementary Table 4 in Supplementary Information . The samples were then washed with TBS 1x (15 min) three times and incubated with the nuclear stain DAPI (Invitrogen, dilution 1:5000) for 10 min. After washing out the DAPI twice with TBS 1x, samples were mounted with PVA:DABCO, dried for 2 h at room temperature, and stored at 4 °C until imaged. Samples were imaged using an SP5 confocal microscope (Leica) and analyzed with Fiji (Fiji Is Just ImageJ). Gene expression analysis using the Human Neuropathology Panel 50 ng of total RNA per sample was prepared for analysis with a NanoString Human Neuropathology Panel chip. The assay was performed on an nCounter SPRINT Analysis System (Sanford Consortium for Regenerative Medicine Stem Cell Genomics Core, La Jolla) according to the manufacturer’s instructions. The nSolver software by NanoString was used to normalize gene expression data. Data were then analyzed with ROSALIND (OnRamp BioInformatics, Inc., San Diego, CA), a platform with a HyperScale architecture, to interpret targeted gene expression data and to create heatmaps. Read distribution percentages, violin plots, identity heatmaps, and sample MDS plots were generated as part of the QC step. The limma R library 37 was used to calculate fold changes and p -values. Clustering of genes for the final heatmap of differentially expressed genes was done using the PAM (Partitioning Around Medoids) method from the fpc R library. The hypergeometric distribution was used to analyze the enrichment of pathways, gene ontology, domain structure, and other ontologies; functional enrichment analysis was also performed using HOMER 38 .
Several database sources were referenced for enrichment analysis, including Interpro 39 , NCBI 40 , KEGG 41 , 42 , MSigDB 43 , REACTOME 44 , and WikiPathways 45 . Enrichment was calculated relative to a set of background genes relevant to the experiment. The transcriptomic data have been deposited in the Gene Expression Omnibus (GEO) of the National Center for Biotechnology Information and are accessible through GEO Series accession number GSE167335 . Neuronal quantification during differentiation Immunostaining images were analyzed using Fiji software to quantify the percentage of TH/DAPI at days 35, 50, and 80; TH/FOXA2 and TH/GIRK2 at day 50; and the presence of pyknotic nuclei at days 50 and 80. An average of 5 images was quantified for each ratio, and each differentiation was performed at least three times. Neurite quantification Immunostaining images were analyzed with NeuronJ software to quantify the number of neurites per neuron and neurite length for TH+ cells, and the number of neurites per neuron for MAP2+ cells. Each neuron was traced in NeuronJ, and each trace was automatically measured and organized in order to obtain information for every single cell. An average of five images and ten neurons per image were analyzed at each timepoint for TH+ and MAP2+ data for every iPSC-derived line. Calcium imaging assay We used calcium fluorescence imaging 46 , 47 , 48 , 49 to evaluate the differences in spontaneous activity between healthy and PD neurons. Calcium imaging allowed the monitoring of a large population of neurons, simultaneously and non-invasively (Fig. 2 ). Living neurons were incubated for 30 min in a solution that contained 3 ml of the recording medium (EM, consisting of 128 mM NaCl, 1 mM CaCl 2 , 1 mM MgCl 2 , 45 mM sucrose, 10 mM glucose, and 10 mM HEPES; pH 7.4) and 4 µg/ml of the cell-permeant calcium-sensitive dye Fluo-8-AM. At the end of incubation, cultures were washed with 2 ml of fresh EM to remove residual Fluo-8 and transferred to a glass-bottom dish (Mattek) filled with EM for imaging. The dish was mounted on a Zeiss inverted microscope equipped with a CMOS camera (Hamamatsu Orca Flash 2.8) and an arc lamp for fluorescence. Greyscale images of spontaneous neuronal activity were acquired at 20 Hz for 15 min in a field of view of 2.8 × 2.1 mm 2 that contained between 300 and 700 neurons. A bright-field image of the monitored region was taken at the end of the recording session for easier cell identification. Data were then analyzed with the custom software NETCAL, run in MATLAB, to extract the trains of neuronal activations, as follows. Regions of Interest (ROIs) corresponding to cell bodies that exhibited prototypical neuronal morphology were manually drawn on the bright-field images, and their fluorescence intensity as a function of time was extracted (Fig. 2a ). These fluorescence traces were then inspected to remove non-neuronal signals (either undifferentiated cells or glia). A Schmitt-trigger method 49 was next used on the fluorescence traces to identify the timing of neuronal activations, finally yielding the set of spike trains for each neuron. The resulting data of neuronal network activity were visualized in the form of raster plots. Collective episodes of coherent activity (network bursts) appeared as the synchronous activation of a large fraction of the neurons in the network in a short time window.
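To make the trace processing concrete, the DFF normalization and the Schmitt-trigger onset detection can be sketched as follows. This is a minimal NumPy version under stated assumptions: the study used the NETCAL implementation in MATLAB, and the percentile baseline and the high/low thresholds chosen here are illustrative values, not parameters from the paper.

```python
import numpy as np

def detect_onsets(f, fs=20.0, baseline_pct=10, high=4.0, low=2.0):
    """Schmitt-trigger-style onset detection on one fluorescence trace f.
    Thresholds are expressed in units of an estimated noise SD; all
    numeric choices here are assumptions of this sketch."""
    f0 = np.percentile(f, baseline_pct)            # fluorescence at rest
    dff = 100.0 * (f - f0) / f0                    # DFF (%) as in Fig. 2a
    noise = np.std(dff[dff < np.percentile(dff, 50)])
    onsets, armed = [], True
    for i, v in enumerate(dff):
        if armed and v > high * noise:             # crossed the high threshold
            onsets.append(i / fs)                  # onset time in seconds
            armed = False                          # ignore until trace decays
        elif not armed and v < low * noise:        # re-arm below low threshold
            armed = True
    return np.asarray(onsets), dff
```

The hysteresis between the high and low thresholds is what prevents noisy fluctuations around a single threshold from being counted as multiple onsets.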
Determination of DA release levels The supernatant collected from DA neurons differentiated from CTR (SP11), PD1 (SP12), PD2 (SP13), and isoPD1 (SP12 wt/wt) for 50 and 80 days (D50 and D80) was harvested and kept directly at −80 °C until analysis. Before analysis, medium samples were deproteinized with 50 µl of homogenization medium (100 mL milliQ H2O, 100 mg of sodium metabisulphite (S-1516, Sigma), 10 mg EDTA-Na (E5134, Sigma), 100 mg cysteine (C-4022, Sigma), and 3.5 mL of concentrated HClO4 (Scharlau, 70%)), centrifuged at 15,000 rpm for 30 min at 4 °C, and the supernatant was filtered (Millex-HV 0.45 μm, Millipore) before HPLC injection. The concentration of dopamine (DA) in supernatant (SN) samples was determined by an HPLC system consisting of a Waters 717plus autosampler (Waters Cromatografia), a Waters 515 pump, a 5 μm particle size C18 column (100 × 4.6 mm, Kinetex EVO, Phenomenex), and a Waters 2465 amperometric detector set at an oxidation potential of 0.75 V. The mobile phase consisted of 0.15 M NaH 2 PO 4 ·H 2 O, 0.57 mM 1-octane sulfonic acid, 0.5 mM EDTA (pH 2.8, adjusted with phosphoric acid), and 7.4% methanol, and was pumped at 0.9 ml/min. The total sample analysis time was 50 min and the DA retention time was 3.94 min. The detection limit was 2–3 fmol (injection volume 60 µl). Corresponding dopamine metabolite content was normalized to the protein concentration determined previously by the Bradford method. Average neuronal activity and global network activity (GNA) The average neuronal activity quantified the degree of spontaneous activity in the recordings and was determined by counting the number of activations per neuron and minute, averaging afterward across neurons and realizations of the same line and timepoint. The GNA quantified the capacity of the network to exhibit collective synchronous events ( bursts ), and was determined by, first, counting the neurons that activated together in a sliding window of 1 s in length (corresponding to 20 image frames) without repetition and, second, normalizing the count by the number of active neurons in the network. GNA thus varied between 0 (no activity) and 1 (full network activation). Bursts appeared in the GNA data as sharp peaks. The higher the GNA amplitude, the higher the number of participating neurons in the burst (Fig. 2c ). Ratio of extreme events of network bursting Extreme events corresponded to those GNA episodes in which neuronal participation was much higher than average, relative to control (CTR) cultures. To compute the number of extreme events, GNA data were analyzed as follows. First, for each recording, background activity was determined by iteratively removing all peaks in the GNA signal that exceeded the average GNA value by two standard deviations, yielding a background activity that was typically around GNA ≅ 0.05 (5% of the network). The non-background peaks of the recording were then classified as true bursting episodes, with amplitudes that were typically above GNA = 0.1. Second, all the peak amplitudes observed in CTR cultures were pooled together and the average peak amplitude A CTR and standard deviation SD CTR were determined. Then, for each realization and condition (CTR, isoPD1, PD1, and PD2), those bursting peaks that were above A CTR + 2 SD CTR were considered ‘extreme events’. The ‘ratio of extreme events’ R EE was then determined, for each realization, as the ratio between the number of extreme peaks and the total number of observed peaks.
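A minimal sketch of these two computations is given below, under stated assumptions: fixed 1-s bins replace the sliding window for brevity, the burst peaks are assumed to have already been extracted from the GNA trace, and all names are illustrative.

```python
import numpy as np

def gna(spike_times, n_neurons, t_max, window=1.0):
    """Global network activity: fraction of distinct neurons active in
    each window (fixed 1-s bins here; the study used a sliding window)."""
    n_bins = int(np.ceil(t_max / window))
    counts = np.zeros(n_bins)
    for spikes in spike_times:                       # one array per neuron
        bins = np.unique((np.asarray(spikes) // window).astype(int))
        counts[bins[bins < n_bins]] += 1             # each neuron counted once
    return counts / n_neurons                        # ranges from 0 to 1

def ratio_extreme_events(burst_peaks, a_ctr, sd_ctr):
    """R_EE: fraction of burst peaks above the CTR-derived threshold
    A_CTR + 2 * SD_CTR, computed per realization."""
    peaks = np.asarray(burst_peaks, dtype=float)
    return float(np.mean(peaks > a_ctr + 2.0 * sd_ctr)) if peaks.size else 0.0
```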
Since this definition sets the CTR cultures as the reference, some of the CTR realizations exhibited R EE close to 0, while PD1 or PD2 realizations exhibited R EE close to 1. Effective connectivity Since the number of neurons varied among realizations, all connectivity analyses were carried out in networks of 340 randomly chosen neurons, the minimum population across all experiments. Also, to ensure that connectivity and network analyses reflected the impact of bursting behavior, only those recordings with at least 10 bursting events were used for connectivity inference. Causal relationships among pairs of neuronal spike trains were inferred using a modified version of the Generalized Transfer Entropy (GTE) 50 . Briefly, given a pair of spike trains corresponding to neurons X and Y, an effective connection was established between X and Y whenever the information contained in X significantly increased the capacity to predict future states of Y (Granger causality). For the actual estimation of the effective connectivity, binarized time series (‘1’ for the presence of a spike, ‘0’ for absence) were constructed and processed with the fast GTE implementation in MATLAB. Instant feedback was present, and the Markov order was set to 2. The actual GTE estimate was then compared with those from the joint distribution of all inputs to Y and all outputs to X, setting a connection as significant whenever the GTE estimate exceeded the mean + 1 standard deviation of the joint distribution. This threshold was considered optimal to capture the effective interactions among neurons during bursting episodes, which is the key dynamic characteristic separating CTR and isoPD cultures from PD ones. All network measures and connectivity statistics were computed with this threshold condition. However, for visualization purposes only, the GTE data shown in the functional matrices and network maps were thresholded at mean + 2.5 standard deviations, which retained the 10% strongest links. In either case, the GTE scores were finally set to 0 (absence of connection) or 1 (connection present), shaping directed yet unweighted connectivity matrices. In all network maps, the directionality of the connections is not shown for clarity. Also, for clarity of language, the term ‘functional’ was used instead of ‘effective’ throughout the description of the results. Functional communities GTE connectivity data were analyzed, for each culture realization, to obtain the number of functional communities and their interrelation. A functional community corresponds to a group of neurons that are more connected among themselves than with the rest of the network. Communities were detected using a fast implementation of the Louvain algorithm on the most significant connected component (Brain Connectivity Toolbox) 51 . Communities were visualized as boxes along the diagonal of the functional connectivity matrices. The strength of a community, i.e., how isolated it is from the rest of the network, was assessed through the community statistic Q. The larger Q, the higher the tendency of the network to split into characteristic communities. Q = 0 corresponds to the situation in which the network is highly integrated and the only community is the network itself, while Q = 1 corresponds to the extreme case in which all neurons are disconnected from one another and there are as many communities as neurons.
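As a concrete illustration of this community analysis, a minimal sketch is shown below. It uses networkx's greedy modularity maximization as a stand-in for the Louvain implementation in the Brain Connectivity Toolbox and treats the connectivity matrix as undirected for simplicity; both substitutions are assumptions of this sketch.

```python
import networkx as nx
from networkx.algorithms import community

def community_stats(adj):
    """Return (number of communities, community statistic Q) for a binary
    connectivity matrix `adj` (a square NumPy array), treated here as
    an undirected graph."""
    g = nx.from_numpy_array(adj)
    # Greedy modularity maximization stands in for the Louvain algorithm.
    comms = community.greedy_modularity_communities(g)
    q = community.modularity(g, comms)
    return len(comms), q
```

A low Q from such an analysis flags the excessively integrated regime reported for PD networks, in which the few detected communities are strongly interconnected.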
Cumulative distribution of connections and Kullback–Leibler divergence The effective connectivity matrix was analyzed to extract the distribution of connections p(k), i.e., the normalized histogram of the number of neurons having k connections. This distribution was transformed into the ‘cumulative distribution function’ CDF(k), which provides the probability that a neuron has a number of connections less than or equal to k. The divergence between two CDF distributions P and S was quantified through the Kullback–Leibler divergence \(D_{{\mathrm{KL}}}\left( {P||S} \right) = \mathop {\sum }\nolimits_i P\left( i \right)\ln \frac{{P\left( i \right)}}{{S\left( i \right)}}\) , using the function KLDiv.m in MATLAB. Significant statistical differences between P and S were analyzed using the Kolmogorov–Smirnov test. In silico model The model of Compte et al. 52 was used to simulate a network of excitatory, inhibitory, and dopaminergic neurons. The model incorporated soma and synaptic dynamics as well as noise in the form of Poissonian trains of excitatory pre-synaptic potentials. Network construction placed neurons on a two-dimensional space at the same positions as the experimentally observed ROIs, limited to a randomly chosen population of 300 neurons; axons then grew following biologically realistic rules as described 53 , 54 . Simulations were run for the equivalent of 10 min in the experiments, and 4 realizations were carried out for each pruning condition (no pruning; 10%, 30%, and 50% pruned neurites). Raster plots of network activity were analyzed identically as in the experiments to compute the ratio of extreme events, the effective connectivity, and the CDFs. Full details of the model are provided in Supplementary Information . Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The authors declare that the main data supporting the findings of this study are available within the article and its Supplementary Information files. Extra data are available from the corresponding author upon request.
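To close the Methods, the connection-distribution comparison described above can be sketched as follows. This is a minimal NumPy version under stated assumptions: the study used the KLDiv.m function in MATLAB, the eps regularization is an addition of this sketch to avoid logarithms of zero, and the two inputs are assumed to have equal length.

```python
import numpy as np

def connection_cdf(adj):
    """CDF(k): probability that a neuron has at most k connections,
    computed from a binary connectivity matrix."""
    degrees = adj.sum(axis=1).astype(int)
    pdf = np.bincount(degrees) / degrees.size      # p(k), normalized histogram
    return np.cumsum(pdf)

def kl_divergence(p, s, eps=1e-12):
    """Kullback-Leibler divergence D_KL(P || S) = sum_i P(i) ln(P(i)/S(i)),
    applied elementwise to two same-length distributions."""
    p = np.asarray(p, dtype=float) + eps
    s = np.asarray(s, dtype=float) + eps
    return float(np.sum(p * np.log(p / s)))
```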
Researchers from IDIBELL and the University of Barcelona (UB) report that neurons derived from Parkinson's patients show impairments in their transmission before neurodegeneration. The study used dopaminergic neurons differentiated from patient stem cells as a model. Parkinson's is a neurodegenerative disease characterized by the death of dopaminergic neurons. This neuronal death leads to a series of motor manifestations characteristic of the disease, such as tremors, rigidity, slowness of movement, or postural instability. In most cases, the cause of the disease is unknown; however, mutations in the LRRK2 gene are responsible for 5% of cases. Current therapies against Parkinson's focus on alleviating symptoms, but do not stop its progression. It is thought that early interventions that prevent neuronal death, applied before the appearance of the first symptoms, could slow down or even stop the evolution of the disease. However, the diagnosis is currently based on the appearance of symptoms, when 70% of the neurons have already been lost. A group of researchers from IDIBELL and the University of Barcelona (UB) has identified early functional deficiencies, prior to cell death, in neurons derived from patients with genetic Parkinson's. Dr. Antonella Consiglio says, "These discoveries open the door to early diagnosis, which would allow us to carry out an early intervention that would slow down neuronal death and, therefore, would stop the evolution of the disease." In this work, dopaminergic neurons, the most vulnerable in Parkinson's, differentiated from stem cells (iPSC) of healthy individuals and patients with genetic Parkinson's, were used as a model. The researchers observed that these dopaminergic neurons are capable of maturing and forming functional neural networks in culture, in both control and Parkinson's disease conditions. However, this work, published in npj Parkinson's Disease, shows that neurons from individuals with Parkinson's are more spontaneously active and present more bursting episodes in which, for example, the entire network is activated at the same time. All of this occurs before neurodegeneration. The researchers believe that this early neuronal dysfunction could contribute to initiating the cascade of events responsible for the death of dopaminergic neurons and, consequently, Parkinson's disease. Furthermore, this work highlights the extraordinary window of opportunity provided by iPSC-based experimental models for the understanding and presymptomatic evaluation of neurodegenerative diseases.
10.1038/s41531-021-00198-3
Biology
First genetic evidence of resistance in some bats to white-nose syndrome, a devastating fungal disease
Giorgia G. Auteri et al. Decimated little brown bats show potential for adaptive change, Scientific Reports (2020). DOI: 10.1038/s41598-020-59797-4 , www.nature.com/articles/s41598-020-59797-4 Journal information: Scientific Reports
https://www.nature.com/articles/s41598-020-59797-4
https://phys.org/news/2020-02-genetic-evidence-resistance-white-nose-syndrome.html
Abstract The degree to which species can rapidly adapt is key to survival in the face of climatic and other anthropogenic changes. For little brown bats ( Myotis lucifugus ), whose populations have experienced declines of over 90% because of the introduced fungal pathogen that causes white-nose syndrome (WNS), survival of the species may ultimately depend upon its capacity for adaptive change. Here, we present evidence of selectively driven change (adaptation), despite dramatic nonadaptive genomic shifts (genetic drift) associated with population declines. We compared the genetic makeups of wild survivors versus non-survivors of WNS, and found significant shifts in allele frequencies of genes associated with regulating arousal from hibernation (GABRB1), breakdown of fats (cGMP-PK1), and vocalizations (FOXP2). Changes at these genes are suggestive of evolutionary adaptation, given that WNS causes bats to arouse with unusual frequency from hibernation, contributing to premature depletion of fat reserves. However, whether these putatively adaptive shifts in allele frequencies translate into sufficient increases in survival for the species to rebound in the face of WNS is unknown. Introduction Events that kill large portions of populations, including naturally and anthropogenically induced disasters, increasingly threaten biodiversity 1 , 2 . Invasive species are a major trigger of these declines 3 , including invasive pathogens, against which native species can experience high mortality due to a lack of co-evolutionary defenses 4 , 5 , 6 . Introduced fungal pathogens can be particularly dangerous—they can frequently survive in the environment for extended periods, affect a relatively broad range of hosts, and can be highly virulent 7 , thereby driving mass-mortalities of native species (e.g. amphibian chytrid 8 , snake fungal disease 9 , sea fan aspergillosis 10 , and others 11 , 12 , 13 ) as well as threatening agricultural crops 14 , 15 (e.g. rice blast disease 16 and Fusarium wilt in bananas 17 ). Although host mortalities may have little impact on fungal pathogens, the pathogens can exert incredibly strong selective pressures on their host populations 18 . A pressing conservation question is whether host populations can evolve resistance or tolerance during such epidemics—a necessary first step towards preventing extinction. Strong selective pressures might theoretically lead to an evolutionary rescue effect if host populations adapt 19 . However, acute events that kill off most members of a species also reduce the genetic diversity upon which natural selection can act, thereby limiting the capacity for adaptive change 20 . White-nose syndrome (WNS) is a disease affecting bats, which is caused by the invasive fungus Pseudogymnoascus destructans 21 . This highly destructive pathogen has decimated populations of bats, with 12 North American species currently affected 22 , and some populations experiencing losses of 90–100% 23 . The fungus was first inadvertently introduced to North America by humans in 2006 (in the northeastern U.S.) 24 , and is spreading across the continent, largely via infected bats 25 . The exact mechanism of death is not known, but bats apparently die from secondary physiological complications (e.g. depleted fat reserves) associated with too-frequent arousals from hibernation 26 . Here, we conduct a genome scan to test for evidence of evolutionary changes in little brown bats ( Myotis lucifugus ) in response to WNS.
The recent expansion of the fungus into our study area in 2014, combined with the staggering impact of WNS on the local population (a decline of roughly 78%) 27 , provides an opportunity to study the initial evolutionary effects of this pathogen, which continues to spread throughout the continent. Eurasian bats within the genus Myotis —in the native range of the pathogen—tolerate fungal growths with no noticeable mortality 28 , 29 . In contrast, little brown bats were the most common bats in eastern North America prior to WNS, but due to population losses, the species has now been listed as endangered by the IUCN 30 and the federal government of Canada 31 , with a similar decision by the U.S. government pending 32 . Despite large observed declines, some individuals may have greater genetic-based tolerance or resistance to the disease, raising the potential for adaptive change in little brown bats via selective forces acting on standing genetic variation. However, dramatic population losses may limit the effectiveness of selection or purge potential adaptive variants via genetic drift. Information about these evolutionary processes can help inform the pace of management efforts for this species, by indicating which, if any, populations are adapting to the pathogen and what traits may be important for survival. Results In our tests for evolutionary changes in little brown bats, we compared the genetic makeup of “survivors” and “non-survivors” of the disease (see Fig. 1 ) in a genome-wide survey of 19,797 single nucleotide polymorphisms (SNPs) among 14,345 loci (140 bp segments) generated from a reduced representation library (ddRadSeq 33 ). We detected the effects of stochastic, non-adaptive genomic changes in otherwise neutral portions of the genome (genetic drift), reflective of the large numbers that have died from WNS in this species. Nevertheless, we also identified genetic changes (based on F ST -outlier analyses) that may have contributed to survival (as opposed to changes simply due to strong genetic drift), where the signature of selection can be detected by levels of genetic differentiation at a gene that exceed background levels across the genome 34 , 35 . See Methods for more details. Figure 1 Sampling locations of little brown bats. ( A ) Sequenced survivors ( n = 9, marked by stars) and non-survivors ( n = 29, crosses), jittered around similar collection sites (black dots); the size of the symbol indicates relative differences in the number of samples per site (see Table S1 for details). Survivors undertake short-distance migrations away from hibernacula in spring, which is reflected in their scattered collection locations. Non-survivors are closely associated with underground hibernation sites, with most ( B ) collected within hibernacula (~26 carcasses marked by circles on the floor of a mine), although some ( C ) leave these sites prematurely, like these dead bats on the outer screen of a house <1 km from a hibernaculum (note the snowy landscape). Photo credits A. Kurta (top) and C. Rockey (bottom). Non-adaptive evolution associated with large numbers of deaths caused by WNS To visualize the drift-induced changes that have occurred broadly across the genome, we generated a PCA using the survivors, onto which the non-survivors were projected (Fig. 2 ); this indicated that the genomic makeup of survivors differs substantially from that of the non-survivors (a result that is robust to more stringent criteria for data filtering; Fig. S1 ).
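The projection PCA summarized here (and detailed in the Methods, where it is computed with prcomp in R) amounts to fitting principal components on the survivors alone and then placing non-survivors in that space using the survivors' centring and scaling. A minimal sketch, under the assumption that geno is an imputed individuals × SNPs dosage matrix (0/1/2) restricted to sites variable in both groups, with survivor_ids and nonsurvivor_ids as row indices:

```r
# Fit PCs on survivors only; scale. = TRUE requires no zero-variance SNPs,
# which holds here because only sites variable in both groups are retained.
pca  <- prcomp(geno[survivor_ids, ], center = TRUE, scale. = TRUE)
# predict.prcomp applies the survivors' centring/scaling and rotation,
# i.e. the "same scaling and centering" projection described in the Methods.
proj <- predict(pca, newdata = geno[nonsurvivor_ids, ])
plot(pca$x[, 1:2], pch = 8)     # survivors (stars, as in Fig. 1)
points(proj[, 1:2], pch = 3)    # non-survivors projected into that space
```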
Quantification of the rate of evolutionary change from an inferred common ancestor showed that the rate of drift is an order of magnitude higher in survivors (mean F = 0.04 ± SE 0.0001) relative to non-survivors ( F = 0.006 ± 0.0003), using the F -model in Structure 36 , 37 . This amount of drift-induced genetic change (Fig. 2 ) is especially striking given that these changes have accumulated over, at most, three years (with most of our samples separated by just one year; Table S1 ), in a species that can live for well over 20 years 38 , 39 and in which females typically produce one pup per year 40 . Figure 2 Stochastic drift-induced genetic change. ( A ) PCA of survivors of WNS, with non-survivors projected onto the PC axes; PC1 explained 27% and 66% of the variance among survivors and non-survivors, respectively, and PC2 explained 13% and 6% of the variance. ( B ) The estimated degree of genetic drift ( F , as estimated in Structure 36 , 37 ) is an order of magnitude greater for survivors compared to non-survivors, as illustrated by the contrasting branch lengths from an inferred common ancestor. Selective divergence putatively driven by WNS Quantification of locus-specific differentiation across the genome using F ST -outlier analyses identified nine SNP alleles that are significantly more common among survivors than non-survivors across all three outlier detection methods (Table S2 ; for details on individual genotypes see Table S3 ). These nine variable sites were the only outliers identified using the AMOVA-corrected F ST from STACKS (Fig. 3 ), and were also among the outliers recognized in the two other tests (see Figs. S2 and S3 ). Analyses with and without four non-survivors that were collected several years prior to other samples (in 2014; Table S1 ) confirmed the robustness of these results to different collection dates (Figs. S2 – S4 ). Figure 3 Putative loci under positive selection. AMOVA-corrected F ST -values of SNPs versus alignment position, highlighting the three genes that our SNPs map to, as well as an outlier SNP near PLA2G7 (*), and the outlier SNP which is adjacent to cGMP-PK1 ( † ). The dashed line marks the significance threshold and alternating colors indicate different genomic scaffolds (1,214 in our dataset). Comparison of the nine top-candidate loci with the M. lucifugus reference genome (MYOLUC 2.0 41 ) indicates that three of these SNPs are located in introns of annotated protein-coding genes (Table S2 ). These three genes are: the gamma-aminobutyric acid (GABA) receptor subunit beta-1 (GABRB1; Gene ID 102432079 in the reference genome), cyclic guanosine-3′,5′-monophosphate-dependent protein kinase 1 (cGMP-PK1; Gene ID 102431010), and the forkhead box P2 protein (FOXP2; Gene ID 102423801). Two other SNPs are close to annotated genes—one was near the previously identified cGMP-PK1 gene in our dataset (3,387 bp away), and the other was near phospholipase A2 group VII (PLA2G7; Gene ID 19253; 2,747 bp away). The remaining four SNPs are relatively distant from any area of the reference genome with known function (>170,000 bp away on average). Discussion We studied the genetic differences between wild little brown bats that were survivors versus non-survivors of WNS, and found evidence that there is likely a genetic component to survivorship for individuals facing this disease.
This apparent adaptation has occurred very quickly: the detected evolutionary changes took place after WNS reached our study area in 2014, and survivors were sampled just a few years later. The putative selectively driven genetic changes we identify (Fig. 3 ) have also occurred despite dramatic nonadaptive genomic shifts (genetic drift; Fig. 2 ) associated with population declines due to the disease. Together, this suggests that the putative adaptive changes have resulted from very strong selective forces acting on standing genetic variation. Such rapid evolutionary changes are not unprecedented. For example, populations of the steelhead trout ( Oncorhynchus mykiss ) introduced to the central USA from coastal areas show signs of adaptation to freshwater conditions, despite small founder populations 42 . Likewise, extremely rapid phenotypic adaptation in Caribbean lizards followed a hurricane, with surviving lizards having larger toe pads which were presumably better at gripping surfaces during strong winds 43 . The putatively adaptive SNPs among the surviving bats in our study are located within or in close proximity to four genes (cGMP-PK1, FOXP2, GABRB1, and PLA2G7), which when mapped to the annotated reference genome suggest different ways adaptive shifts might contribute to survival. GABRB1 encodes a subunit of the receptor for the neurotransmitter GABA, which is a major neural inhibitor in the brains of vertebrates and has also long been suspected to be involved in regulating hibernation 44 . In addition to GABA, these receptors are also sensitive to histamines 45 , which similarly help regulate hibernation in mammals 46 and are released in response to tissue damage from WNS 47 . The importance of an individual’s sensitivity to histamines is further hinted at by PLA2G7, which regulates release of histamines from mast cells 48 . Because arousals account for 80–90% of bats’ energy budget during hibernation 32 , genetic variation that contributes to even small changes in arousal frequencies could result in large differences in energy expenditure, making the difference between life and death (i.e., affecting susceptibility to WNS). We speculate that bats genetically predisposed to release fewer histamines, or to be less prone to arousals induced by histamines, are better able to survive WNS through conservation of energy reserves. Links between metabolic demands and survival are further suggested by cGMP-PK1, which was implicated by two significant SNPs in our dataset (one within the gene and one nearby). This gene is part of pathways involving cellular metabolism and breakdown of fat, and allelic variants have been linked to obesity in mammals 49 , 50 , which might prove beneficial for WNS-infected bats facing premature depletion of winter fat reserves. In fact, a recent study documented a post-WNS phenotypic shift towards fatter bats in this species 51 . Although this may be due to a variety of potential mechanisms, including non-evolutionary ones (see discussion in 51 ), our findings suggest a genetic component to this shift. In contrast to the SNPs linked to physiological mechanisms during winter hibernation, a SNP within FOXP2 suggests behavioral differences might confer a selective advantage. Specifically, FOXP2 is associated with vocalizations in other vertebrates, and echolocation in bats 52 .
Because variation in calls is closely associated with the type of prey and habitat bats must navigate, echolocation is an important functional trait, and potentially adaptive shifts might be related to hunting proficiency, the speed at which juvenile bats develop foraging abilities, or subtle differences in prey preferences. These could affect the type and amount of fat that bats store for hibernation. In addition to echolocation calls for hunting, bats also emit social calls. Sociality may influence the impact of the disease in this species 53 , and due to the importance of FOXP2 in communication, the gene has been linked to variations in social behavior in other species 54 , 55 , 56 . A more detailed study is needed to test these hypotheses, and there are possibly alternative unknown functions of FOXP2 in bats. Interestingly, no individuals in our dataset were heterozygous for this SNP. Although outlier analyses can contain false positives, potentially inferring selectively driven differentiation when there is none 57 , we think it is unlikely that the gene-associated SNPs we detected are statistical artifacts. The four genes we identify had putatively adaptive alleles that were entirely absent from our non-survivors (with the exception of a single allele copy in one individual). Given the much greater sampling of non-survivors ( n = 29), this difference is also not due to limited sampling (Fig. 1 ). An alternative consideration is that genetic drift, not selection, explains the elevated differentiation in what we identified as putatively adaptive alleles among the survivors. With inter-locus contrasts, the genome serves as the expected background for differentiation caused by drift (i.e., the expected variance in F ST -values in this case; Fig. 3 ). However, demographic processes can inflate the variance of the distribution of F ST -values (e.g. population structure such as isolation by distance or expansion; reviewed in Hoban et al . 57 ), potentially confounding the signals of selection and drift. Although we cannot rule out a role for non-selective processes, we note that annotation of the alleles suggests that selection is involved, given that their functions are consistent with an adaptive response. Whether the putative adaptive changes described here reflect host resistance or tolerance to the fungal pathogen has consequences for evolutionary and ecological pressures, as well as management strategies. While our study does not explicitly test whether bats survive WNS via resistance versus tolerance mechanisms, and the genomic approach we used only examined a small portion of the genome, we found putative selection acting on non-immune genes, which suggests disease tolerance 58 may be important. Specifically, the alleles we identify could assist some bats in “holding out” until spring, when they leave the sites to which growth of the pathogen is restricted. While infected bats do exhibit an immune response to the fungus 59 , they likely ultimately die due to secondary physiological complications linked to starvation while hibernating 26 , 60 . Such tolerance of little brown bats to WNS may be important for survival in both intraspecific 61 and interspecific 62 contexts. However, others argue that resistance is the primary mechanism of survivorship 63 . Future work is needed to resolve this question.
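To make the sampling argument above concrete: with putatively adaptive alleles essentially absent from 29 non-survivors but common among nine survivors, even a quick Fisher's exact test shows how improbable such a split is by chance alone. The allele counts below are purely illustrative, not the study's genotype table:

```r
# Hypothetical allele counts for one outlier SNP, counted over chromosomes:
# survivors contribute 2 x 9 = 18 copies, non-survivors 2 x 29 = 58 copies.
counts <- matrix(c(14,  4,    # survivors: putatively adaptive vs other allele
                    1, 57),   # non-survivors: a single copy, as described above
                 nrow = 2, byrow = TRUE,
                 dimnames = list(c("survivor", "non-survivor"),
                                 c("adaptive", "other")))
fisher.test(counts)$p.value   # vanishingly small for a split this extreme
```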
Conclusions What the outcome of the evolutionary change we report here might be, and what it bodes for the future recovery of little brown bats, is not clear—it is too soon to claim that the species will be “saved” via an evolutionary rescue effect. There have been dramatic population declines, and low population sizes inherently make species vulnerable to further perturbations. Furthermore, the disease has only been present in North America for thirteen years at the time of this publication, and with little brown bats surviving to more than 20 years old in the wild 38 , 39 it will take time to determine whether surviving remnant populations have sufficient reproductive and recruitment levels to avoid extinction or extirpation. However, the functions of the genes we identify suggest that for this species, and possibly other bats affected by WNS, conservation of summer foraging habitat—not just winter hibernation sites—may promote population recovery, given that the selective advantages underlying shifts in FOXP2 would most likely manifest when bats are echolocating and hunting, and not in the hibernation sites where the bats are confronted with the fungus (Fig. 1 ). Other genes we identified are likely subject to strong selection during winter periods of infection, but could also be important year-round (cGMP-PK1, PLA2G7, and GABRB1), given their functions in cellular metabolism. With the limited representation of the genome, there may also be selective divergence in genes not studied here. Nevertheless, even without more extensive coverage of the genome, our work hints at the multifaceted nature of selection by identifying genes whose roles differ across habitats of highly seasonal environments, and are linked to both physiological and behavioral traits. Materials and Methods Study area We chose northern Michigan, USA, for our study because it represents a reasonably isolated population of little brown bats (Fig. 1A ); WNS is present throughout our study area, and was first detected there in early 2014. We sampled non-survivors from hibernation sites during the winter and survivors during the summer (when they are no longer afflicted by the pathogen). However, because the species undertakes short-distance seasonal migration (typically ≤ 500 km 64 ), during warmer periods the bats do not roost in the same sites in which they hibernate; thus, the relative geographic isolation is important for ensuring that bats sampled during both seasons were from the same population. Winter hibernation sites are concentrated in the northwestern portion of our study area (hibernation sites are lacking in central and southern Michigan), and primarily consist of abandoned iron and copper mines. As a consequence, bats in our area (Fig. 1A ) are isolated from other populations by two factors: the Laurentian Great Lakes and the lack of suitable subterranean hibernation sites within migration range in central and southern Michigan. The seasonal sampling of bats is necessary because WNS non-survivors can only be documented in winter areas, and disease survivors can only be identified during summer. Sampling of focal species All sampled bats (Table S1 ) were categorized as either “survivors” or “non-survivors” of WNS. Survivors ( n = 9) were adult bats that had been born the previous year or earlier and thus had survived at least one hibernation period with the WNS pathogen (collected during the summers of 2016–2017; see Anthony 65 for aging methodology).
Most individuals that succumb to the disease are found within the subterranean sites that afflicted species of bats rely upon in winter, and in which the fungus thrives. However, some infected bats leave hibernation sites prematurely in winter in search of food or water, and quickly die due to the lack of available resources and sub-freezing temperatures. Correspondingly, most non-survivors we sampled were bats found dead in or near hibernation sites during winter (collected in early 2016; n = 25; Fig. 1 ), although some tissue samples came from individuals with the pathogen that were euthanized during surveillance studies (i.e., they tested positive for the fungus; collected in early 2014; n = 4). Note that comparing survivors to this more general group of non-survivors makes tests for loci under selection more conservative, in that some of the euthanized bats categorized as non-survivors may not have died from WNS naturally. However, if non-survivors actually carried adaptive alleles, this would not bias us towards detecting putatively selected alleles—in fact, it would make such detection more difficult. In addition, all analyses were repeated excluding the euthanized bats to confirm the robustness of the results. Samples for most non-survivors ( n = 23) were from bat carcasses found during winter either in or proximal to the caves or mines in which they were hibernating. Prior to the introduction of WNS, it was uncommon to find dead bats at hibernacula, whereas conspicuous numbers of dead individuals are found in and around these sites post-introduction of the disease (Fig. 1B ), and all sites were WNS-positive at the time of collection. The accidental inclusion of bats that had died of other causes would make it more difficult to detect adaptation in our analyses. To reduce disturbance to hibernating bats, dead bats were collected in conjunction with routine surveys by the Michigan Department of Natural Resources (MDNR) and Eastern Michigan University. Four samples were contributed by the U.S. Geological Survey National Wildlife Health Center; these bats were found during hibernation with the fungus growing on them, but were euthanized (as discussed above). Lastly, two samples of non-survivors came from the MDNR Wildlife Disease Laboratory (see details below). Among the survivors, collection methods varied (Table S1 ). Three survivors were captured during summer using mist-nets, and visual inspection confirmed evidence of recovery from WNS (i.e., the presence of healing wing lesions or scars). Tissue samples were collected via small biopsy punches (2 mm diameter, one punch for each wing; Premier Medical Products Company, Plymouth Meeting, Pennsylvania, USA), after which bats were immediately released. No individual was detained for longer than 30 minutes. Eight specimens were contributed by the MDNR Wildlife Disease Laboratory, which annually receives large numbers of bats for rabies testing after they are encountered by humans or pets 66 . All individuals used in this study tested negative for the rabies virus. Six of these were considered survivors because they were submitted for testing in summer or fall; during the summer this species uses structures such as houses in addition to trees 40 , so there is no reason to believe that animals encountered by people during warmer periods were unhealthy. However, at the latitude of our study, little brown bats are not known to hibernate in buildings 40 .
Consequently, any individual encountered by humans during sub-freezing periods is almost certainly on the cusp of dying from WNS. Individuals submitted to the MDNR Wildlife Disease Laboratory in winter or early spring were therefore assigned to the non-survivor group ( n = 2 in this study). DNA sequencing and data processing DNA was extracted from wing-membrane tissue using the DNeasy Blood and Tissue Kit (Qiagen, Valencia, CA, USA) and used to prepare a reduced representation genomic library for sequencing. Two restriction enzymes, Eco RI and Mse I, were used to digest extracted DNA (ddRadSeq 33 ), to which barcodes (unique tags 10 base pairs long) and adapters for Illumina sequencing were then ligated. Ligated fragments were amplified via polymerase chain reaction (PCR), and 350 to 450 bp fragments were size selected using Pippin Prep (Sage Science, Beverly, Massachusetts, USA). The library of 38 samples was sequenced in one HiSeq 2500 lane (Illumina, San Diego, CA, USA) at the Centre for Applied Genomics (Toronto, Ontario, Canada). Genomic sequences were demultiplexed using the STACKS bioinformatics pipeline 67 (v. 2.1; specifically process_radtags , gstacks , and populations ) and processed in conjunction with supporting programs. The first step, process_radtags , allowed up to one mismatch in the adapter sequence and two mismatches in the barcode, with rescue of RAD-Tags allowed. A sliding window of 15% of the read length was used for an initial exclusion of any reads with a Phred score 68 below 10 within the window (note that additional filters requiring a minimum Phred score of 30 were applied in downstream processing, as discussed below). Of 102,419,857 initial sequences, process_radtags removed 1,144,865 reads containing the adapter sequence, 18,775,218 reads with ambiguous barcodes, 156,274 low-quality reads, and 2,495,192 reads with ambiguous RAD-Tags. We then indexed a previously generated reference genome for the species, ftp://ftp.ncbi.nih.gov/genomes/Myotis_lucifugus (7x coverage; MYOLUC v. 2.0 41 ), and mapped our sequences to the genome using the Burrows-Wheeler Alignment Program (BWA, v. 0.7.17) indexing and MEM algorithms, respectively 69 , 70 . The resulting files were filtered (-F 0x804, -q 10, -m 100), converted to .bam files, and sorted using SAMtools 71 , 72 (v. 1.8-27). The reference-based method of gstacks (set to remove PCR duplicates) was run using the Marukilow model 73 , a minimum Phred 68 score of 30, and alpha thresholds (for mean and variance) of 0.05 for discovering single nucleotide polymorphisms (SNPs). This resulted in 59,888,201 BAM records and 581,607 loci (8% of reads were excluded because they were excessively soft-clipped, and 3% had insufficient mapping qualities to be included). All remaining loci were genotyped, with a mean per-sample coverage of 10.5x ± 7.1x, a mean of 138.5 bp per locus, and consistent phasing for 88.3% of diploid loci. Populations was then run with default settings, and the resulting loci were filtered with a custom script in R 74 (v. 3.5.0) to remove loci and SNPs that may be artifacts of sequencing or alignment errors (Fig. S5 ), based on the number of SNPs per read position, resulting in the exclusion of SNPs occurring in the last 2 bp of each read. Loci with unusually high levels of diversity were also removed from consideration (threshold θ > 0.026), leaving 273,261 unique loci.
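The custom R filtering just described reduces to two vectorized conditions. A minimal sketch, in which the input file and its column names (locus, read_pos, read_len, theta) are assumptions standing in for the per-SNP summaries exported from populations:

```r
# Hypothetical per-SNP table exported from the populations module
snp_tab <- read.delim("populations.snps.tsv")

keep_pos   <- snp_tab$read_pos <= (snp_tab$read_len - 2)  # drop SNPs in the
                                                          # last 2 bp of a read
keep_theta <- snp_tab$theta <= 0.026                      # drop hyperdiverse loci
vetted     <- snp_tab[keep_pos & keep_theta, ]

length(unique(vetted$locus))  # 273,261 unique loci remained in the actual data
```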
Using the list of vetted loci and SNPs, populations was then run again, retaining loci present in at least 56% of both survivors and non-survivors, ensuring a minimum sample size of at least six survivors; note that the actual missing data were typically much lower (i.e., <15% in all but 7 individuals among survivors and non-survivors). This resulted in 40,963 loci (140-bp segments), of which 14,345 were variable, containing 19,797 SNPs (our final SNPs), all of which had a minor allele frequency of >0.01. Minor allele thresholds of 0.01 and 0.05 were evaluated for downstream analyses, and when warranted the higher threshold was used (noted below). Mean genotyped sites per locus was 142.41 bp ( SE ± 0.02). Because some loci contained more than one SNP, the robustness of downstream analyses to inclusion of multiple versus a single SNP per 140-bp fragment was evaluated. Main findings did not differ, thus we present analyses based on multiple SNPs per locus in the main text (see Fig. S6 for results based on a single SNP per locus). We also checked that the data were not biased due to different levels of DNA degradation (decomposition) between the survivors and non-survivors by analyzing the guanine-cytosine (GC) content of each sample. Specifically, raw Illumina reads (immediately after process_radtags ) of survivors were compared with those of the non-survivors using BBMap 75 (v. 38.01). The proportion of GC per individual per locus was averaged across all loci for each individual using a custom script in R. Mean GC content was 43% for survivors ( n = 9) and 42% ( n = 29) for non-survivors, which confirmed that non-survivors were not biased towards higher GC content because of decomposition. In addition, the relatedness of sampled individuals was evaluated in two ways: with related 76 in R 74 and using Plink 77 . Due to program constraints in related , 250 loci were randomly selected to simulate 100 pairs of individuals in each of four categories: parent-offspring, full sibling, half sibling, and unrelated. Application of the Ritland estimator of relatedness 78 to both the simulated and empirical dataset of 1,242 filtered SNPs (see Fig. S7 caption) indicated that none of the individuals in our dataset were related, with the exception of two of the non-survivors, which may be half-siblings (Fig. S7 ). However, the Plink 77 analysis of 6,237 SNPs (restricted to a single SNP per locus and minor allele frequency >0.05, as per guidelines) indicated no related individuals within our dataset. We kept all individuals in downstream analyses, because the presence of a single pair of potential half siblings is not expected to influence estimates of allele frequencies or F ST , and removal of putatively related individuals can actually increase the error (for more details, see Waples & Anderson 79 ). Lastly, to confirm that individuals from different sampling sites within the study area could be considered one population, we used Structure 37 (v. 2.3.4) to evaluate whether genome-wide differentiation indicated a single, panmictic population. We selected the admixture model with ‘Allele frequencies correlated’ turned on and no prior information about sampling population, and explored the best-supported model considering a range of genetic clusters (i.e., k = 1 to 5), with 10 repetitions for each k , for 500,000 Markov chain Monte Carlo iterations with a burn-in of 50,000. Visual assessment was used to ascertain convergence by examining plots of F ST , alpha, and likelihood versus iterations, and to check for consistency among the ten iterations.
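Of the quality-control steps above, the GC-content comparison is the simplest to sketch. Biostrings is a real Bioconductor package, but the per-group FASTQ file vectors are assumptions, and for brevity this version averages GC per read rather than per locus:

```r
library(Biostrings)

# survivor_fastqs / nonsurvivor_fastqs: assumed character vectors of
# demultiplexed FASTQ paths, one file per individual
gc_of <- function(fq) {
  reads <- readDNAStringSet(fq, format = "fastq")
  # "GC" as a letter group gives the combined G + C proportion per read
  mean(letterFrequency(reads, letters = "GC", as.prob = TRUE))
}

mean(sapply(survivor_fastqs,    gc_of))  # ~0.43 reported for survivors
mean(sapply(nonsurvivor_fastqs, gc_of))  # ~0.42 for non-survivors
```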
No evidence of genetic subdivision based on geographic sampling locality was detected (see Fig. S8 ). Tests of genetic drift Given the large numbers that have died from WNS in this species, genetic differentiation between survivors and non-survivors may arise because some alleles, just by chance, increase or decrease in frequency. These stochastic, non-adaptive genomic changes in otherwise neutral portions of the genome (genetic drift) can be particularly great when only a small proportion of the population survives, sometimes causing population bottlenecks. To visualize the drift-induced changes that occurred broadly across the genome, we conducted a principal components analysis (PCA) of the survivors and projected the non-survivors onto the estimated PC axes, and the degree of drift was quantified using the F -model 36 in STRUCTURE 37 . The PCA was calculated for the survivors, onto which the non-survivors were projected (by applying the same scaling and centering used for survivors to the non-survivors; see Lipson et al . 80 ). Generating a PCA in this manner is a method of visualizing differences when one group is a subset of the other (in terms of the proportion of variance), for example due to a series of founder events 80 . The PCA was performed in R 74 , in conjunction with the packages Adegenet 81 (v. 2.1.1) and Plyr 82 (v. 1.8.4), using the prcomp function. One survivor and four non-survivors were excluded from this analysis because of missing data (i.e., >50% missing loci), as were loci missing in >50% of individuals (data were filtered using Plink v. 1.07 77 ; see Table S1 ). After this, the actual missing data were <15% for all individuals except one survivor and one non-survivor, each with just under 50% missing data. Missing data were then replaced with the per-locus mean value across all individuals. Only genomic sites with a minor allele frequency of ≥0.05 that were variable in both survivors and non-survivors were considered, for a total of 11,462 SNPs. The PCA was repeated to confirm the robustness of the results to the missing-data threshold, this time using a minimum data threshold of 8.7% missing data per individual and 19% per locus (mean missing data was 1.9%), which resulted in 13,666 loci and 31 individuals being included. We also directly estimated the amount of genetic drift between survivors and non-survivors in Structure 37 using the F -model 36 (see also Harter et al . 83 ). The F -model accounts for differences in population sizes, and has been used to quantify differences in drift between groups of contrasting sample sizes that are similar in proportion to our own 83 . For our parameter of interest, F , we used a prior mean and SD of 0.10, which places similar probabilities on both large and small values of F . To implement this Bayesian approach, we preassigned individuals to one of the two groups (survivor or non-survivor), and used a burn-in of 50,000 followed by 500,000 repetitions. We fixed lambda at 1, and used a uniform prior from 1 to 10 for alpha, with a standard deviation of 0.025. Three iterations were run, with different random seeds for initiating the Markov chains. Tests of loci under selection To identify genetic differences among individuals that might have contributed to their survival of WNS, we used F ST -outlier analyses, in which the signature of selection is detected by considering the proportional split of allelic variants between groups relative to background levels across the genome 34 , 35 .
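Before the outlier criteria are specified below, the chance component just described can be made concrete with a small simulation: resampling hypothetical pre-WNS allele frequencies through nine surviving and 29 non-surviving diploids shows how much per-SNP differentiation sampling and drift alone generate. All frequencies here are made up:

```r
set.seed(1)
n_snps <- 19797                      # number of SNPs surveyed in this study
p0     <- runif(n_snps, 0.05, 0.95)  # hypothetical pre-WNS allele frequencies

# allele counts drawn binomially over 2N chromosomes per group
p_surv <- rbinom(n_snps, 2 * 9,  p0) / (2 * 9)
p_dead <- rbinom(n_snps, 2 * 29, p0) / (2 * 29)

# simple two-population F_ST per SNP (Wright's variance formulation)
p_bar <- (p_surv + p_dead) / 2
fst   <- ((p_surv - p_bar)^2 + (p_dead - p_bar)^2) / (2 * p_bar * (1 - p_bar))
fst[!is.finite(fst)] <- 0            # guard the fixed-allele 0/0 cases

# background differentiation from chance alone, and where a nine-SD
# threshold (cf. the outlier criterion described below) would sit
mean(fst); mean(fst) + 9 * sd(fst)
```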
We identified candidate loci using three methods of outlier detection: (i) identifying outliers by the number of standard deviations from the mean using an AMOVA-corrected F ST 84 ; (ii) assessing confidence intervals from bootstrap permutation across loci; and (iii) measuring departure from a chi-squared distribution (detailed below). Variable sites that met all three requirements were regarded as candidate loci apparently undergoing positive selection. All tests of selection were conducted with and without the four non-survivors sampled in 2014 (collected prior to the other specimens), to confirm that the results were robust. Note that the low number of sampled survivors reflects the devastating impact of WNS on this species; despite the small sample size, it is not below the size at which SNPs under selection can be detected with F ST -outlier analyses 85 . In our first approach, we used the AMOVA-corrected F ST 84 calculated by populations in STACKS 35 . SNPs with an F ST -value greater than nine standard deviations from the mean (mean = 0.018 ± 1 SD of 0.026) were considered outliers (similar to Willoughby et al . 42 ). A threshold of five standard deviations is often used in detection of outlier SNPs under positive selection 42 , 86 , 87 ; we increased our threshold of significance to nine standard deviations to reduce the potential for false positives. In the second approach, confidence intervals (95% CI) were estimated using diveRsity 88 . Using the diffCalc function, Weir and Cockerham’s F ST 89 was calculated for all loci, with 1,000 bootstraps performed across loci. Only loci for which the lower limit of the CI remained five SD from the mean were considered outliers. In the third approach, outliers were identified with OutFLANK 90 , which estimates the expected neutral variation of F ST -values under a chi-squared distribution. As per the developer guidelines 90 , we excluded loci with low expected heterozygosity (<0.1), and visually adjusted the trim functions to best fit the observed distribution (LeftTrimFraction = 0.3 and RightTrimFraction = 0.05; Fig. S9 ). Significance was assessed using qvalue 91 in R 74 (v. 2.12). All results were visualized in R 74 , often in conjunction with the package ggplot2 92 . A custom script was used to identify SNPs flagged as candidate loci under all three methods, and putatively selected sites were then cross-referenced with the species’ annotated reference genome 41 to infer possible phenotypic function (see 93 , 94 for additional information on the reference genome and annotation). If a SNP’s position was not within a gene, the nearest annotated areas in each direction were identified. Data availability Genomic data (raw reads) will be made available on GenBank (SRA accession PRJNA563655). All commands (STACKS, Structure ) and scripts (PCA, F ST ) used for analyses are available on GitHub.
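Putting the consensus and annotation steps together, a hedged end-to-end sketch follows. The per-SNP inputs (fst_amova, plus index vectors ci_outliers and outflank_outliers from the second and third methods) and all coordinates are assumptions; GenomicRanges and its distanceToNearest are real Bioconductor APIs:

```r
library(GenomicRanges)

# (i) AMOVA-corrected F_ST criterion: mean + 9 SD (0.018 + 9 x 0.026 here)
thr         <- mean(fst_amova) + 9 * sd(fst_amova)
sd_outliers <- which(fst_amova > thr)

# consensus across the three methods
candidates <- Reduce(intersect, list(sd_outliers, ci_outliers, outflank_outliers))
length(candidates)   # nine SNPs in the actual dataset

# annotation: distance from each candidate SNP to the nearest annotated gene
# (0 bp means the SNP falls inside the gene); coordinates below are made up
snps  <- GRanges("scaffold_1", IRanges(start = c(120345, 982211), width = 1))
genes <- GRanges("scaffold_1",
                 IRanges(start = c(100000, 700000), end = c(150000, 760000)),
                 gene = c("GABRB1", "cGMP-PK1"))
hits <- distanceToNearest(snps, genes)
data.frame(gene        = genes$gene[subjectHits(hits)],
           distance_bp = mcols(hits)$distance)
```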
A new study from University of Michigan biologists presents the first genetic evidence of resistance in some bats to white-nose syndrome, a deadly fungal disease that has decimated some North American bat populations. The study involved northern Michigan populations of the little brown bat, one of the most common bats in eastern North America prior to the arrival of white-nose syndrome in 2006. Since then, some populations of the small, insect-eating bat have experienced declines of more than 90%. U-M researchers collected tissue samples from wild little brown bats that survived the disease, as well as individuals killed by the fungal pathogen. They compared the genetic makeup of the two groups and found differences in genes associated with regulating arousal from hibernation, the breakdown of fats and echolocation. "Because we found differences in genes associated with regulating hibernation and breakdown of fats, it could be that bats that are genetically predisposed to be a little bit fatter or to sleep more deeply are less susceptible to the disease," said U-M's Giorgia Auteri, first author of a paper scheduled for publication Feb. 20 in the journal Scientific Reports. "Changes at these genes are suggestive of evolutionary adaptation, given that white-nose syndrome causes bats to arouse with unusual frequency from winter hibernation, contributing to premature depletion of fat reserves," said Auteri, a doctoral student in the Department of Ecology and Evolutionary Biology who conducted the study for her dissertation. The other author of the Scientific Reports paper is U-M biologist Lacey Knowles, Auteri's faculty adviser. While the study was small—involving tissue samples from 25 little brown bats killed by white-nose syndrome and nine bats that survived the disease—the authors say their sample size is large enough to detect genetic changes driven by natural selection. A larger follow-up study is underway, expanding both the number of bats and the areas affected by the disease, to develop a fuller picture of adaptive change that may be key to the species' survival. The fungal pathogen that causes white-nose syndrome was inadvertently introduced in the northeastern United States in 2006 and is currently spreading across the continent. Thirteen species of North American bats are currently affected, with some populations experiencing losses of 90-100%. The disease is named for a distinctive fungal growth around the muzzles and on the wings of hibernating bats. The U-M team's study area is Michigan's northern Lower Peninsula and Upper Peninsula. White-nose syndrome fungus was first detected there in 2014, and its arrival allowed the researchers to study the pathogen's initial evolutionary impact. For the study, the U-M researchers collected tissue samples from dead little brown bats found in or near hibernation sites during the winter. The hibernation sites were concentrated in the western Upper Peninsula and primarily consisted of abandoned iron and copper mines. During the summer, they also collected small tissue samples from survivors that emerged successfully from hibernation despite exposure to the disease. Surviving bats had healing wing lesions or scars from the fungus. In the laboratory, DNA was extracted from the tissues and sequenced, and the sequences were mapped to a previously generated reference genome for the species. A genome scan was conducted to test for evidence of evolutionary changes in response to white-nose syndrome. 
The researchers found significant differences in three genes associated with arousal from hibernation (GABRB1), breakdown of fats (cGMP-PK1) and echolocation (FOXP2), as well as a fourth gene (PLA2G7) that regulates the release of histamines from mast cells. "The function of one gene we identified hints that summer activities such as hunting via echolocation may be an important determinant of which individuals survive the winter infection period," Auteri said. "This suggests that conservation of summer foraging habitat—not just winter hibernation sites—may promote population recovery in bats affected by white-nose syndrome." The observed genetic differences are suggestive of very rapid—though not unprecedented—evolutionary adaptation driven by natural selection, according to Auteri and Knowles. "This apparent adaptation occurred very quickly, involves genes with a variety of functions which likely act across seasons in order to contribute to survivorship, and has taken place despite an observable reduction in genetic diversity associated with population declines," said Knowles, a professor in the Department of Ecology and Evolutionary Biology and a curator at the U-M Museum of Zoology. Auteri and Knowles said it's too soon to say how the evolutionary changes they uncovered are likely to affect the little brown bat's prospects. After all, these bats have suffered dramatic population declines, and low population sizes inherently make a species more vulnerable to further perturbations. "But we're finding the hint that there could be these genetic changes that are occurring that might provide some type of survival in the future," Knowles said. "So as these variants increase, there's some hope that these bats are not all going to die from the disease itself." Because little brown bats have only one pup per year, recovery of the species would likely take a long time, according to Auteri and Knowles. Due to population losses, little brown bats have been listed as endangered by the International Union for Conservation of Nature and by the federal government of Canada, with a similar decision by the U.S. government pending.
www.nature.com/articles/s41598-020-59797-4
Physics
Researchers improve the measurement of a fundamental physical constant
Determination of the fine-structure constant with an accuracy of 81 parts per trillion, Nature (2020). DOI: 10.1038/s41586-020-2964-7, www.nature.com/articles/s41586-020-2964-7 Journal information: Nature
http://dx.doi.org/10.1038/s41586-020-2964-7
https://phys.org/news/2020-12-fundamental-physical-constant.html
Abstract The standard model of particle physics is remarkably successful because it is consistent with (almost) all experimental results. However, it fails to explain dark matter, dark energy and the imbalance between matter and antimatter in the Universe. Because discrepancies between standard-model predictions and experimental observations may provide evidence of new physics, an accurate evaluation of these predictions requires highly precise values of the fundamental physical constants. Among them, the fine-structure constant α is of particular importance because it sets the strength of the electromagnetic interaction between light and charged elementary particles, such as the electron and the muon. Here we use matter-wave interferometry to measure the recoil velocity of a rubidium atom that absorbs a photon, and determine the fine-structure constant α −1 = 137.035999206(11) with a relative accuracy of 81 parts per trillion. The accuracy of eleven digits in α leads to an electron g factor 1 , 2 —the most precise prediction of the standard model—that has a greatly reduced uncertainty. Our value of the fine-structure constant differs by more than 5 standard deviations from the best available result from caesium recoil measurements 3 . Our result modifies the constraints on possible candidate dark-matter particles proposed to explain the anomalous decays of excited states of 8 Be nuclei 4 and paves the way for testing the discrepancy observed in the magnetic moment anomaly of the muon 5 in the electron sector 6 . Main The fine-structure constant α is the pillar of our system of fundamental constants. As the measure of the strength of the electromagnetic interaction in the low-energy limit, it has been measured using diverse physical phenomena: the quantum Hall effect, the Josephson effect, the atomic fine structure, atomic recoils and the electron magnetic moment anomaly 7 . Comparison of results across sub-fields of physics is a powerful test of the consistency between theory and experiment. In particular, the fine-structure constant is a crucial parameter for testing quantum electrodynamics (QED) and the standard model. This test relies on the comparison between the measured value of the electron gyromagnetic anomaly a e = ( g e − 2)/2 (where g e is the electron g factor) and its theoretical value. The standard-model prediction a e , SM is dominated by the QED term, given by a perturbation series in α /π, and contains additional contributions from hadronic and weak interactions. Numerical and analytical evaluations of the coefficients of the QED series are firmly established up to the eighth order, and the accuracy of the tenth order has been improved over recent years 1 , 2 , 8 . Assuming that the prediction of the standard model is correct, comparison of the theory with the most accurate measurement of the electron magnetic moment 9 leads to a value of the fine-structure constant with a relative accuracy of 2.4 × 10 −10 , dominated by experimental precision 9 (see Fig. 1 ). Fig. 1: Precision measurements of the fine-structure constant. Comparison of the most precise determinations of the fine-structure constant so far. The red points are from g e − 2 measurements and QED calculations, and the green and blue points are obtained from measurements of caesium and rubidium atomic recoils, respectively. Error bars correspond to ±1 σ uncertainty. Previous data are from ref. 34 (Washington 1987), ref. 10 (Stanford 2002), ref. 18 (LKB 2011), ref. 9 (Harvard 2008), ref. 2 (RIKEN 2019) and ref.
3 (Berkeley 2018). Inset, magnification of the most accurate values of the fine-structure constant. From a different point of view, to test the prediction of the standard model, we need independent measurements of α with a similar precision to evaluate a e , SM . The most successful independent approach is based on the measurement of the recoil velocity ( v r = ħk / m ) of an atom of mass m that absorbs a photon of momentum ħk (refs. 10 , 11 ). Here ħ is the reduced Planck constant ( ħ = h /(2π)) and k = 2π/ λ is the photon wave vector, where λ is the laser wavelength. Such a measurement yields the ratio h / m and then α via the relation $${\alpha }^{2}=\frac{2{R}_{\infty }}{c}\times \frac{m}{{m}_{{\rm{e}}}}\times \frac{h}{m}.$$ (1) The Rydberg constant R ∞ is determined from hydrogen spectroscopy with an accuracy of 1.9 parts per trillion (ppt). The atom-to-electron mass ratio m / m e is obtained from the ratio of the relative atomic mass of the atom A r ( m ) (known at 69 ppt for rubidium 12 , 13 ) and the relative atomic mass of the electron A r ( m e ) (known at 30 ppt) 14 . The speed of light in vacuum, c , has a fixed value. Here, we present a measurement of the recoil velocity of rubidium atoms. We measured h / m ( 87 Rb) = 4.59135925890(65) × 10 −9 m 2 s −1 . In the international system of units adopted in 2019, in which h has a fixed value, we obtain m ( 87 Rb) = 1.44316089776(21) × 10 −25 kg. This is the most accurate atomic mass measurement so far, to our knowledge. This result leads to a fine-structure constant α of $${\alpha }^{-1}=137.035999206(11).$$ The uncertainty contribution from the ratio h / m ( 87 Rb) is 2.4 × 10 −11 (statistical) and 6.8 × 10 −11 (systematic). Our result improves the accuracy on α by a factor of 2.5 over the previous caesium recoil measurement 3 but, most notably, it reveals a 5.4 σ difference from this latest measurement. We built a dedicated experimental setup and implemented robust methods to control systematic effects. By accelerating atoms up to 6 m s −1 in 6 ms and using typical two-photon Raman transitions as beam splitters for the matter waves, we obtained a relative sensitivity on the recoil velocity of 0.6 ppb in 1 h of integration (0.3 ppb on α ). This sensitivity is more than three times better than that obtained using the best atom interferometer based on multi-photon beam splitters 3 , although the latter technique is expected to provide a substantial gain in sensitivity with respect to Raman transitions 15 , 16 . The unprecedented sensitivity of our atom interferometer enables us to experimentally evaluate and mitigate several systematic biases. We recorded data with different experimental parameters, reinforcing the overall confidence in our error budget. We also implemented a Monte Carlo simulation that includes both the Ramsey–Bordé atom interferometer and the Bloch oscillations process. This code models precisely the underlying physics of our interferometer and provides an accurate evaluation of systematic effects, consistent with experimental results. Experiment Our experimental method is illustrated in Fig. 2 . The basic tools of our experiment are Bloch oscillations in an accelerated optical lattice, which enable the coherent transfer of a precise number of photon momenta to the atoms (typically 1,000 ħk ), and a matter-wave interferometer that measures the phase shift due to the change in velocity of the atoms.
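As a numerical check of equation (1), the headline value can be reproduced in a few lines of R. The constants below are assumptions quoted from standard CODATA/AME tables rather than from this paper, except for h/m( 87 Rb), which is the value reported above:

```r
R_inf <- 10973731.568160   # Rydberg constant, m^-1 (CODATA 2018, assumed)
c     <- 299792458         # speed of light, m s^-1 (exact)
Ar_Rb <- 86.909180531      # relative atomic mass of 87Rb (AME 2016, assumed)
Ar_e  <- 5.48579909065e-4  # relative atomic mass of the electron (assumed)
h_m   <- 4.59135925890e-9  # measured h/m(87Rb), m^2 s^-1, from the text

alpha <- sqrt((2 * R_inf / c) * (Ar_Rb / Ar_e) * h_m)  # equation (1)
1 / alpha                  # ~137.035999, cf. 137.035999206(11)
```

This reproduces the quoted α −1 at the level allowed by the rounding of the input constants.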
As in the optical domain, atom interferometry needs tools to split and recombine atomic wave packets; this is accomplished by a sequence of light pulses. The probability of detecting atoms in a given internal state at the output of the interferometer is a sinusoidal function of the accumulated phase difference along the two paths. Thus, the measurement of atomic populations enables the evaluation of the phase shift. Using the combination of the Ramsey–Bordé interferometer configuration and Bloch oscillations, the phase shift is proportional to the ratio h / m (ref. 17 ). Fig. 2: Experimental setup. a , Design of the vacuum chamber; the atom interferometer—a 70-cm-long magnetically shielded tube—is located in the upper area. b , Sequence of Bloch oscillations (B.O., red) and Raman pulses (yellow) used to control the trajectory of atoms before starting the atom interferometer. c , Atom interferometer light pulse sequence. The atomic trajectories for upward (blue) and downward (purple) accelerations are calculated in advance to mitigate the gravity gradient effect. The separation between the two paths of each interferometer is exaggerated for clarity. We produce a cold rubidium sample using an optical molasses in the main chamber. Then, atoms are transported to the interferometry area, a 70-cm-long tube surrounded by a two-layer magnetic shield. The magnetic field is controlled to within 50 nT. To that end, we use an atomic elevator based on two Bloch oscillation pulses (acceleration/deceleration) 17 . These are performed using two vertical counter-propagating laser beams, the frequency difference of which is swept to create an accelerated standing wave. Atomic trajectories are precisely adjusted by controlling this frequency difference. Between the two Bloch oscillation pulses of the elevator, we apply two Raman pulses to prepare atoms in a well defined atomic internal state (see Fig. 2b ). Raman transitions occur between the two hyperfine levels of the ground state of the rubidium atom and are also implemented using two vertical counter-propagating laser beams (with wave vectors k 1 = − k 2 and k R = | k 1 | ≈ | k 2 |). Their frequency difference ω R is controlled to precisely compensate the Doppler shift induced by the accelerations of the atoms. The atom interferometer is illustrated in Fig. 2c . It is implemented with two pairs of π/2 Raman pulses. Each pulse acts as a beam splitter by transferring a momentum of 2 ħk R to an atom with a probability of 50%. The first pair creates a coherent superposition of two spatially separated wave packets in the same internal state with the same momentum. The second pair recombines the two wave packets. Between the second and third π/2 pulses, a Bloch oscillation pulse transfers a momentum of 2 N B ħk B to both wave packets, where N B is the number of Bloch oscillations. The overall phase Φ of the interferometer is given by $$\varPhi ={T}_{{\rm{R}}}\left[{\varepsilon }_{{\rm{R}}}2{k}_{{\rm{R}}}({\varepsilon }_{{\rm{B}}}\frac{2{N}_{{\rm{B}}}\hbar {k}_{{\rm{B}}}}{m}-gT)-\delta {\omega }_{{\rm{R}}}\right]+{\varphi }_{{\rm{L}}{\rm{S}}},$$ (2) where T R is the time between the π/2 pulses of each pair, T is the time between the first and the third π/2 pulses, g is the gravitational acceleration, ϕ LS represents the phase corresponding to parasitic atomic level shifts and δ ω R is the difference of the Raman frequencies between the first and the third π/2 pulses.
ε R and ε B determine the orientation of the Raman and Bloch laser wave vectors, respectively. The fluorescence signal collected in the detection zone gives the number of atoms in each atomic level at the output of the interferometer. Atomic fringes are obtained by measuring the fraction of atoms in a given internal state for varying δ ω R . Using a mean-square adjustment, we calculate δ ω R,0 , the frequency for which Φ = 0. Gravity is cancelled between upward ( ε B = 1) and downward ( ε B = −1) acceleration (see Fig. 2 ). Constant level shifts ϕ LS are mitigated by inverting the direction of the Raman beams ( ε R = ±1). The shot-to-shot parameters of the interferometer (δ ω R , ε R , ε B ) are applied randomly to avoid drifts. We record four spectra (Fig. 3a ) that yield $$\frac{\hbar }{m}=\frac{1}{4}\frac{{\sum }_{{\varepsilon }_{{\rm{R}}},{\varepsilon }_{{\rm{B}}}}|{\rm{\delta }}{\omega }_{{\rm{R}},0}({\varepsilon }_{{\rm{R}}},{\varepsilon }_{{\rm{B}}})|}{4{N}_{{\rm{B}}}{k}_{{\rm{B}}}{k}_{{\rm{R}}}}.$$ (3) Fig. 3: Data analysis. a , Typical set of four spectra recorded by inverting the directions of the Raman and Bloch beams for T R = 20 ms and N B = 500. Each spectrum displays the variation of the relative atomic population with respect to the parameter δ ω R . The lines are least-squares fits used to determine the position of the central fringe displayed on the top of each spectrum. b , Allan deviation σ α of the measurement of the fine-structure constant α at maximum sensitivity ( T R = 20 ms, N B = 500) as a function of the integration time τ . The line corresponds to \({\sigma }_{\alpha }(\tau )=3\times {10}^{-10}/\sqrt{\tau }\) , with τ expressed in hours. Error bars indicate 1 σ uncertainties. c , Datasets used to determine the value of the fine-structure constant, α . Data are obtained by changing the following experimental parameters: the pulse separation time, T R , the number of Bloch oscillations, N B , and their total duration, τ B . The circles and diamonds correspond to two different laser intensities during the π/2 pulses of the interferometer. Error bars denote ±1 σ and are estimated from the standard deviation of the mean. The blue band represents the overall ±1 σ standard deviation. The reduced χ 2 for the combined data is 1.4. Data analysis For the conditions of Fig. 3a , the typical uncertainty on δ ω R,0 is 55 mHz. This leads to a statistical uncertainty on h / m of less than 2 ppb in 5 min. The behaviour of the Allan deviation calculated with a set of h / m measurements over 56 h (Fig. 3b ) shows that the data are independent (no correlations or long-term drift). It also indicates that the sensitivity of our setup on α is 8 × 10 −11 in 14 h. Table 1 presents our error budget. Several systematic effects identified in our previous measurement 18 have been reduced by at least one order of magnitude. By controlling the experimental parameters of the atomic elevator, we are able to adjust precisely the altitude of atomic trajectories within 100 μm in such a way that the gravity gradient cancels out between the configurations ε B = 1 and ε B = −1 (see Fig. 2c ). The effect of Earth’s rotation is suppressed by continuously rotating one of the Raman beams during the interferometric pulse sequence 19 . The long-term drift of the beam alignment is corrected with an accuracy better than 4 μrad every 45 min by controlling the retro-reflection of the laser beams via a single-mode optical fibre.
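The fringe analysis described above lends itself to a compact sketch: locate the central fringe of each spectrum by least squares, then combine the four central-fringe frequencies via equation (3). Everything numeric below (contrast, noise, the placeholder δω R,0 values) is illustrative, with only T R , N B and the ~780-nm wavelength taken from the text:

```r
## locate the central fringe of one spectrum by least squares
TR      <- 20e-3                       # pulse separation, s
d_omega <- seq(-150, 150, by = 10)     # detuning offsets around centre, rad s^-1
# simulated fringe: sinusoidal population vs detuning, true centre at +12
y   <- 0.5 * (1 - 0.8 * cos(TR * (d_omega - 12))) + rnorm(length(d_omega), sd = 0.01)
fit <- nls(y ~ 0.5 * (1 - C * cos(TR * (d_omega - w0))),
           start = list(C = 0.5, w0 = 0))
coef(fit)["w0"]                        # estimate of delta_omega_R,0

## combine four spectra per equation (3)
N_B <- 500                             # Bloch oscillations
k   <- 2 * pi / 780e-9                 # k_B ~ k_R, m^-1 (~780-nm beams)
d_omega_R0 <- c(9.49e7, -9.49e7, 9.49e7, -9.49e7)  # placeholders, rad s^-1
hbar_m <- (sum(abs(d_omega_R0)) / 4) / (4 * N_B * k * k)
hbar_m * 2 * pi                        # h/m, ~4.6e-9 m^2 s^-1 as in the text
```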
Our lasers are locked to a stabilized Fabry–Pérot cavity and their frequencies are regularly measured using a frequency comb with an accuracy better than 4 kHz. The low density of our atomic sample implies a reduction of the effects of the refractive index and atom–atom interactions 20 to less than 1 ppt. Effects related to the geometrical parameters of the laser beams (Gouy phase and wavefront curvature) are mitigated by using a 4.9-mm-waist beam passing through an apodizing filter and by adjusting the curvature with a shearing interferometer. Table 1 Error budget on α. Among the recently identified systematic effects, the most subtle one is related to correlations between the efficiency of the Bloch oscillations and short-scale spatial fluctuations in laser intensity. This effect raises the question of how to calculate the photon momentum in a distorted optical field. Relying on our previous work 21 , we reduce the contribution of this effect to the error budget to less than 0.02 ppb. Because of the expansion of the atomic cloud, there is a residual phase shift that is due to the variation of the intensity perceived by the atoms. This phase shift depends on the velocity distribution 22 , 23 . We implement a method to compensate for the mean intensity variation and use a Monte Carlo simulation to evaluate the residual bias due to this Raman phase shift. During the interferometer sequence, we apply a frequency ramp to compensate the Doppler shift induced by gravity. Nonlinearity in the delay of the optical phase-lock loop induces a residual phase shift that is measured and corrected for each spectrum. These systematic effects were not considered in our previous measurement 18 (see Fig. 1 ), which could explain the 2.4 σ discrepancy between that measurement and the present one. Unfortunately, we do not have available data to evaluate retrospectively the contributions of the phase shift in the Raman phase-lock loop and of short-scale fluctuations in the laser intensity to the 2011 measurement. Thus, we cannot firmly state that these two effects are the cause of the 2.4 σ discrepancy between our two measurements. Overall, systematic errors contribute an uncertainty of 6.8 × 10 −11 . Figure 3c shows the data used for the determination of α . Each point represents about 10 h of data. We took advantage of the sensitivity and reproducibility of our setup to study systematic effects by varying the experimental parameters (such as pulse-separation time, number of Bloch oscillations, duration of the Bloch pulse, laser intensity and atomic trajectories). In parallel, we performed theoretical modelling and numerical simulations to interpret the experimental observations. The measurement campaign lasted one year and ended when consistent values were obtained for the different configurations. Using our measurement of the fine-structure constant, the standard-model prediction of the anomalous magnetic moment of the electron becomes $${a}_{{\rm{e}}}({\alpha }_{{\rm{LKB2020}}})=\frac{{g}_{{\rm{e}}}-2}{2}=1{,}159{,}652{,}180.252\,(95)\times {10}^{-12}.$$ The relative uncertainty on g e is below 0.1 ppt, which makes this the most accurate prediction of the standard model. Comparison with the direct experimental measurement a e , exp (ref. 9 ) gives δ a e = a e,exp − a e ( α LKB2020 ) = (4.8 ± 3.0) × 10 −13 (+1.6 σ ), whereas comparison with caesium recoil measurements gives δ′ a e = a e,exp − a e ( α Berkeley ) = (−8.8 ± 3.6) × 10 −13 (−2.4 σ ).
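A short check of these comparisons, and of the one-sided 95% limits quoted in the Discussion below, can be done in R assuming a Gaussian likelihood for δa e truncated at zero, which is the standard prescription for a sign-definite new-physics contribution; the inputs are the rounded values above:

```r
mu <- 4.8e-13; sigma <- 3.0e-13   # delta_a_e from the comparison above
mu / sigma                        # significance: ~ +1.6 sigma

p0 <- pnorm(0, mu, sigma)         # probability mass below zero
# positive-delta theories: 95th percentile of the x > 0 truncated Gaussian
qnorm(p0 + 0.95 * (1 - p0), mu, sigma)   # ~ 9.8e-13, as quoted below
# negative-delta theories: 5th percentile of the x < 0 truncated Gaussian
qnorm(0.05 * p0, mu, sigma)       # ~ -3.5e-13; the text quotes -3.4e-13,
                                  # the small difference being input rounding
```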
The uncertainty on δ a e is dominated by a e,exp . Discussion Our measurement sets additional limits on theories beyond the standard model that lead to a contribution to a e . Using a Bayes method 24 , our result implies that for a theory with positive δ a e , we can reject δ a e > 9.8 × 10 −13 with a 95% confidence level, and for a theory with negative δ a e , we can reject δ a e < −3.4 × 10 −13 with a 95% confidence level. For example, our result modifies the limits on a possible substructure within the electron. If the electron is composed of constituent particles of mass m * bound together by some unknown attraction, its natural size should be R = ħ /( m * c ) and its magnetic moment would be modified by δ a e ≈ m e / m * using the simplest analysis. According to the chirally invariant model 25 , our result excludes regions with m * < 520 GeV/ c 2 or R > 4 × 10 −19 m with a confidence level of 95%. These are stringent limits set by low-energy experiments, although they are not yet at the limits of the Large Electron–Positron collider (the largest electron–positron collider available today) 26 . Moreover, our result sets the stage for testing whether the persistent discrepancy of 3.6 σ between the experimental value 5 and the standard-model prediction of the magnetic moment of the muon 27 , 28 ( a μ ) exists for electrons. If this discrepancy (δ a μ ) is the signature of new physics, similar effects could be observable for electrons. Using naive scaling, the effects on the electron would be of the order of ( m e / m μ ) 2 δ a μ (ref. 6 ), where m μ is the mass of the muon. Figure 4a summarizes the overall contributions of experiments involved in the determination of δ a e . We also include the largest theoretical contributions from the fifth order of the QED series and the hadronic term. The dominant contribution comes from the direct measurement of the electron moment anomaly, a e , exp . For the first time, the contribution of the recoil measurement ( h / m ) is at the level of ( m e / m μ ) 2 δa μ ≈ 6.5 × 10 −14 , the value of δ a e deduced from the naive scaling (horizontal green bar). In the next years, improvement of one order of magnitude is expected for the accuracy of the measurement of a e , exp (ref. 29 ); it will then be possible to probe physics beyond the standard model with comparable information from both the electron and muon. Fig. 4: Impact on the test of the standard-model prediction of a e and limits on hypothetical X boson. a , Summary of contributions to the relative uncertainty on δ a e . The horizontal green line corresponds to the δ a e value obtained by taking into account the muon magnetic moment discrepancy and using a naive scaling model. Previous data from ref. 9 (Harvard 2008), ref. 18 (LKB 2011), ref. 3 (Berkeley 2018), ref. 13 (Atomic Mass Evaluation, AME 2016), ref. 14 (Max-Planck-Institut für Kernphysik, MPIK 2014) and ref. 2 (RIKEN 2019). Also shown are the 10th-order and hadronic contributions in the calculation of the electron moment anomaly. b , Exclusion area in ( ε , m X ) space for the X boson. The grey, blue and light purple regions are ruled out by the E141 31 , NA64 32 and BaBar 35 experiments, respectively. A test based on the magnetic moment of the electron rules out the orange region when using the Berkeley measurement 3 and the purple region when using the present result. Disregarding the Berkeley measurement, the remaining allowed range at 16.7 MeV is depicted by the thick red line. 
The zone favoured by δ a e > 0, as deduced from this work, is shown by grey dots. Finally, the anomaly reported in the angular distribution of positron–electron pairs ( e + e − ) produced in 8 Be nuclear transitions 4 could be explained by the emission of a hypothetical protophobic gauge boson X with a mass of 16.7 MeV followed by the decay X → e + e − (ref. 30 ). The X boson is parameterized by a mixing strength ε with electrons and a non-zero mass m X . Figure 4b presents the exclusion space for those parameters. At 16.7 MeV, the upper limit of ε is set by the g e − 2 value of the electron and its lower limit by electron beam dump experiments (E141 31 and NA64 32 collaborations). Recently, new results from the NA64 collaboration 33 excluded ε values lower than 6.8 × 10 −4 . Because vector coupling implies δ a e > 0, the result from a caesium recoil experiment imposes strong constraints on ε ; combined with the NA64 result, it rejects pure vector coupling of X (16.7 MeV) at the 90% confidence level. By contrast, our measurement of α gives δ a e > 0 and favours pure vector coupling with ε = (8 ± 3) × 10 −4 , which could explain the 8 Be anomaly. Methods Experimental setup The design of the science chamber is shown in Fig. 2a. A three-dimensional magneto-optical trap (MOT) is loaded by a slow atomic beam generated in a two-dimensional MOT. An optical molasses is used to further cool the atoms to a temperature of 4 μK. The temperature of the atomic cloud is measured using Doppler-sensitive Raman transitions. After being released from the optical molasses ( t = 0), atoms are transported to a separate chamber in which the vacuum is controlled at the level of a few 10 −11 mbar. The chamber consists of a long tube placed 50 cm above the centre of the MOT. One main difference from our previous setup 18 is that the atom interferometer is realized in this separate long tube, where the magnetic field is precisely controlled using a uniformly wound solenoid shielded by two layers of μ-metal. Lasers for the Raman transitions are produced using second-harmonic generation from 1.56-μm lasers. These two lasers are phase-locked, and the scheme used to control the frequency difference between them during the interferometer sequence is shown in Extended Data Fig. 3a. The power used to drive Raman transitions is at most 70 mW per beam. The lasers are detuned with respect to the one-photon transition (Rb D2 line) by about 60 GHz. Laser beams for the Bloch oscillations are produced from a 1.56-μm fibre laser that is split into two. Each beam seeds an optical system (μQuans) in which it passes through an acousto-optic modulator to control the laser frequency, is then amplified and passes through a periodically poled lithium niobate crystal for second-harmonic generation (about 800 mW at 780 nm). The two Bloch beams are filtered through a Rb vapour cell to reduce the resonant component of the amplified spontaneous emission of the amplifiers 36 . The total power is 400 mW for a peak intensity of 530 mW cm −2 . The laser is blue-detuned by 40 GHz from the Rb D2 line. The two Raman beams have linear and orthogonal polarizations. Together with one of the Bloch beams, they are transported through the same single-mode polarization-maintaining fibre to the top of the cell and point downwards (Extended Data Fig. 1a). A polarizing beam splitter is placed at the bottom of the vacuum cell.
It transmits one of the Raman beams, which is then retro-reflected by a horizontal mirror placed on a vibration isolation table to achieve the counter-propagating configuration. The second Raman beam and the Bloch beam are rejected by the polarizing beam splitter. The inversion of the Raman effective wave vector is performed by rotating the polarization of the Raman beams by 90° before the fibre. The second Bloch beam is transported by an independent single-mode polarization-maintaining fibre to the bottom of the cell and points upwards. The waist of the beams at the output of the collimators is 4.9 mm. An apodizing filter is placed after each collimator 3 . Experimental sequence To transport atoms to the interferometry area, we use an atomic elevator based on two Bloch oscillation pulses (acceleration/deceleration) 20 . By adjusting the parameters of the elevator (number of Bloch oscillations and delays), we can precisely choose the initial position z 0 and velocity v 0 of the cloud at the start of the interferometer t interf. . Between the two Bloch oscillation pulses of the elevator, we apply two Raman π pulses with a blow-away pulse in between. With this sequence, atoms are prepared in the magnetically insensitive state, and by controlling the parameters of the first Raman π pulse (intensity and duration) one can set the width of the vertical velocity distribution of the atomic cloud. Using a pulse duration of 189 μs, we obtain a velocity distribution with a full-width at half-maximum of 1.7 mm s −1 . After the preparation sequence, the cloud contains 500,000 atoms. The interferometer consists of four π/2 Raman pulses of the same duration arranged in two identical Ramsey sequences (delay T R ) separated by a duration T . The Bloch oscillation pulse is applied between the second and third Raman pulses (see Fig. 2c or Extended Data Fig. 1c for definitions of the pulse timing notation). To perform Bloch oscillations, we load the atoms at time t acc. into an optical lattice by adiabatically ramping up the laser intensity for τ adiab. = 500 μs. Then, we implement N B oscillations by accelerating the lattice during time τ B , which is proportional to N B and in our experiment corresponds to τ osc = 12 μs per oscillation unless otherwise specified. Finally, the lattice is adiabatically ramped down for another 500 μs. The detection scheme (Extended Data Fig. 1b) is composed of three horizontal retro-reflected light sheets through which the atoms fall successively. The first light sheet is resonant with atoms in the state | F = 2 ⟩ , which emit fluorescence photons collected by a large-area photodiode ( F , hyperfine quantum number). A mask placed at the bottom of the light sheet blocks the retro-reflection, so that the detected atoms are pushed away from the detection region. The remaining atoms in | F = 1 ⟩ pass through a light sheet that repumps them into | F = 2 ⟩ , and they are subsequently detected in a third light sheet similar to the first one. The relative population of atoms in each state is then obtained from the collected fluorescence signals. Theoretical phase shift at the output of the interferometer To maintain the resonance condition of the Raman transitions, the frequency difference ω R between the lasers that drive them is carefully adjusted. In addition to the frequency-difference shift δ ω R between the first and third π/2 pulses, we apply a ramp at rate β during the Ramsey sequences to compensate for gravity.
Thus, the effective wave vector of Raman transitions varies along the interferometer, which can induce a bias 37 . By treating this effect as a perturbation in the Lagrangian formalism 38 , we obtain a modified version of equation ( 2 ): $$\begin{array}{l}\varPhi ={T}_{{\rm{R}}}\left[{\varepsilon }_{{\rm{R}}}2{k}_{{\rm{R}}}\left({\varepsilon }_{{\rm{B}}}\frac{2{N}_{{\rm{B}}}\hbar {k}_{{\rm{B}}}}{m}-gT\right)-\delta {\omega }_{{\rm{R}}}\right]+{\varphi }_{{\rm{LS}}}\\ \,+\frac{{T}_{{\rm{R}}}}{c}\left\{\beta \left[gT\left(\frac{T}{2}+{T}_{{\rm{R}}}\right)-{v}_{0}T+{\varepsilon }_{{\rm{B}}}\frac{2{N}_{{\rm{B}}}\hbar {k}_{{\rm{B}}}}{m}\left({t}_{{\rm{acc.}}}+{\tau }_{{\rm{adiab.}}}+\frac{{\tau }_{{\rm{B}}}}{2}-{T}_{{\rm{R}}}-T\right)\right]\right.\\ \,\left.+2{k}_{{\rm{R}}}\left(\frac{2{N}_{{\rm{B}}}\hbar {k}_{{\rm{B}}}}{m}-gT\right)\left(2{v}_{0}-\frac{{T}_{{\rm{R}}}g}{2}+{\varepsilon }_{{\rm{B}}}\frac{2{N}_{{\rm{B}}}\hbar {k}_{{\rm{B}}}}{m}-gT\right)\right\},\end{array}$$ (4) where k R is defined as the effective wave vector when the laser frequency difference is set to address atoms at zero velocity. This formula must be used to compute h / m from the central frequency determinations of the four spectra. However, because the additional term (second and third lines in equation ( 4 )) is independent of the direction of the Raman beams, the determination of h / m from equation ( 3 ) remains valid, provided that the value of the Raman wave vector corresponds to the one resulting from addressing atoms at zero velocity. Because we use this value, there is no correction associated with this effect. Evaluation of uncertainty budgets Thanks to the high sensitivity of our atom interferometer, a wide range of systematic effects was investigated and evaluated experimentally. Furthermore, we performed the measurements of h / m with various experimental parameters ( N B , T R , τ B , Raman laser intensity). The parameters are listed in Extended Data Table 1. Given that many systematic effects depend on the position or velocity of the atoms, we implemented a Monte Carlo simulation of the experiment to calculate such effects precisely. The trajectories of the atoms during the measurement sequence were precisely controlled by means of the atomic elevator. The Monte Carlo simulation was based on the calculation of atomic trajectories using the real-time sequence of the experiment. Quantities depending on the trajectory of the atoms (such as the contrast of Rabi oscillations or the efficiency of Bloch oscillations) were calculated and compared with experimental results to confirm the validity of the model. Calculation of the final uncertainty The final value of h / m was obtained from hundreds of individual measurements of h / m . For each measurement, an uncertainty was calculated. This uncertainty has several origins: some are unique to a given measurement (for example, the uncertainty of the fit or the laser frequency measurement), some depend on the parameters of the measurement (for example, the light shift and the gravity gradient) and some are common to all measurements (for example, the beam parameters). The uncertainties package of Python was used to compute the weighted average value of h / m . The final uncertainty is a weighted quadratic sum of all the elementary sources of uncertainty. The error budget is obtained by combining those contributions according to their origin.
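A minimal sketch of this combination step, using the uncertainties package mentioned above; the h / m values, their error bars and the common systematic term are all invented placeholders:

```python
import numpy as np
from uncertainties import ufloat

# Invented stand-ins for individual h/m determinations (m^2 s^-1), each with
# its per-measurement uncertainty; placeholders, not real data.
base = 7.30740e-10
measurements = [ufloat(base,           2.0e-18),
                ufloat(base + 1.2e-18, 1.5e-18),
                ufloat(base - 0.8e-18, 1.8e-18)]

values = np.array([m.n for m in measurements])
sigmas = np.array([m.s for m in measurements])
weights = 1.0 / sigmas**2

mean = np.sum(weights * values) / np.sum(weights)   # weighted average of h/m
stat = np.sqrt(1.0 / np.sum(weights))               # its statistical uncertainty
chi2_red = np.sum(weights * (values - mean)**2) / (len(values) - 1)

# Uncertainties common to all measurements are then added in quadrature.
common_sys = 0.5e-18                                # placeholder systematic term
total = np.hypot(stat, common_sys)
print(f"h/m = {mean:.6e} +/- {total:.1e} m^2 s^-1 (reduced chi2 = {chi2_red:.2f})")
```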
Monte Carlo simulation In this simulation, each atom is described by an atomic wave packet with mean momentum p ( t ), a phase ϕ ( t ) at its mean position r ( t ), and the real amplitude a ( t ). The momentum p ( t ) and the position r ( t ) of the wave packet evolve using classical forces that act on the atom, and the phase is calculated along this path. The sequence is split into different stages in which the accumulated phase, the evolution of the trajectory and the amplitude are computed. Three different stages are considered: free fall in the gravity field, Raman transitions and Bloch oscillations. During free fall, the amplitude remains constant, the trajectory is given by classical physics and the phase is computed using the action along the classical trajectory. For Raman transitions, the evolution is calculated in an accelerated frame in which the Raman frequency is constant. Analytical solutions for a finite pulse duration in the momentum representation are used 39 , allowing us to compute the amplitude and the phase. The displacement is calculated from the derivative of the phase with respect to the momentum. For Bloch oscillations, the evolution is calculated in the frame of the lattice. In this frame, the evolution is periodic and no displacement of the wave packet occurs. The phase evolution depends on three terms: (i) the phase due to the absorption and stimulated emission of N B photons: ϕ ph = N B [ ϕ up ( x , t ) − ϕ down ( x , t )], where ϕ up and ϕ down are the phases of the two lasers of the lattice; (ii) the phase due to acceleration: ϕ acc = m ( g − γ ) τ B / ħ , where γ is the acceleration of the lattice and τ B is the total duration of the acceleration; and (iii) the phase due to the lattice, ϕ latt , which is calculated from the average energy of the atom in the first band in the tight-binding limit $${\varphi }_{{\rm{latt}}}=[2\sqrt[4]{{E}_{{\rm{r}}}^{2}{V}_{{\rm{up}}}{V}_{{\rm{down}}}}+{(\sqrt{{V}_{{\rm{up}}}}-\sqrt{{V}_{{\rm{down}}}})}^{2}]\frac{{\tau }_{{\rm{B}}}}{\hbar },$$ (5) where E r is the recoil energy and V up/down is the potential (light shift) of each individual laser of the lattice. From this energy, a classical force that acts on the atom is also calculated. The amplitude is calculated independently: the efficiency of the Bloch oscillation, which depends on both the depth of the lattice and the magnitude of the acceleration, is taken from tables computed using an independent numerical simulation 40 , 41 . The analytical formulas for the Raman and Bloch beam evolution are obtained assuming that the laser beams used are plane waves. Generalization to other beams is obtained by using a formula with a plane wave that locally fits the phase of the laser (amplitude, phase and phase gradient). These local parameters are obtained analytically when the simulation is performed with Gaussian beams. In the case of an arbitrary beam, numerical values are obtained using plane-wave decomposition of the solution of the Helmholtz equation. We compute the Fourier transform \(\tilde{A}({k}_{x},{k}_{y},{z}_{0})\) of the wavefront at position z 0 . 
At any position, the complex amplitude is calculated using $$A(x,y,z)=\iint {{\rm{e}}}^{{\rm{i}}({k}_{x}x+{k}_{y}y+\sqrt{{k}^{2}-{k}_{x}^{2}-{k}_{y}^{2}}z)}\tilde{A}({k}_{x},{k}_{y},{z}_{0}){\rm{d}}{k}_{x}{\rm{d}}{k}_{y}$$ (6) and the recoil is determined using $$\begin{array}{c}{k}_{z}(x,y,z)=\\ \iint \sqrt{{k}^{2}-{k}_{x}^{2}-{k}_{y}^{2}}{{\rm{e}}}^{{\rm{i}}({k}_{x}x+{k}_{y}y+\sqrt{{k}^{2}-{k}_{x}^{2}-{k}_{y}^{2}}z)}\tilde{A}({k}_{x},{k}_{y},{z}_{0}){\rm{d}}{k}_{x}{\rm{d}}{k}_{y}.\end{array}$$ (7) The Monte Carlo simulation is performed as follows: an initial set of N wave packets (index i ) is randomly drawn from a Gaussian distribution for both position and velocity. For each wave packet, and for the two paths (labelled A and B) of the interferometer, the final amplitude \({a}_{i}^{{\rm{A}}/{\rm{B}}}\) , position \({{\bf{r}}}_{i}^{{\rm{A}}/{\rm{B}}}\) , momentum \({{\bf{p}}}_{i}^{{\rm{A}}/{\rm{B}}}\) and phase \({\varphi }_{i}^{{\rm{A}}/{\rm{B}}}\) are calculated. The phase of the interferometer is then obtained from: $$\varPhi =\frac{1}{N}\mathop{\sum }\limits_{i=1}^{N}{a}_{i}^{{\rm{A}}}{a}_{i}^{{\rm{B}}}\,\left[{\varphi }_{i}^{{\rm{A}}}-{\varphi }_{i}^{{\rm{B}}}+\frac{({{\bf{p}}}_{i}^{{\rm{B}}}+{{\bf{p}}}_{i}^{{\rm{A}}})\cdot ({{\bf{r}}}_{i}^{{\rm{B}}}-{{\bf{r}}}_{i}^{{\rm{A}}})}{2\hbar }\right].$$ (8) The simulation is run for each of the four spectra. The value of h / m is deduced using equation ( 3 ). Frequency measurement The Bloch laser and one of the Raman lasers are locked to a Fabry–Pérot cavity. The cavity is itself locked to the two-photon transition from 5S 1/2 ( F = 3) to 5D 5/2 ( F = 5) in 85 Rb (ref. 42 ). The frequencies of those two lasers are measured using a commercial frequency comb (MenloSystems), which is referenced by a 100-MHz signal synchronized with the French National
The validation and application of theories in physics require the measurement of universal values known as fundamental constants. A team of French researchers has just conducted the most accurate measurement to date of the fine-structure constant, which characterizes the strength of the interaction between light and charged elementary particles, such as electrons. This value has now been determined with an accuracy of 11 significant digits, improving on the precision of the previous measurement by a factor of 3. The scientists achieved such precision by enhancing their experimental set-up to reduce inaccuracies and to control effects that can perturb the measurement. The experiment involves rubidium atoms cooled to a temperature approaching absolute zero. When they absorb photons, these atoms recoil at a velocity that depends on their mass. The highly precise measurement of this phenomenon helps to improve knowledge of the fine-structure constant. These results, which will appear in Nature on 3 December, open new prospects for testing the Standard Model's theoretical predictions. The use of more accurate constants can help to answer fundamental questions, such as the origin of dark matter in the universe.
10.1038/s41586-020-2964-7
Medicine
Taking the itch out of cancer immunotherapy
Ryota Tanaka et al. Activation of CD8 T cells accelerates anti-PD-1 antibody-induced psoriasis-like dermatitis through IL-6, Communications Biology (2020). DOI: 10.1038/s42003-020-01308-2 Journal information: Communications Biology
http://dx.doi.org/10.1038/s42003-020-01308-2
https://medicalxpress.com/news/2020-10-cancer-immunotherapy.html
Abstract Use of immune checkpoint inhibitors that target programmed cell death-1 (PD-1) can lead to various autoimmune-related adverse events (irAEs) including psoriasis-like dermatitis. Our observations on human samples indicated enhanced epidermal infiltration of CD8 T cells in PD-1 signal blockade-induced psoriasis-like dermatitis, the pathogenesis of which appears to be dependent on IL-6. By using a murine model of imiquimod-induced psoriasis-like dermatitis, we further demonstrated that PD-1 deficiency accelerates skin inflammation, with infiltration of activated cytotoxic CD8 T cells into the epidermis, which engage in pathogenic cross-talk with keratinocytes, resulting in production of IL-6. Moreover, genetically modified mice lacking PD-1 expression only on CD8 T cells developed accelerated dermatitis, and blockade of IL-6 signaling by an anti-IL-6 receptor antibody ameliorated the dermatitis. Collectively, PD-1 signal blockade-induced psoriasis-like dermatitis is mediated by PD-1 signaling on CD8 T cells, and IL-6 is likely to be a therapeutic target for this dermatitis. Introduction For cancer immune therapies that regulate T cells to enhance immune responses, T cells must successfully recognize tumor antigens through their T-cell receptors (TCRs) and become activated in order to expel tumors 1 , 2 . In addition, a number of stimulatory and inhibitory receptor and ligand pairs expressed on T cells, antigen-presenting cells (APCs) or tumor cells, termed immune checkpoints, also play crucial roles in both T cell activation and inhibition 3 . Programmed cell death-1 (PD-1) is one of these immune checkpoint molecules, which was initially detected in activated murine T cells upon TCR engagement 4 and subsequently in exhausted T cells 5 . Its ligands, programmed cell death-ligand 1 (PD-L1) and PD-L2, are expressed on various cell types, including tumor-infiltrating hematopoietic cells such as APCs, and on non-hematopoietic cells such as cancer cells 6 , 7 . The interaction between PD-1 and its ligands reduces T cell function by inducing exhaustion, apoptosis, anergy, and downregulation of cytokine production by T cells, leading to suppression of the antitumor immune response 8 , 9 . In melanoma, PD-1 expression is detected on tumor-infiltrating lymphocytes including tumor antigen–specific T cells, which are functionally impaired. Moreover, the biological activity of these cells can be partially recovered by inhibiting the PD-1 pathway 10 , 11 , 12 . Indeed, anti-PD-1 blocking antibodies such as nivolumab and pembrolizumab function as immune checkpoint inhibitors, and have proven effective for the treatment of melanoma 13 , 14 . However, as the PD-1 pathway also maintains peripheral T cell tolerance and regulates inflammation 15 , inhibition of this pathway may lead to autoimmune manifestations referred to as immune-related adverse events (irAEs) 16 , 17 . Early clinical trials and reviews have reported that anti-PD-1 antibody-related irAEs occur in more than 70% of patients, and cutaneous irAEs are the most frequently observed (approximately 40%). Further, most cutaneous irAEs are mild (low-grade) and manageable with topical steroids 16 , 18 , 19 , 20 , 21 . On the other hand, it has also been recently reported that two-thirds of patients with cutaneous irAEs required systemic corticosteroids for the treatment of eruptions, and 19% of patients discontinued cancer immunotherapy due to irAEs, even though 75% experienced antitumor responses with the therapy 22 .
High-dose and/or long-term use of systemic immunosuppressive therapies is required to control such irAEs 23 , potentially resulting in prolonged interruption of cancer treatment. Moreover, these immunosuppressive therapies may also abrogate the antitumor response by counteracting lymphocyte activation 20 , 24 . Therefore, more efficacious systemic therapies that resolve the symptoms of irAEs, enable shorter interruptions of cancer treatment and do not interfere with antitumor effects would be ideal. In addition, a recent American Society of Clinical Oncology guideline suggests that cutaneous irAEs are increasingly recognized as a contributing factor to treatment noncompliance, discontinuation, or dose modification 24 . Plausibly, such skin manifestations cause changes in appearance along with discomfort, which reduces patient quality of life and results in loss of treatment motivation. We previously reported a case of nivolumab-induced psoriasis-like dermatitis 25 , which has been reported to develop in patients treated with anti-PD-1/PD-L1 antibody 25 , 26 . The latest post-marketing surveillance of nivolumab in Japan reports that 2,391 cases of cutaneous irAE occurred, of which 103 cases (4.3%) were labeled as psoriasis. Notably, more than 18% (19/103) of those cases were reportedly severe 27 . Importantly, the mechanism by which psoriasis-like dermatitis occurs following PD-1/PD-L1 inhibition remains unknown, and strategies to mitigate the occurrence of especially severe cases are yet to be identified. With the recent increase in the use of anti-PD-1 antibody for patients with various types of cancers, clarification of the underlying mechanisms and development of more efficacious treatment for PD-1 signal blockade-induced psoriasis-like dermatitis are needed. Application of imiquimod (IMQ), a toll-like receptor 7/8 agonist, is known to induce psoriasis-like dermatitis in both humans 28 and mice 29 . Furthermore, it has already been reported that both PD-1 genetic deficiency and blockade of PD-1 with a specific monoclonal antibody exacerbate IMQ-induced psoriasis-like dermatitis in mice 30 . Therefore, it is likely that the pathophysiological mechanism of PD-1 signal blockade-induced psoriasis-like dermatitis could be elucidated using this murine model. The present study aimed to elucidate the characteristics and mechanisms underlying psoriasis-like dermatitis induced by blocking PD-1 signaling, and to identify suitable treatments. The observations from human samples and further experiments using a preclinical murine model of IMQ-induced psoriasis-like dermatitis demonstrated that the dermatitis was accelerated by an increase of skin-infiltrating activated, cytotoxic CD8 T cells, allowing pathogenic crosstalk with keratinocytes and subsequent production of IL-6. Moreover, blockade of interleukin (IL)-6 signaling by an anti-IL-6 receptor blocking antibody (MR16-1) restrained the severe dermatitis provoked by PD-1 signal blockade by inhibiting both Th17 cell differentiation and cytotoxic CD8 T cell activation. Thus, this highlights the significance of IL-6 blockade therapy specifically for the regulation of PD-1 signal blockade-induced dermatitis. Results Increased CD8/CD4 ratio of epidermal-infiltrating lymphocytes in cases of anti-PD-1 antibody-induced psoriasis-like dermatitis compared to cases of idiopathic psoriasis Immunohistochemical (IHC) evaluation of skin biopsy samples, as demonstrated in Fig.
1a , revealed that CD8/CD4 ratios of epidermal-infiltrating mononuclear cells were significantly increased in cases of anti-PD-1 antibody-induced psoriasis-like dermatitis (median ± standard deviation [SD], 3.48 ± 1.0) compared to that in cases of idiopathic psoriasis (1.06 ± 0.19, P = 0.008 by Mann–Whitney U test, Fig. 1b ). Fig. 1: Characteristics of anti-programmed cell death-1 (PD-1) antibody-induced psoriasis-like dermatitis. a Representative clinical images of patients with idiopathic psoriasis and anti-PD-1 antibody-induced psoriasis-like dermatitis. Both patients developed well-defined scaly plaques scattered over their trunks and extremities. b Representative hematoxylin and eosin (HE)-stained, and anti-CD8 or CD4 antibody-stained skin biopsy samples from patients with idiopathic psoriasis and anti-PD-1 antibody-induced psoriasis-like dermatitis. Scale bars = 50 μm. c CD8/CD4 ratios of epidermal-infiltrating lymphocytes ( n = 6 and 7 in idiopathic psoriasis and anti-PD-1 antibody-induced psoriasis-like dermatitis, respectively). ** P < 0.01 by nonparametric 2-tailed Mann–Whitney U test. d Profiles of serum interleukin (IL)-6 levels in serum samples from anti-PD-1 antibody-treated cancer patients who developed psoriasis-like dermatitis as an immune-related adverse event (irAE, n = 8) and those with no irAE ( n = 19). **** P < 0.0001 by nonparametric 2-tailed Mann–Whitney U test. Elevated serum IL-6 correlates with the development of anti-PD-1 antibody-induced psoriasis-like dermatitis in humans We reported in our preliminary study that only increased serum levels of IL-6, but not those of IL-17A, interferon (IFN)-γ and IL-8, correlated with the development of anti-PD-1 antibody-induced psoriasis-like dermatitis in patients with malignant melanoma 25 . In order to validate this phenomenon, we analyzed the serum levels of IL-6 in eight cases of psoriasis-like dermatitis, and 19 cases without any irAEs. Cases of psoriasis-like dermatitis exhibited significantly higher serum IL-6 levels compared to cases without any irAEs ( P < 0.0001 by Mann–Whitney U test, Fig. 1c ). Our additional analysis using the remaining samples showed that there was no significant difference in the serum levels of soluble IL-6 receptor alpha (sIL-6Rα) between the two groups (six cases of psoriasis-like dermatitis and 18 cases without any irAEs; Supplemental Fig. 1 ). Collectively, these results suggest that the pathogenesis of anti-PD-1 antibody-induced psoriasis-like dermatitis may depend on IL-6. PD-1 −/− mice exhibit more severe IMQ-induced psoriasis-like dermatitis than WT mice PD-1 −/− mice developed significantly more severe IMQ-induced psoriasis-like dermatitis (Fig. 2a ) when compared to WT mice, as revealed by clinical measurements including ear swelling (change from the baseline at day 7, 20.6 ± 2.6 μm vs. 7.2 ± 1.4 μm, P = 0.0014 by two-way ANOVA, Fig. 2b ) and PASI score, which represents the severity of erythema, scaling and skin thickness (7.8 ± 0.2 vs. 3.2 ± 0.2, P < 0.0001 by two-way ANOVA, Fig. 2c ), at day 7. Moreover, pathological analysis, as shown in Fig. 2d , of epidermal thickness (61.5 ± 7.9 μm vs. 35.6 ± 3.2 μm, P = 0.008 by Mann–Whitney U test, Fig. 2e ) and the number of epidermal neutrophilic micro-abscesses (3.2 ± 0.58/ear slide vs. WT 0.6 ± 0.24/ear slide, P = 0.008 by Mann–Whitney U test, Fig. 2f ) at day 7 further indicates the protective role of PD-1 in IMQ-induced psoriasis-like dermatitis. Fig.
2: Comparison of clinical and histological appearance and cytokine mRNA expression in imiquimod (IMQ)-induced psoriasis-like dermatitis between PD-1 −/− mice and wild-type (WT) mice. a Representative clinical images at day 7 of IMQ-induced psoriasis-like dermatitis in WT and PD-1 −/− mice. Application of vehicle cream was used as a control. b , c The course of ear swelling ( b ) and PASI score ( c ) representing the severity of erythema, scaling, and skin thickness of WT and PD-1 −/− mice. ** P <0.01 and **** P <0.0001 by two-way ANOVA. d Representative images of HE-stained ear samples from IMQ-induced psoriasis-like dermatitis in WT and PD-1 −/− mice at day 7. Application of vehicle cream was used as a control. Scale bars, 100 μm. e , f Epidermal hyperplasia ( e ), and the number of epidermal neutrophilic micro-abscesses ( f ) in the ear samples from IMQ-applied WT or PD-1 −/− mice ( n =5 in each group). Data are shown as mean ± standard deviation (SD). Data are representative of three independent experiments. ** P <0.01 by nonparametric 2-tailed Mann–Whitney U test. g Quantitative reverse transcriptase-polymerase chain reaction (qRT-PCR) analysis of psoriasis-related cytokines and the neutrophilic surface marker Ly6g in ear samples from IMQ- or vehicle-treated WT and PD-1 −/− mice at day 7 ( n =7–8 in each group). Fold changes in mRNA levels were calculated and normalized against GAPDH mRNA levels. Data are expressed as mean ± SD. Data are representative of two independent experiments. * P <0.05, ** P <0.01, and *** P <0.001 by nonparametric 2-tailed Mann–Whitney U test. Moreover, we confirmed that mice treated with anti-PD-1 blocking monoclonal antibody developed clinically and histopathologically severe IMQ-induced psoriasis-like dermatitis compared to control mice treated with isotype IgG2a control (Ctrl) antibody (Supplemental Fig. 2A–E ). These results corresponded to those of the experiments using PD-1 −/− mice. We also conducted an experiment using the B16 melanoma murine model, in which B16F10 melanoma cells were inoculated into the backs of both WT and PD-1 −/− mice, to investigate whether the presence of cancer influences PD-1 blockade-induced psoriasis-like dermatitis. First, this model did not induce psoriasis-like dermatitis either spontaneously or under vehicle cream treatment (Supplemental Fig. 3A–E ). Moreover, the presence of B16 melanoma did not lead to a significant difference in PASI score or ear swelling in WT or PD-1 −/− mice (Supplemental Fig. 3A–H ). Taken together, these data suggest that PD-1 blockade, either by genetic knockout or antibody treatment, promotes IMQ-induced psoriasis-like dermatitis, and that PD-1 blockade in the context of cancer does not increase the severity of dermatitis. PD-1 deficiency in mice results in increased epidermal infiltration of CD8 T cells with enhanced production of IFN-γ and CXC chemokine ligand (CXCL)9 IHC analysis of murine ear skin samples revealed significantly increased numbers of CD8 T cells infiltrating into the epidermis of PD-1 −/− mice and anti-PD-1 antibody-treated mice when compared to control mice (Fig. 3a and b, P = 0.008 by Mann–Whitney U test, and Supplemental Fig. 2F , P = 0.008 by Mann–Whitney U test), similar to what was seen in the patients with anti-PD-1 antibody-induced psoriasis-like dermatitis.
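Most of the pairwise group comparisons above use the nonparametric two-tailed Mann–Whitney U test. A minimal sketch in Python; the two arrays are invented for illustration and are not the study's data:

```python
from scipy.stats import mannwhitneyu

# Invented epidermal CD8 T cell counts per field for two groups of mice;
# illustrative placeholders only, not values from this study.
wt_mice = [4, 6, 5, 7, 6]        # n = 5
ko_mice = [14, 18, 16, 21, 19]   # n = 5

stat, p = mannwhitneyu(wt_mice, ko_mice, alternative="two-sided")
print(f"U = {stat}, P = {p:.3f}")  # fully separated n=5 groups give P = 0.008
```

Note that P = 0.008, which recurs in the n = 5 versus n = 5 comparisons above, is the smallest two-sided P value an exact Mann–Whitney test can produce at that sample size, corresponding to complete separation of the two groups.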
Next, qRT-PCR analysis revealed that PD-1 −/− mice have significantly higher CD8a and IFN-γ mRNA levels in CD45 + epidermal cells and CXCL9 in keratinocytes (CD45-negative epidermal cells) compared to that of WT mice (Fig. 3c , P = 0.008, P = 0.008 and P = 0.03 by Mann–Whitney U test, respectively). Fig. 3: PD-1 deficient CD8 T cells display enhanced interferon (IFN)-γ production, and IFN-γ-stimulated keratinocytes produce CXC chemokine ligand (CXCL)9 in IMQ-induced psoriasis-like dermatitis. a A schematic illustrating the protocol for processing of ear skin samples into keratinocyte, epidermal CD45 + cell and dermal cell populations. b Representative images of immunohistochemical (IHC) staining of CD8 T cells in IMQ-applied WT and PD-1 −/− mice at day 5. Scale bars, 20 μm. c The number of infiltrated CD8 T cells in the epidermis. d qRT-PCR analysis of CD8a, IFN-γ, and CXCL9. Data are from five mice per group, and are representative of two independent experiments. * P < 0.05 and ** P < 0.01 by nonparametric 2-tailed Mann–Whitney U test. PD-1 on CD8 T cells regulates the development of IMQ-induced psoriasis-like dermatitis Following 7 days of daily IMQ application, PD-1-cKO (PD-1 fl/fl CD8 Cre ) mice were found to have developed more severe IMQ-induced psoriasis-like dermatitis than littermate Ctrl (PD-1 fl/+ CD8 Cre ) mice (Fig. 4c ), when evaluated clinically by the change in ear swelling from the baseline to day 7 (17 ± 1.4 μm vs. 6.8 ± 0.75 μm, P < 0.0001 by two-way ANOVA, Fig. 4d ), PASI score at day 7 (6.3 ± 0.42 vs. 3.8 ± 0.25, P = 0.0001 by two-way ANOVA, Fig. 4e ), and pathologically by epidermal thickness at day 7 (59.5 ± 2.6 μm vs. 37.4 ± 3.7 μm, P = 0.008 by Mann–Whitney U test, Fig. 4f , g). qRT-PCR analysis revealed that PD-1-cKO mice showed significantly higher levels of both CD8a and IFN-γ mRNA in ear skin than those found in littermate Ctrl mice ( P = 0.015 and P = 0.015 by Mann–Whitney U test, respectively, Fig. 4h ). The number of CD8 T cells (CD45 + CD3 + CD8a + cells) in draining lymph nodes (dLNs) was increased, and more CD8 T cells in PD-1-cKO mice produced IFN-γ and granzyme B (GzmB) than in littermate Ctrl mice (Fig. 4i , j). Fig. 4: Clinical and histological evaluation of IMQ-induced psoriasis-like dermatitis in conditional knockout mice with PD-1 deficiency specifically in CD8 T cells. a An overview of the PD-1-floxed mouse and the breeding strategy for conditional mutation using loxP and cyclization recombinase (Cre) driving mouse lines. Specific 34-bp DNA fragments representing the loxP (locus of x-over, P1) sites were inserted across the PD-1 gene (top). Conditional knockout (cKO) mice were generated by breeding the CD8a-Cre knock-in mouse strain with the PD-1-floxed mouse strain (bottom). b Specific deletion of PD-1 expression in CD8 T cells, but not in CD4 T cells or B cells, of IMQ-treated PD-1-cKO mice (red line) compared to IMQ-treated WT littermates (blue line). Fluorescence Minus One (FMO) was used as a control (gray area). Data represent three independent experiments. c Representative clinical images of psoriasis-like dermatitis in PD-1-cKO (CD8 Cre PD-1 fl/fl ) mice and littermate control (Ctrl, CD8 Cre PD-1 fl/+ ) mice at day 7. d , e Ear swelling ( d ) and PASI score ( e ) ( n = 4 in each group). Data are representative of three independent experiments. *** P <0.001 and **** P <0.0001 by two-way ANOVA. f Representative images of HE-stained ear skin samples from IMQ-treated littermate Ctrl mice and PD-1-cKO mice at day 7. Scale bars, 100 μm.
g Epidermal thickness ( n = 5 in each group). Data are representative of three independent experiments. h qRT-PCR analysis of CD8a and IFN-γ mRNA levels in ear skin samples from IMQ-treated littermate Ctrl mice ( n = 4) and PD-1-cKO mice ( n = 5) at day 7. Fold changes in mRNA levels were normalized against GAPDH mRNA levels. i Total numbers of CD8 T cells in draining lymph nodes (dLNs) from IMQ-treated littermate Ctrl mice and PD-1-cKO mice at day 7 ( n = 5 in each group). j Representative histograms of IFN-γ and granzyme B (GzmB) production by CD8 T cells in the dLNs. The gray histograms represent negative controls. Graphs of median fluorescence intensities (MFIs) of IFN-γ and GzmB. The results are presented as means ± SDs. Data are representative of two independent experiments. * P <0.05 and ** P <0.01 by nonparametric 2-tailed Mann–Whitney U test. In summary, these in vivo results suggest that PD-1 deficiency increases the numbers of infiltrating activated cytotoxic CD8 T cells, resulting in acceleration of psoriasis-like dermatitis. Enhanced expression of cutaneous IL-6 via PD-1 deficiency in mice qRT-PCR analysis revealed that unstimulated ear skin from PD-1 −/− and WT mice contains similarly low levels of the psoriasis-related cytokines IL-6, IL-23A, and IL-17A ( P = 0.95, P = 0.57 and P = 0.21 by Mann–Whitney U test, respectively, Fig. 2g ). IMQ application significantly increased mRNA expression of IL-23A and IL-17A in both WT mice and PD-1 −/− mice ( P = 0.007 and P = 0.0002 in WT mice, and P = 0.0003 and P = 0.0003 in PD-1 −/− mice by Mann–Whitney U test, respectively, Fig. 2g ). Notably, increased IL-6 mRNA expression induced by IMQ application was observed only in PD-1 −/− mice and not in WT mice ( P = 0.0006 and P = 0.27, respectively, by Mann–Whitney U test, Fig. 2g ). Ly6g mRNA, which encodes a neutrophil surface marker, was undetectable in both groups after vehicle cream application, but was increased significantly in PD-1 −/− mice compared to WT mice after IMQ application ( P = 0.048 by Mann–Whitney U test, Fig. 2g ). In addition, these results were also confirmed using PD-1-specific blocking antibody treatment (Supplemental Fig. 2G ). Further investigations revealed a significantly higher level of IL-6 mRNA expression in the CD45-positive epidermal cells, and an increased total number of CD45-positive epidermal cells with specific infiltration of neutrophils, in PD-1 −/− mice compared to WT mice (Supplemental Fig. 4 ). Collectively, IL-6 expression, together with expression of Th17 cytokines and infiltration of neutrophils, correlates with PD-1 deficiency-enhanced IMQ-induced psoriasis-like dermatitis. Blockade of the IL-6R ameliorates PD-1 deficiency-exacerbated psoriasis-like dermatitis The increase of serum IL-6 levels post-treatment with anti-PD-1 blocking antibody implies that the pathogenesis of PD-1 signal blockade-induced psoriasis-like dermatitis is dependent on IL-6. Therefore, we employed blockade of IL-6 signaling using an anti-IL-6R blocking antibody (MR16-1) in order to assess the effects on IMQ-induced psoriasis-like dermatitis. The baseline serum levels of psoriasis-related cytokines (IL-6, IL-17A, and IL-23A) were the same between WT and PD-1 −/− mice ( n = 3). Induction of psoriasis-like dermatitis by IMQ elevated these cytokines in both IgG Ctrl-treated WT and PD-1 −/− mice, and most markedly in mice with PD-1 deficiency (Fig. 5h ).
MR16-1-treated PD-1 −/− mice showed significantly less IMQ-induced psoriasis-like dermatitis compared to IgG Ctrl-treated PD-1 −/− mice as evaluated by ear swelling (7.0 ± 0.6 μm vs. 15.7 ± 1.8 μm, P = 0.0013 by two-way ANOVA) and PASI score (4.5 ± 0.6 vs. 7.3 ± 0.8, P = 0.005 by two-way ANOVA) at day 7. Moreover, MR16-1-treated PD-1 −/− mice were clinically similar to IgG Ctrl-treated WT mice (ear swelling 7.8 ± 0.6 μm and PASI 4.3 ± 0.5; P = 0.56 and P = 0.10 by two-way ANOVA, respectively, Fig. 5a–c ). Histological analyses also revealed that MR16-1-treated PD-1 −/− mice had less severe psoriasis than IgG Ctrl-treated PD-1 −/− mice with reduced epidermal hyperplasia (38.8 ± 2.0 μm vs. 55.8 ± 1.8 μm, P = 0.01 by Mann–Whitney U test, Fig. 5d , e) and reduced numbers of epidermal neutrophilic micro-abscesses (3.2 ± 0.4 vs. 6.2 ± 0.6, P = 0.008 by Mann–Whitney U test), which was the same as IgG Ctrl-treated WT mice (34.8 ± 2.5 μm, P = 0.06; and 2.4 ± 0.4, P = 0.32 by Mann–Whitney U test, respectively, Fig. 5f ). Further, compared to IgG Ctrl-treated PD-1 −/− mice, MR16-1-treated PD-1 −/− mice presented significantly suppressed levels of psoriasis-related cytokine mRNAs IL-6, IL-17a, and IL-23a in the ear skin at day 7 ( P = 0.003, P = 0.03 and P = 0.02, respectively, by Mann–Whitney U test, Fig. 5g ), and significantly decreased serum levels of IL-17A and IL-23A ( P = 0.03 by Mann–Whitney U test) to the baseline level at day 7 (Fig. 5h ). These cytokine expression levels (IL-6, IL-17A, and IL-23A) in MR16-1-treated PD-1 −/− mice were the same as those seen in IgG Ctrl-treated WT mice ( P = 0.44, P = 0.21 and P = 0.66 in skin mRNA levels, respectively, and P = 0.57, P = 0.15 and P = 0.15 in serum levels, respectively, analyzed by Mann–Whitney U test). Fig. 5: Characteristics of anti-IL-6 receptor (IL-6R) antibody-treated IMQ-induced psoriasis-like dermatitis in PD-1 −/− mice. a Representative clinical images of anti-IL-6R antibody (MR16-1)- or IgG Ctrl-treated IMQ-induced psoriasis-like dermatitis in PD-1 −/− mice compared to IgG Ctrl-treated WT mice at day 7. b Ear swelling. c PASI score ( n = 4 in each group). Data are representative of two independent experiments. * P <0.05 and ** P <0.01 by two-way ANOVA. d Representative HE staining of ear skin samples from IgG Ctrl-treated WT mice, IgG Ctrl- or MR16-1-treated PD-1 −/− mice at day 7 ( n = 5 in each group). Scale bars, 50 μm. e Epidermal thickness ( n = 5 in each group). f The number of epidermal, neutrophilic micro-abscess ( n = 5 in each group). g qRT-PCR analysis of mRNA expression levels of psoriasis-related cytokines, IL-6, IL-23a, and IL-17a, in ear skin samples from IgG Ctrl-treated WT mice ( n = 10), IgG Ctrl-treated PD-1 −/− mice ( n = 7), and MR16-1-treated PD-1 −/− mice ( n = 9) at day 7 for IMQ application. Fold changes in mRNA levels normalized to GAPDH mRNA levels. h Multiplex, bead-based analysis of serum levels of psoriasis-related cytokines, IL-6, IL-23A, and IL-17A in IgG Ctrl-treated WT mice ( n = 4), IgG Ctrl-treated PD-1 −/− mice ( n = 4) and MR16-1-treated PD-1 −/− mice ( n = 5) at day 7 for IMQ application. Baseline (B/L) serum levels of these cytokines were also measured ( n = 3 each). In some samples, cytokines were not detected (ND). Data are expressed as mean ± SEM. Data are representative of two independent experiments. * P <0.05 and ** P <0.01 by nonparametric 2-tailed Mann–Whitney U test. 
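The time-course comparisons above (ear swelling and PASI score) use two-way ANOVA. A minimal sketch with statsmodels follows; the data frame is invented for illustration, and a full reanalysis of repeated measurements on the same mice would additionally call for a repeated-measures or mixed-effects model:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Invented ear-swelling values (um, change from baseline), two genotypes x
# two time points with n = 3 mice per cell; placeholders, not study data.
df = pd.DataFrame({
    "swelling": [3, 4, 5, 8, 9, 10,      # WT at day 3, then WT at day 7
                 7, 8, 9, 19, 21, 22],   # KO at day 3, then KO at day 7
    "genotype": ["WT"] * 6 + ["KO"] * 6,
    "day": [3, 3, 3, 7, 7, 7] * 2,
})

# Two-way ANOVA with a genotype x day interaction term.
model = smf.ols("swelling ~ C(genotype) * C(day)", data=df).fit()
print(anova_lm(model, typ=2))
```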
Furthermore, we employed blockade of IL-17A signaling using an anti-IL-17A neutralizing monoclonal antibody in order to compare its effect with that of the anti-IL-6R antibody on IMQ-induced psoriasis-like dermatitis in PD-1 −/− mice. When evaluated both clinically and histologically at day 7, treatment with anti-IL-17A antibody improved the dermatitis in both PD-1 −/− and WT mice to a level equivalent to treatment with anti-IL-6R antibody (Supplemental Fig. 5A–E ). However, anti-IL-17A antibody-treated PD-1 −/− mice showed more severe IMQ-induced psoriasis-like dermatitis than IgG Ctrl-treated WT mice, as evaluated by PASI score (4.8 ± 0.8 vs. 3.6 ± 0.5, P = 0.003 by two-way ANOVA) and ear swelling (7.4 ± 1.6 μm vs. 5.2 ± 1.4 μm, P = 0.053 by two-way ANOVA) at day 5 (Supplemental Fig. 5B ). In contrast, anti-IL-6R antibody treatment improved the dermatitis in PD-1 −/− mice earlier, at day 5 (Fig. 5b , c). These results indicate the delayed efficacy of anti-IL-17A neutralizing antibody treatment, compared to treatment with anti-IL-6R antibody, for PD-1 signal blockade-induced psoriasis-like dermatitis. Taken together, blockade of IL-6 signaling with an anti-IL-6R antibody is a potential therapeutic approach to resolve psoriasis-like dermatitis caused by inhibition of PD-1. Moreover, the response to anti-IL-6R antibody treatment appears to be faster than that to anti-IL-17A antibody treatment. Accelerated psoriasis-like dermatitis due to PD-1 deficiency on CD8 T cells can be ameliorated by treatment with an anti-IL-6R blocking antibody To investigate whether blockade of IL-6 with anti-IL-6R antibody could restrain PD-1 deficiency-induced activation of CD8 T cells, both PD-1-cKO mice and their littermate Ctrl mice were treated with either anti-IL-6R antibody (MR16-1) or isotype IgG control. MR16-1-treated PD-1-cKO mice exhibited significant improvement in clinical manifestations of IMQ-induced psoriasis-like dermatitis compared to IgG Ctrl-treated PD-1-cKO mice (ear swelling change from the baseline on day 7, 7.8 ± 1.5 μm vs. 16.3 ± 1.4 μm, P = 0.0012 by two-way ANOVA; and PASI score at day 7, 4.3 ± 0.2 vs. 8.2 ± 0.7, P < 0.0001 by two-way ANOVA). Moreover, this was to a similar level as that of IgG Ctrl-treated littermate Ctrl mice (ear swelling 6.8 ± 1.4 μm, P = 0.85 by two-way ANOVA; and PASI 3.6 ± 0.4, P = 0.66 by two-way ANOVA, Fig. 6a–c ). Histological evaluation also indicated that MR16-1-treated PD-1-cKO mice had less epidermal hyperplasia and reduced numbers of epidermis-infiltrating CD8 T cells than did IgG Ctrl-treated PD-1-cKO mice (48.7 ± 3.1 μm vs. 74.0 ± 3.7 μm, P = 0.002 by Mann–Whitney U test, Fig. 6d, e ; 20.8 ± 6.9 vs. 73.8 ± 15.2, P = 0.015 by Mann–Whitney U test). In fact, the response in MR16-1-treated PD-1-cKO mice occurred at the same level as that of IgG Ctrl-treated littermate Ctrl mice (43.7 ± 1.0 μm, P = 0.25 by Mann–Whitney U test; and 8.4 ± 2.8, P = 0.16 by Mann–Whitney U test, Fig. 6d, e ). Furthermore, MR16-1-treated PD-1-cKO mice displayed significantly suppressed numbers of CD8 T cells in the dLNs and reduced CD8a and IFN-γ mRNA levels in the ear skin samples at day 7 when compared to IgG Ctrl-treated PD-1-cKO mice ( P = 0.004, P = 0.03 and P = 0.09, respectively, by Mann–Whitney U test, Fig. 6g, h ). Fig. 6: Characteristics of anti-IL-6R antibody-treated IMQ-induced psoriasis-like dermatitis in cKO mice with PD-1 deficiency specifically in CD8 T cells.
a Representative clinical images of IgG Ctrl- or MR16-1-treated IMQ-induced psoriasis-like dermatitis in littermate Ctrl mice or PD-1-cKO mice. b Ear swelling. c PASI score. Data are representative of two independent experiments. ** P < 0.01 and **** P < 0.0001 by two-way ANOVA. d Representative histological images of HE-stained ear skin samples from these mice at day 7. Scale bars, 50 μm. e Epidermal thickness. f The number of epidermal neutrophilic micro-abscesses. g Total numbers of CD8 T cells in dLNs at day 7. h mRNA expression levels of CD8a and IFN-γ in ear skin samples at day 7. Fold changes in mRNA levels were normalized to GAPDH mRNA levels. n = 5–6 in each group. Data are expressed as mean ± SEM. Data are representative of two independent experiments. * P < 0.05, ** P < 0.01 by nonparametric 2-tailed Mann–Whitney U test. Importantly, there were no differences between MR16-1 treatment and IgG Ctrl treatment in littermate Ctrl mice, highlighting the significance of IL-6 blockade therapy for the regulation of PD-1 signal blockade-activated CD8 T cells in psoriasis-like dermatitis. Discussion The pathogenesis of cutaneous irAEs in patients treated with anti-PD-1 antibody has yet to be elucidated. However, previous reports suggest that activated proliferative intradermal CD8 T cells evoke cutaneous irAEs such as lichen planus-like dermatitis and eczematous reaction 31 , 32 . The present study highlights the importance of PD-1 expression on CD8 T cells for the regulation of psoriasis-like dermatitis. We found that CD8-positive lymphocyte infiltration into the epidermis was significantly increased in patients with anti-PD-1 antibody-induced psoriasis-like dermatitis compared to that in idiopathic psoriasis. A murine model of IMQ-induced psoriasis-like dermatitis clearly demonstrated that PD-1 deficiency accelerates infiltration of epidermal CD8 T cells with enhanced IFN-γ production in inflamed skin, and that IFN-γ-stimulated keratinocytes produce an IFN-γ-inducible chemokine (CXCL9) for recruitment of T cells. Furthermore, the newly generated cKO mice with PD-1 deficiency specifically in CD8-positive cells demonstrated more severe IMQ-induced psoriasis-like dermatitis compared to the littermate control mice. These results suggest that PD-1 regulates skin-infiltrating CD8 T cells that engage in pathogenic crosstalk with PD-L1 expressed on various cells including keratinocytes 33 . In idiopathic psoriasis, activation of conventional dendritic cells producing IL-23 leads to expansion and activation of autoreactive CD8 T cells in the dermis, which in turn acquire expression of α1β1-integrin and migrate into the epidermis. The epidermis has been identified as an ideal location for CD8 T cells to engage in pathogenic crosstalk with keratinocytes 34 , 35 . Furthermore, intra-epidermal CD8 T cells are shown to be highly pathogenic, as the accumulation of epidermal T cells parallels the increase in proliferating keratinocytes in vivo 34 . Collectively, PD-1 signal blockade-induced activation of CD8 T cells is essential to induce and accelerate anti-PD-1 antibody-induced psoriasis-like dermatitis. We also found a significant increase in the serum levels of IL-6 in patients with anti-PD-1 antibody-induced psoriasis-like dermatitis, as we had shown in our preliminary study 25 , indicating that IL-6 could play an important role during disease development and thus may be a suitable treatment target.
As expected, IMQ-induced psoriasis-like dermatitis enhanced by PD-1 deficiency in a murine model was significantly improved by the anti-IL-6R blocking antibody. These results clearly show the efficacy of IL-6–targeting therapy for psoriasis-like dermatitis exacerbated by PD-1 deficiency. One essential role of IL-6 is in the promotion of T helper 17 (Th17) cell differentiation 36 . Th17 cells were recently shown to be a main pathological cell population in idiopathic psoriasis, and blockade of IL-17A and IL-23 has been established as a treatment 37 , although IL-6 has not been established as a potential therapeutic target. Our results also demonstrate that increased production of Th17-related cytokines, such as IL-17A and IL-23A, was accelerated in PD-1 −/− mice and was significantly suppressed by IL-6 blockade, at both the tissue mRNA and serum levels, to the levels of control WT mice. IL-6 signals through IL-6Rα and the β subunit glycoprotein 130 (gp130). However, in cells that do not express IL-6Rα on their surface, such as CD8 T cells, trans-signaling, a process whereby IL-6 signaling occurs through a complex of IL-6 and a soluble form of IL-6Rα binding to ubiquitously expressed gp130 38 , is believed to occur. Thus, IL-6 trans-signaling likely plays an important role in the development of cytotoxic CD8 T cell function 39 . Therefore, it is likely that increased levels of soluble IL-6 in PD-1 −/− mice promote cytotoxic CD8 T cell function via IL-6 trans-signaling. Furthermore, our analysis of human samples from anti-PD-1 antibody-treated cancer patients revealed that the serum level of IL-6 correlates with that of sIL-6Rα in patients with anti-PD-1 antibody-induced psoriasis-like dermatitis, which would result in enhanced epidermal infiltration of CD8 T cells. Therefore, blocking this trans-signaling process with anti-IL-6R antibody might decrease the inflammation seen during PD-1 signal inhibition-provoked psoriasis-like dermatitis by impairing the promotion of CD8 T cell function. Indeed, mice with PD-1 deficiency specifically in CD8 T cells display severe psoriasis-like dermatitis, which can be restrained by blockade of IL-6 signaling. Collectively, a treatment strategy comprising selective blockade of IL-6 signaling with an anti-IL-6R blocking antibody could be effective and ideal for treating PD-1 signal blockade-induced psoriasis-like dermatitis. In fact, there have been a few case reports and a retrospective cohort study showing successful treatment of steroid-refractory irAEs with one dose of an anti-human IL-6R antibody, tocilizumab 40 , 41 , 42 , even though its use in irAEs has not yet been validated. Thus, the present study for the first time demonstrates the rationale for this treatment and the pathophysiology of IL-6 signaling in PD-1 signal inhibition-provoked autoimmunity. On the other hand, a synergistic antitumor effect has been demonstrated upon combined blockade of both IL-6 signaling and PD-1/PD-L1 pathways in tumor-bearing mice 43 , suggesting the efficacy of the dual signal blockade in terms of resolving the symptoms of irAEs without interfering with antitumor effects. Even though psoriasis-like eruptions have been reported as a paradoxical phenomenon after use of tocilizumab 44 , our experiments and the previous clinical case report 42 demonstrate that IL-6 blockade therapy during the initial phase of PD-1 signal blockade-induced psoriasis-like dermatitis may rapidly reduce the severity of the irAE and therefore result in shorter interruptions of cancer treatment.
Collectively, individuals with PD-1 signal blockade-induced psoriasis-like dermatitis can potentially benefit from IL-6-targeted therapeutic intervention, which is expected to inhibit both Th17 cell differentiation and cytotoxic CD8 T cell activation in the pathological mechanisms of this irAE. The foremost limitation of the current study is the retrospective nature of its human sample collection from a limited number of institutes. Therefore, potential biases, such as selection bias and reporting bias, cannot be excluded, and functional analysis of CD8 T cells in skin and blood has yet to be completed. In addition, there are potentially some differences in the pathogenic mechanisms of psoriasis-like dermatitis between PD-1-deficient mice and anti-PD-1 antibody-treated mice/humans. These include our results that serum levels of IL-23 were significantly elevated in anti-PD-1 antibody-treated mice with psoriasis-like dermatitis compared to control mice, while the levels were similar in PD-1-deficient mice and WT mice. Further, our study did not directly address whether PD-1 signal blockade on CD4 T cells could have some effect, as reported in a murine model of virus infection in which PD-1 signal blockade on CD4 T cells altered CD8 T-cell function 45 . Moreover, the exact source of IL-6 is yet to be determined, even though IL-6 mRNA levels in CD45-positive epidermal cells, a potential cell population identified in the current study, were significantly elevated in PD-1-deficient mice compared to WT mice. Further prospective studies are needed to clarify these findings. Despite the limitations, data from the current study highlighted the unique characteristics of PD-1 signal blockade-induced psoriasis-like dermatitis, most strikingly the strong correlation between enhanced IL-6 production and dermatitis development, indicating the potential of IL-6 targeting for therapeutic intervention. In summary, IL-6 plays important roles in the development of PD-1 signal blockade-induced psoriasis-like dermatitis. Moreover, PD-1 expressed on CD8 T cells is responsible for the regulation of skin inflammation. Blockade of IL-6 signaling decreases inflammation in PD-1 signal inhibition-provoked psoriasis-like dermatitis and, specifically, causes a reduction in the levels of Th17-related cytokines in a murine model of IMQ-induced psoriasis-like dermatitis. Thus, these findings highlight the potential of IL-6 targeting as a therapeutic intervention for PD-1 signal blockade-induced psoriasis-like dermatitis in humans. Methods Human sample collection Formalin-fixed paraffin-embedded (FFPE) skin biopsy samples were obtained from melanoma ( n = 3), renal cell carcinoma ( n = 2), gastric cancer ( n = 1) and lung cancer ( n = 1) patients with anti-PD-1 antibody-induced psoriasis-like dermatitis ( n = 7 in total), and from idiopathic psoriasis patients ( n = 6), who visited Tsukuba University Hospital (Japan) and Mito Saiseikai General Hospital (Japan) from 2014 to 2018. Serum samples were collected post-treatment from melanoma ( n = 25), renal cell carcinoma ( n = 1), and lung cancer ( n = 1) patients treated with anti-PD-1 antibody at Tsukuba University Hospital (Japan) from 2014 to 2019 ( n = 27 in total), including eight patients who developed psoriasis-like dermatitis after the treatment. Patient clinical data were retrospectively reviewed from their medical records.
Mice Wild-type (WT) C57BL/6J male mice originally from the Jackson Laboratories were purchased from Charles River Japan. PD-1-knockout (PD-1 −/− ) mice were provided by Dr. Tasuku Honjo (Kyoto University, Japan). We generated mice carrying a PD-1 allele with two loxP sites inserted to flank parts of the promoter region (PD-1 fl/fl mice) using a CRISPR-Cas9 system at the Laboratory Animal Resource Center, University of Tsukuba. PD-1 fl/fl mice develop normally, indicating that the insertion of the loxP sites does not significantly interfere with regulation of the PD-1 gene. Floxed heterozygous PD-1 fl/+ mice and heterozygous CD8 cre mice (C57BL/6-Tg(Cd8a-cre)1ltan/J, Jackson Laboratories) were crossed to generate double heterozygous PD-1 fl/+ ; CD8 cre mice, which were bred with homozygous PD-1 fl/fl mice to produce PD-1 conditional knockout (PD-1-cKO; PD-1 fl/fl CD8 cre ) mice and their PD-1 heterozygous (PD-1 fl/+ CD8 cre ) littermates (Littermate Ctrl, Fig. 3 ). The primer sequences used for genotyping of CD8 cre , PD-1 −/− , and PD-1 fl are listed in Supplemental Table 1 . We confirmed the complete deletion of PD-1 expression specifically in the CD8 T cell population (CD3 + CD8 + lymphocytes) in lymph nodes of PD-1-cKO mice by flow cytometry (Supplemental Fig. 1 ). Male mice on a C57BL/6 background, 8 to 12 weeks old, were maintained under specific pathogen-free conditions and used for all experiments. Murine model of psoriasis-like dermatitis To replicate a modified murine model of IMQ-induced psoriasis-like dermatitis 30 , 3.5% IMQ cream, diluted from 5% IMQ cream (Beselna ® ; Mochida Pharmaceuticals) with vehicle control cream (Vanicream ® ; Pharmaceutical Specialties) (62.5 mg IMQ in total, a lower dose than that used in the conventional model of IMQ-induced psoriasis-like dermatitis), was applied topically on a daily basis to the shaved back and both ears for 5 or 7 consecutive days. Control mice were treated with the vehicle control cream only. Scoring system for evaluating the severity of skin inflammation To score the severity of inflammation of the back skin, an objective scoring system mimicking the Psoriasis Area and Severity Index (PASI) score for psoriasis patients was used as in a previous study 46 , in which independent scores of erythema, scaling, and thickening, each on a scale from 0 to 4 (0, none; 1, slight; 2, moderate; 3, marked; 4, very marked), were summed (range, 0 to 12). The ear thickness was measured using a micrometer (Mitutoyo). IL-6 blockade An anti-interleukin-6 receptor (anti-IL-6R) blocking antibody (MR16-1, Chugai Pharmaceuticals), a rat IgG1 monoclonal antibody against the murine IL-6Rα chain, was injected intravenously at a dose of 2 mg per mouse prior to IMQ application on day 0. An IgG isotype antibody (MP Biomedicals) was used as a control. Histological analysis All FFPE human skin biopsy samples and murine ear skin samples were sectioned at 2 μm and 4 μm thickness, respectively, and subsequently underwent hematoxylin-eosin (HE) staining. Human FFPE skin samples were stained immunohistochemically with anti-human CD8 and anti-human CD4 monoclonal antibodies (clones C8/144B and 4B12, respectively, Nichirei Biosciences) using an automatic slide stainer according to the manufacturer's instructions. The numbers of epidermal-infiltrating cells per sample (magnification, ×400) were counted.
Murine ear FFPE samples were stained with primary anti-CD3 (clone SP7, diluted 1:100, Abcam) and anti-CD8a (clone 4SM15, diluted 1:400, eBioscience) monoclonal antibodies, fluorescence-labeled secondary antibodies (Alexa Fluor ® 488-labeled goat anti-rabbit IgG and Alexa Fluor ® 555-labeled goat anti-rat IgG, Abcam), and 4′,6-diamidino-2-phenylindole (DAPI) for nuclear detection, using standard immunohistochemical staining techniques. A fluorescence microscope (BZ-X700, Keyence) was used for observation and to count the number of infiltrating cells per sample (magnification, ×400). Blood sample assay system Murine blood samples were collected using the submandibular bleeding method, and serum samples were subsequently isolated. Human and murine serum samples were immediately stored at ≤ −20 °C for later use. Serum cytokine levels (human and murine IL-6, murine IL-23A, and murine IL-17A) were analyzed with the MILLIPLEX ® MAP Kit (Merck Millipore) on the Bio-Plex ® Luminex 200 multiplex assay system (Bio-Rad) according to the manufacturer's protocol. Human serum levels of sIL-6 and sIL-6Rα were measured using enzyme-linked immunosorbent assay (ELISA) kits (DuoSet and Quantikine; R&D Systems) according to the manufacturer's protocol. Quantitative reverse transcription-polymerase chain reaction (qRT-PCR) Total RNA was extracted from the murine ear samples using TRIzol Reagent (Invitrogen). RNA concentrations were quantified, and the OD 260/230 and OD 260/280 ratios of the RNA samples were confirmed to be greater than 1.8 and 1.6, respectively, with the NanoDrop ND-1000 (peqLab Biotechnologie GmbH). Complementary DNA (cDNA) was synthesized with a High-Capacity cDNA Reverse Transcription Kit (Thermo Fisher) according to the manufacturer's instructions. Messenger RNA (mRNA) expression levels were detected by PCR amplification of cDNA using the QuantStudio™ 5 Real-Time PCR System (Applied Biosystems) with PrimeTime ® Gene Expression Master Mix and PrimeTime ® qPCR predesigned primers (Integrated DNA Technologies) listed in Supplemental Table 1 . All qRT-PCR analyses were performed in triplicate. Amplification products were quantified by the comparative CT method. The mRNA level of each gene was normalized to that of glyceraldehyde-3-phosphate dehydrogenase ( GAPDH ). Skin separation Murine ear skin samples were treated with 0.25% trypsin (FUJIFILM Wako Pure Chemical Corporation) solution for 40 minutes at 37 °C to separate the epidermis and dermis. After washing twice with phosphate-buffered saline without Ca 2+ and Mg 2+ and passing through a 70 µm cell strainer, dissociated epidermal cells were separated into CD45 − single cells (keratinocytes) and CD45 + single cells using MACS ® cell separation technology with CD45 MicroBeads (Miltenyi Biotec) according to the manufacturer's instructions. By flow cytometry, the positively selected and negatively sorted fractions contained more than 95% and less than 1% CD45-positive cells, respectively (data not shown). Flow cytometry Draining lymph nodes (dLNs) were harvested and single-cell suspensions were prepared. For the exclusion of dead cells, the Zombie fixable viability kit (BioLegend) was used.
Cells were incubated in FACS staining buffer (PBS containing 1% BSA and 5 mM EDTA) with anti-FcγIII/II receptor antibody (BD) and anti-CD45 (30-F11, BioLegend), anti-CD4 (GK1.5, BioLegend), anti-CD8 (53-6.7, BioLegend), anti-CD3e (145-2C11, BioLegend), anti-B220 (RA3-6B2, eBioscience), and anti-PD-1 (29F.1A12, BioLegend) antibodies. For intracellular IFN-γ and Gzm B staining, cells were stimulated with 25 ng/ml PMA and 1 µg/ml ionomycin in RPMI 1640 medium supplemented with 10% fetal bovine serum, 2 mM l -glutamine, 100 U/ml penicillin, and 100 µg/ml streptomycin (complete RPMI) with monensin (GolgiStop, BD). After five hours of incubation, cell surface staining was followed by intracellular cytokine staining using the Fix/Perm Kit (BD), in accordance with the manufacturer's instructions, with anti-IFN-γ (XMG1.2, BD) and anti-Gzm B (NGZB, eBioscience) antibodies. Fluorescence-minus-one controls were used as negative controls. Cells were acquired on the Gallios (Beckman-Coulter) and data were analyzed using FlowJo software (v7.6.5). Statistics and reproducibility Differences between groups were evaluated by Student's t test, Mann–Whitney U test or two-way ANOVA using GraphPad Prism 7.0 software. A value of P < 0.05 was considered statistically significant. All experiments were repeated at least twice, and the exact sample size ( n ) for each experiment appears in the figure legends. Study approval All patients provided written, informed consent in compliance with the approval by the Institutional Ethics Committee at the University of Tsukuba Hospital (numbers: H28-045 and H30-256). All animal experiments were approved by the Animal Experiment Committee of the University of Tsukuba (Permit Number: 17-137) and performed in accordance with the Guide for the Care and Use of Laboratory Animals of the University of Tsukuba. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability Raw data for graphs can be found in Supplementary Data 1 . All other data are available within the manuscript files or from the corresponding author upon reasonable request.
Using the body's immune system to fight cancer has great potential, but can also bring serious side effects, including itchy and painful skin reactions. But now, researchers from Japan have found how these skin reactions happen, potentially leading to a way to prevent them. In a study published this month in Communications Biology, researchers from the University of Tsukuba have determined that one unpleasant side effect of immunotherapy with PD-1 inhibitors, called "anti-PD-1 antibody-induced psoriasis-like dermatitis," is caused by inflammation resulting from high levels of a specific protein. Cancer immunotherapies work through a process that allows the body's T cells to recognize and attack cancers. But because these same processes regulate inflammation, things can get out of balance. Therapies targeting PD-1 often lead to side effects called immune-related adverse events (irAEs), which happen in more than 70% of patients who take them. The most common of these is a skin reaction, and while some of these are mild and can be easily treated with steroid creams, other patients have itchy, painful, or scaly rashes requiring more intensive treatment. Nearly a fifth of patients receiving immunotherapy stop taking the treatment because of irAEs—even though the treatment may be working well against their cancer. "Inhibition of the PD-1 pathway is becoming front-line treatment for more and more cancers," says senior author Professor Naoko Okiyama. "But it can't work if patients experience adverse events and discontinue treatment because of them. We hoped that by finding out exactly how PD-1 inhibitors cause dermatitis, we could also find a way to stop it." The new study builds on earlier research from the same team, who examined blood samples from cancer patients with this side effect, finding high levels of a cell signaling protein called IL-6. Testing this theoretical connection in mice, they found that PD-1 deficiency increased numbers of a specific type of white blood cells (called CD8 T cells) infiltrating the epidermis. CD8 T cells help the immune system kill viruses and bacteria as well as cancer cells. But when activated in large numbers, they can cause an excessive immune response leading to irAEs. The experiments in mice showed that PD-1 expressed on CD8 T cells regulates skin inflammation. The mice with PD-1 deficiency had high levels of IL-6 expression and subsequently developed dermatitis. As a final step, the researchers used an antibody to block IL-6 signaling in some of these mice—and those mice developed significantly less dermatitis than the control group. "Altogether, the results clearly show the efficacy of targeting IL-6 in mice," explains Professor Okiyama. "With further study in humans, we may have a potential approach to resolving PD-1-related dermatitis." On the basis of these results, the researchers also propose that blockade of both IL-6 and PD-1 together could have an even better combined anti-cancer effect, though this has not yet been systematically studied. It's also unknown whether the approach will work as well in people as it does in mice. "Our most striking finding is the importance of PD-1 expression on CD8 T cells in the development of dermatitis, showing real potential of IL-6 as a target for therapeutic intervention," says Professor Okiyama. "But the hope is that we can implement this combined strategy without compromising the anti-tumor effects of the anti-PD-1 therapy." 
Immunotherapies for cancer treatment are still relatively new; therefore, limited information is available on their long-term side effects in comparison with older chemotherapy treatments. As increasing numbers of cancer patients are treated with anti-PD-1 immunotherapy, it will be ever more important to identify strategies to prevent or lessen these adverse events.
10.1038/s42003-020-01308-2
Chemistry
Computing with biochemical circuits made easy
Anupama J. Thubagere et al, Compiler-aided systematic construction of large-scale DNA strand displacement circuits using unpurified components, Nature Communications (2017). DOI: 10.1038/NCOMMS14373 Journal information: Nature Communications
http://dx.doi.org/10.1038/NCOMMS14373
https://phys.org/news/2017-02-biochemical-circuits-easy.html
Abstract Biochemical circuits made of rationally designed DNA molecules are proofs of concept for embedding control within complex molecular environments. They hold promise for transforming the current technologies in chemistry, biology, medicine and material science by introducing programmable and responsive behaviour to diverse molecular systems. As the transformative power of a technology depends on its accessibility, two main challenges are an automated design process and simple experimental procedures. Here we demonstrate the use of circuit design software, combined with the use of unpurified strands and simplified experimental procedures, for creating a complex DNA strand displacement circuit that consists of 78 distinct species. We develop a systematic procedure for overcoming the challenges involved in using unpurified DNA strands. We also develop a model that takes synthesis errors into consideration and semi-quantitatively reproduces the experimental data. Our methods now enable even novice researchers to successfully design and construct complex DNA strand displacement circuits. Introduction The success of computer engineering has inspired attempts to use hierarchical and systematic approaches for developing molecular devices with increasing complexity. To enable the design and construction of a wide range of functional molecular systems, we need software tools such as a compiler that can automatically translate high-level functions to low-level molecular implementations and provide models and simulations for predicting and debugging the behaviours of designed molecular systems. The mechanism of DNA strand displacement has been used to create a variety of synthetic molecular systems including circuits, motors and triggered assembly of structures 1 . Software tools have been developed for designing and analysing DNA strand displacement systems, capable of generating nucleic acid sequences from well-defined structures and molecular interactions 2 , 3 , calculating the thermodynamic 2 , 4 , 5 and kinetic 6 properties of designed molecules, and evaluating if the behaviours of the molecular systems agree with the higher-level designs 3 , 7 , 8 , 9 , 10 , 11 . There also exist a few molecular compilers that can translate abstract functions such as a logic function to DNA strand displacement implementations without requiring an understanding of the molecular level details 12 , 13 . However, there has been little independent experimental validation of these compilers, most of which were developed in parallel with or after experimental findings 12 , 14 . In addition to software tools that facilitate automated design and analysis of DNA strand displacement circuits, we also need to simplify the experimental procedures for creating these circuits in vitro , so that it is possible for researchers with diverse backgrounds to build their own circuits and explore potential applications. A great inspiration is DNA origami 15 , a technique that folds DNA into sophisticated structures. In just 10 years since its birth, DNA origami has become one of the most significant successes in the field of DNA nanotechnology. Over 170 research groups have contributed to advancing this technique or developing it for applications in a variety of research areas 16 , 17 , 18 , 19 . A fundamental reason why DNA origami was able to quickly spread around the world is that the experimental procedure is extremely simple and makes use of cheap, unpurified nucleic-acid strands. 
In contrast, other than a few very simple circuits with just one or two double-stranded components 20 , most DNA strand displacement circuits were constructed using strands that were purchased either purified or unpurified, in all cases followed by in-house polyacrylamide gel electrophoresis (PAGE) purification to reduce undesired products due to synthesis errors and stoichiometry errors 12 , 14 , 21 . Purified strands are approximately ten times more expensive than unpurified strands, which significantly increases the cost of building large-scale DNA circuits. In-house PAGE purification is both time consuming and labour intensive. In this work, we show that one can successfully build a complex DNA strand displacement circuit using DNA sequences automatically generated from a molecular compiler. We also show that one can even do so using cheap, unpurified DNA strands, following simple and systematic experimental procedures. Results Circuit design A simple DNA strand displacement motif called the seesaw gate was developed to scale up the complexity of DNA circuits 22 and was used to demonstrate digital logic computation 12 and neural network computation 23 . The Seesaw Compiler 12 , 24 was developed to automatically translate an arbitrary feedforward digital logic circuit into its equivalent seesaw DNA circuit ( Fig. 1 ). The compiler takes an input file that describes a logic circuit with a list of input and output terminals, and a list of AND, OR, NOT, NAND and NOR gates with the connectivity of their terminals specified. First, a technique called dual-rail logic is applied to translate the original logic circuit into an equivalent circuit that contains AND and OR gates only 25 . This is because the NOT gate cannot be directly implemented in multi-layer use-once DNA circuits if the OFF and ON states of a signal are represented by low and high concentrations of a single DNA strand, respectively. If a NOT gate were implemented this way, then output molecules of the gate could be immediately produced in the absence of input. However, once this reaction reaches equilibrium it cannot be reversed, even if input molecules are added at a later point. With dual-rail logic, each terminal in the original circuit is replaced by two terminals, representing the OFF and ON states of a signal separately (for example, each input signal x i is replaced by x i 0 and x i 1 ). Thus, no reaction will take place until signal molecules on one of the two wires have arrived. With this representation, the NOT gate can be implemented by exchanging the two wires of an input and output signal. Each AND, OR, NAND and NOR gate in the original circuit is replaced by a pair of AND and OR gates, as illustrated in the code sketch following this paragraph. Figure 1: Automated circuit design steps using the Seesaw Compiler. A feedforward digital logic circuit is first translated into an equivalent dual-rail logic circuit and then translated into an equivalent seesaw DNA circuit. Visual DSD code and Mathematica code are generated for analysing and simulating the seesaw DNA circuit, and DNA sequences are generated for constructing the circuit. The bottom right diagram introduces the notation of seesaw circuits: black numbers indicate identities of nodes. The locations and values of red numbers indicate the identities of distinct DNA species and their relative initial concentrations, respectively. Full size image Next, the compiler translates the dual-rail logic circuit into an equivalent seesaw DNA circuit.
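To make the dual-rail step concrete, here is a minimal Python sketch of the translation described in the preceding paragraph. The gate-list format, the WIRE bookkeeping entry and the function names are our own inventions for illustration; they are not the Seesaw Compiler's internal representation.

    # Minimal sketch of the dual-rail translation: each wire w is split into
    # (w_0, w_1), its OFF and ON rails; NOT becomes a rail swap, and every
    # AND/OR/NAND/NOR gate becomes one AND gate plus one OR gate.

    def rails(w):
        """Return the (OFF, ON) rail names for wire w."""
        return w + "_0", w + "_1"

    def to_dual_rail(gates):
        """gates: list of (kind, [inputs], output) with kind in
        {"AND", "OR", "NOT", "NAND", "NOR"}. Returns a gate list
        containing only AND and OR gates (plus WIRE renamings for NOT)."""
        out = []
        for kind, ins, y in gates:
            y0, y1 = rails(y)
            in0 = [rails(x)[0] for x in ins]   # OFF rails of the inputs
            in1 = [rails(x)[1] for x in ins]   # ON rails of the inputs
            if kind == "NOT":
                # NOT is free in dual-rail logic: exchange the two rails.
                x0, x1 = rails(ins[0])
                out.append(("WIRE", [x1], y0))
                out.append(("WIRE", [x0], y1))
                continue
            if kind in ("NAND", "NOR"):
                # NAND = NOT(AND), NOR = NOT(OR): swap the output rails,
                # then fall through to the plain AND/OR translation.
                y0, y1 = y1, y0
                kind = kind[1:]                 # "AND" or "OR"
            if kind == "AND":
                out.append(("AND", in1, y1))    # all inputs ON
                out.append(("OR",  in0, y0))    # any input OFF
            else:                               # "OR"
                out.append(("OR",  in1, y1))    # any input ON
                out.append(("AND", in0, y0))    # all inputs OFF
        return out

    # Example: a two-input NAND yields exactly one AND and one OR gate.
    print(to_dual_rail([("NAND", ["L", "C"], "y")]))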
In a seesaw DNA circuit, each signal is defined as a wire w j , i connecting seesaw nodes j and i , and implemented using a single-stranded DNA molecule. Each AND and OR gate in the dual-rail circuit is replaced by a seesaw AND and OR gate, respectively, which is defined as a pair of integrating and amplifying seesaw nodes connected with a set of input and output wires 12 . The seesaw nodes are composed of double-stranded threshold and gate:output molecules and single-stranded fuel molecules ( Fig. 1 , bottom right). We will explain how the seesaw logic gates work in the next section. Input fan-out gates are introduced to take an input signal that is used for multiple logic gates and produce the corresponding number of output signals. Reporters are introduced to take each output signal and generate a distinct fluorescence signal for readout. Finally, the compiler generates Visual DSD 3 , 26 code and Mathematica code for simulating and analysing the seesaw DNA circuit and a file that contains DNA sequences for all molecular species in the circuit. The Visual DSD code can be used to automatically produce diagrams of species, reactions and network graphs with domain-level representation of DNA and to simulate the circuit behaviour based on the network of chemical reactions. The Mathematica code provides more customized and efficient simulations of seesaw circuits. The simulation uses the CRNSimulator package 27 and models a specific set of side reactions in addition to the designed reactions in a seesaw network 12 . As a demonstration of using the Seesaw Compiler, we designed a single DNA strand displacement circuit that implements two distinct elementary cellular automata transition functions. An elementary cellular automaton (CA) is one of the simplest models of computation 28 . It consists of a one-dimensional grid of cells, collectively called a generation, where each cell has a binary state of 0 or 1. In each subsequent generation, the state for a cell C is determined by its current state and those of its left neighbour L and right neighbour R . A state transition rule maps each of the 2³ = 8 possible combinations of states for L , C and R to either 0 or 1. Thus, a length 8 binary string uniquely identifies one of the 2⁸ = 256 possible transition functions that specify how an elementary CA will evolve between generations. The rule 110 elementary CA (binary 01101110, which is 110 in decimal) is famously known to be Turing universal 29 . Another rule that is equally powerful is rule 124 (binary 01111100, which is 124 in decimal), generated by applying the following mirror transformation: the new state of the centre cell for LCR = zyx in rule 124 is the same as the new state for LCR = xyz in rule 110. Our circuit was designed to compute a combined logic function of the two transition rules ( Fig. 2a ). It consists of five logic gates in two layers, including a three-input two-output NAND gate. It is noteworthy that we designed the circuit to demonstrate an interesting logic function associated with cellular automata and not to implement the actual cellular automata model. The circuit operates in a well-mixed test tube environment that does not involve spatial dynamics (that is, no geometry of cells). Figure 2: Design of a rule 110–124 circuit using the Seesaw Compiler. ( a ) Gate diagram and truth table of a digital logic circuit that computes the transition rules 110 and 124 of elementary cellular automata. ( b ) Seesaw gate diagram of the equivalent DNA strand displacement circuit.
Each seesaw node connected to a dual-rail input implements input fan-out. Each pair of seesaw nodes labelled AND or OR implements a dual-rail AND or OR gate, respectively. Each pair of dual-rail AND and OR gates implements an AND, OR or NAND gate in the original logic circuit. Each dual-rail output is converted to a fluorescence signal through a reporter, indicated as a half node with a zigzag arrow. Each circle and dot inside a seesaw node indicates a double-stranded threshold and gate molecule, respectively. Each dot on a wire indicates a single-stranded fuel molecule. ( c ) Simulations of the DNA strand displacement circuit using the previously developed model for purified seesaw circuits. Trajectories and their corresponding outputs have matching colours. Overlapping trajectories were shifted to be visible. Dotted and solid lines indicate dual-rail outputs that represent logic OFF and ON, respectively. For example, when input LCR = 001, meaning L 0 , C 0 and R 1 were introduced at a high concentration and L 1 , C 1 and R 0 at a low concentration, two output trajectories R 124 0 and R 110 1 reached an ON state and the other two output trajectories R 124 1 and R 110 0 remained in an OFF state, indicating that the output was computed to be 0 and 1 for rule 124 and 110, respectively. Simulations were performed at 1 × = 50 nM—the compiler-recommended standard concentration for large-scale purified seesaw circuits. Full size image The DNA circuit generated by the Seesaw Compiler consisted of 6 layers and a total of 78 distinct initial DNA species ( Fig. 2b and Supplementary Fig. 1 ). Mathematica simulations of the DNA circuit predicted correct computation for all 8 possible input combinations under ideal experimental conditions ( Fig. 2c ). The next step was to construct the DNA circuit using strands that were purchased unpurified and with no additional in-house purification. We expected that the main challenges would be to understand how synthesis errors and stoichiometry errors affect the behaviours of DNA circuits and to explore solutions that restore the desired circuit behaviour. We took a bottom-up approach and began building the DNA circuit from the simplest functional component—digital signal restoration. Calibrating effective concentrations Digital signal restoration is a process that pushes the intrinsically analog signal towards either the ideal ON or OFF state, thereby cleaning up noise and compensating for the signal decay that occurs during circuit execution. In seesaw circuits, digital signal restoration is a component of every logic gate, and is implemented by an amplifying seesaw node with the following idealized input–output function: the output is 0 if the input is below the threshold, and 1 (the standard concentration) if the input is above the threshold. At the molecular level, the digital signal restoration process consists of two basic reactions: catalysis and thresholding. Catalysis is implemented with two toehold exchange pathways that release free output strands w i , k from double-stranded gate molecules G i : i , k , using the input strands w j , i as a catalyst ( Supplementary Fig. 2a ): w j , i + G i : i , k ⇌ G j , i : i + w i , k (reaction 2). Catalysis can be used for signal amplification, since a small amount of input can trigger the release of a much larger amount of output. Thresholding is implemented with double-stranded threshold molecules Th j , i : i consuming the input at a much faster rate ( k f ≫ k s ) than the rate at which the input acts as a catalyst ( Supplementary Fig. 2b ): w j , i + Th j , i : i → waste (reaction 3). As shown in simulations generated using the Seesaw Compiler ( Fig.
3a ), when the concentration of the threshold molecule is 0.5 × (where 1 × is a standard concentration of 100 nM), we expect that input less than the threshold (for example, 0.3 × ) should be cleaned up to an ideal OFF state via reaction 3 and input greater than the threshold (for example, 0.7 × ) should be amplified to an ideal ON state via reaction 2. However, the observed circuit behaviour was different: when input = 0.7 × , the output signal was higher than an ideal OFF state, but did not reach an ideal ON state ( Fig. 3b ). This experimental result suggested that the input did not sufficiently exceed the threshold, which was an indication that the effective concentration of an unpurified threshold species, compared with that of an unpurified signal species, was higher than expected. Figure 3: Calibrating effective concentrations. ( a ) Simulations and ( b ) experimental data of digital signal restoration. ( c ) Estimating effective threshold concentration by fitting simulations to the data obtained. ( d ) OR and AND logic gates constructed using adjusted nominal threshold concentrations. ( e ) Estimating effective gate concentration. Data show steady-state fluorescence levels. 1 × = 100 nM. Here and in later figures, all output signals in the data were normalized using the minimum fluorescence signal (the first data point) of an OFF trajectory as 0 and the maximum fluorescence signal (the average of the last five data points) of an ON trajectory as 1. Full size image The nominal concentration of a DNA species can be measured using ultraviolet absorbance, but it can be higher than the effective concentration, which is the concentration of the DNA species actually performing the desired reactions. If the sequences of the DNA strands are properly designed, the difference between nominal concentration and effective concentration is typically caused by synthesis errors including nucleotide insertion, deletion and mismatch. To calibrate the effective concentrations of unpurified DNA molecules, we defined the following ratios between effective (eff) and nominal (nom) concentrations of an arbitrary signal, threshold and gate species: α = [signal] eff /[signal] nom , β = [threshold] eff /[threshold] nom and γ = [gate] eff /[gate] nom . The effective to nominal concentration ratio of a DNA species cannot be measured in isolation. More importantly, the absolute values of α , β and γ should only affect the speed but not the correctness of computation, if the values remain comparable to each other. Thus, we chose to estimate the ratio between β and α for a threshold consuming a signal, by comparing simulation with the experimental result of a signal restoration circuit. For example, manipulating the threshold value in simulation (sim) identified that a simulated threshold of 0.7 × agreed with the experimental data ( Fig. 3c ), which means the effective concentration of the 0.5 × nominal threshold was similar to that of a 0.7 × signal. Thus, the threshold to signal ratio can be calculated as β / α = 0.7/0.5 = 1.4. A possible explanation for an unpurified threshold having a higher effective concentration than an unpurified signal, when the nominal concentrations are the same, is the following: the synthesis errors of an unpurified strand depend on the length of the strand, because in the process of chemical synthesis each nucleotide is attached to a growing oligonucleotide chain one at a time and the coupling efficiency of each step is less than 100% (ref. 30 ). Threshold molecules are composed of shorter strands (15 and 25 nucleotides) than signal molecules (33 nucleotides) and thus may contain fewer synthesis errors.
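To make the calibration arithmetic just described concrete, the following minimal Python sketch reproduces the numbers above, assuming the ratio definitions as reconstructed. The function names are our own, not part of the compiler or the paper's analysis code.

    # Effective-to-nominal calibration for unpurified seesaw components.
    # alpha = [signal]_eff/[signal]_nom, beta = [threshold]_eff/[threshold]_nom.

    def threshold_to_signal_ratio(th_nominal, th_simulated):
        """beta/alpha, estimated by finding the simulated threshold value
        that reproduces data measured with a given nominal threshold."""
        return th_simulated / th_nominal

    beta_over_alpha = threshold_to_signal_ratio(0.5, 0.7)
    print(beta_over_alpha)  # 1.4, the ratio reported in the text

    def adjusted_nominal_threshold(th_ideal, beta_over_alpha):
        """Scale an ideal threshold so its *effective* value matches the
        design. Note the paper actually chooses values within derived
        lower/upper bounds; simple division reproduces, for example, the
        two-input AND choice: 1.2 / 1.4 = 0.86, close to the 0.85x used."""
        return th_ideal / beta_over_alpha

    print(adjusted_nominal_threshold(1.2, beta_over_alpha))  # ~0.857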
Additional signal restoration experiments suggested that the threshold to signal ratio β / α = 1.4 was consistent for different threshold and signal molecules ( Supplementary Fig. 3 ). Thus, using this ratio, we can calculate how to adjust the nominal thresholds for correctly computing logic AND and OR. Each seesaw logic gate has an integrating node upstream of an amplifying node. Ideally, an integrating node outputs the sum of all its inputs. A two-input logic function can then be computed by thresholding this sum: the output is ON if the sum of the inputs exceeds the threshold th and OFF otherwise. Assuming that an ideal OFF state is [0, 0.2] and an ideal ON state is [0.8, 1], th = 0.6 will compute logic OR and th = 1.2 will compute logic AND, if the effective concentrations of the threshold and input signals are comparable to each other (that is, β / α = 1). As β / α ≠ 1 for unpurified threshold and signal molecules, we can take this ratio into consideration while calculating the lower and upper bounds of the nominal threshold for an n -input logic gate. Using β / α = 1.4, we chose a nominal threshold of 0.35 × and 0.85 × for the two-input OR and AND gates, respectively, and 0.4 × and 1.6 × for the three-input OR and AND gates. Experiments with the logic gates showed the desired behaviours ( Fig. 3d and Supplementary Fig. 4 ). An alternative approach for adjusting the nominal threshold is to compute it with a single formula rather than choosing it between the lower and upper bounds; this approach is less flexible but simpler. Next, we can estimate the ratio between γ and α for a gate releasing a signal, using an experiment that compares the fully triggered (tri) concentration of the gate with that of the signal when their nominal concentrations are the same. For example, the data in Fig. 3e showed that the fully triggered gate concentration was 0.8 times that of the signal. Thus, the gate to signal ratio can be calculated as γ / α = 0.8. Additional gate calibration experiments suggested that the ratio γ / α = 0.8 was consistent for different gate and signal molecules ( Supplementary Fig. 5 ). We suspect that, due to synthesis errors in gate molecules, not all gates can successfully release a signal, which is why an unpurified gate has a lower effective concentration compared to a signal. As signal restoration was built into every logic gate to accept an ON state of [0.8, 1], we decided not to make any adjustment to nominal gate concentrations if γ / α ≥ 0.8. Otherwise, the nominal concentrations of an amplifying gate and an n -input integrating gate can be increased accordingly to compensate. Importantly, the values of α , β and γ should depend on the strand quality and thus could vary with different DNA synthesis providers, procedures and even batches. It is necessary to recalculate the ratios β / α and γ / α if these conditions change. Identifying outliers With calibrated logic gates, we investigated how well they compose together in larger circuits. We constructed a two-layer logic circuit that is part of the rule 124 sub-circuit and is composed of an AND gate and two upstream OR gates ( Fig. 4a ). The expected circuit behaviour is that the output should remain OFF when only one of the upstream OR gates is ON. However, the observed circuit behaviour showed that the output was reasonably OFF when one upstream OR gate was ON, but was half ON when the other upstream OR gate was ON.
This experimental result suggested that the ON signals pushed onto the two input wires of the downstream AND gate (that is, the output wires of the two upstream OR gates) were significantly different from each other, which was an indication that the effective concentrations of the two unpurified gate species that released the output signals were different—one of the gates must be an outlier with γ / α ≠ 0.8. Figure 4: Identifying an outlier gate. ( a ) Logic circuit diagram, seesaw circuit diagram and experimental data of a two-layer logic circuit. ( b ) Measuring the effective concentrations of the gate species. Three independent circuits were used to measure the effective concentrations of two gates fully triggered by x 1 and x 2 , respectively, compared with the effective concentration of x 3 (using signal strand w 18,53 ). ( c ) Experimental data of the two-layer logic circuit using the adjusted nominal gate concentration. 1 × = 100 nM. Full size image Indeed, with the gate calibration experiment shown in Fig. 4b , we measured γ 18,53 / α 18,53 = 0.8 for one gate and γ 22,53 / α 22,53 = 0.44 for another. A possible explanation is that the synthesis errors of unpurified strands somewhat depend on DNA sequences 30 and variations in effective concentrations may occur between different gate or threshold species. We suspect it was not a coincidence that the outlier gate had a lower effective concentration compared with other unpurified gates, because a particular DNA strand having much worse quality than average is probably more likely than it having much better quality. Once an outlier is identified, either a threshold or a gate, its nominal concentration can be adjusted using its own threshold to signal ratio (that is, β / α ) or gate to signal ratio (that is, γ / α ), the common nominal concentration described in the previous section, and the common ratio for the other thresholds and gates. We constructed the two-layer logic circuit using the adjusted nominal gate concentration ( Fig. 4c ). The trajectories that compute logic ON reached an ideal high fluorescence state faster than in the previous experiments shown in Fig. 4a , and the trajectories that compute logic OFF remained at lower fluorescence states that were roughly identical for all three input combinations, regardless of which upstream OR gate was ON. However, after identifying and adjusting the outlier gate, we still had a problem: the OFF trajectories were not at an ideal low fluorescence state. This led to the next tuning step that is necessary for unpurified seesaw circuits. Tuning circuit output Comparing the behaviour of the AND gate in isolation ( Fig. 3d ) with that when it was connected with two upstream OR gates ( Fig. 4c ), the ON/OFF separation was significantly decreased in the latter. These experimental results suggest that, compared with purified seesaw DNA circuits, in which the ON/OFF separations were roughly identical from a single logic gate to four-layer logic circuits 12 , unpurified circuits are much noisier and the behaviour becomes less robust with more than one layer. We suspect this is caused by the stoichiometry errors in unpurified gate species. The double-stranded gate molecules were annealed with the same amount of top and bottom strands, because both strands have combinations of toehold and branch migration domains that can cause undesired interactions with other circuit components and thus neither should be in excess.
However, due to variations in pipetting volumes and in the accuracy of concentrations, equal stoichiometry cannot be guaranteed. Without purification, a small excess of one strand or another in the gate species cannot be removed. This excess of strands would result in undesired release of output signals in logic gates, even without input signals, and introduce extra noise to downstream logic gates. Fortunately, thanks to the thresholding function in every logic gate, we can tune the circuit output by increasing a threshold. A simple method for estimating how much threshold adjustment is needed is based on the ON/OFF separation of the circuit output. Using experimental data of a logic circuit with different inputs, we can choose a trajectory that should compute logic ON and one that should compute logic OFF, and calculate the difference ( δ ) between the observed OFF value and an ideal OFF value when the ON trajectory reaches an ideal ON value. Considering 0.7 and 0.3 as the lower bounds and 0.9 and 0.1 as the upper bounds for an ideal ON/OFF separation, the range of δ can be determined: it runs from the observed OFF value minus 0.3 when the ON trajectory reaches 0.7, to the observed OFF value minus 0.1 when the ON trajectory reaches 0.9. The nominal threshold in the logic gate that produces the circuit output can then be increased by δ × α / β accordingly. Using the data of the two-layer logic circuit shown in Fig. 4c , we chose the trajectories with input = 01010 and 11100 as the reference ON and OFF trajectories, respectively, and calculated 0.08 ≤ δ ≤ 0.41. We then increased the threshold in the downstream AND gate accordingly and repeated the experiment. The circuit behaviour was improved, with a much better ON/OFF separation ( Fig. 5a ). Figure 5: Tuning circuit output. Logic circuit diagram, seesaw circuit diagram and experimental data of a two-layer logic circuit with ( a ) two upstream OR gates connected to a downstream AND gate and ( b ) two upstream AND gates connected to a downstream OR gate. Nominal concentrations shown in grey and black indicate adjustments made in a previous step and in this step, respectively. Small insets of experimental data show the circuit behaviours before adjustments. 1 × = 100 nM. Full size image With the same method, we constructed another two-layer logic circuit that is composed of an OR gate and two upstream AND gates ( Fig. 5b ). In this case, using input = 00011 and 01110 as the reference ON and OFF trajectories, we obtained a similar range of δ and decided to apply the same amount of increase to the threshold in the downstream OR gate. It is noteworthy that a rule of thumb is to choose the slowest ON trajectory and the fastest OFF trajectory as the references for threshold adjustment, but different choices can be made if one knows which data set is experimentally more reliable. Also note that increasing the threshold not only suppresses the OFF trajectories but also slows down the ON trajectories, and thus this method of tuning the circuit output is only applicable if all ON trajectories are significantly faster than all OFF trajectories (which should be true if the thresholds and gates are properly calibrated). Combining the two logic circuits shown in Fig. 5 and adding fan-out gates for input signals that are used in multiple logic gates, we successfully demonstrated the rule 124 sub-circuit consisting of 54 distinct DNA species ( Supplementary Fig. 6 ).
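The output-tuning step likewise reduces to simple arithmetic. Below is a hedged Python sketch of the δ estimate and threshold increase as described above; the observed OFF values passed in (0.38 and 0.51) are illustrative numbers chosen only so that the sketch reproduces the published range 0.08 ≤ δ ≤ 0.41.

    # Tuning the circuit output by raising the threshold of the output gate.

    def delta_range(off_at_on_07, off_at_on_09):
        """Bounds on delta: observed OFF value minus the ideal OFF bound
        (0.3 or 0.1) when the reference ON trajectory reaches 0.7 or 0.9."""
        return off_at_on_07 - 0.3, off_at_on_09 - 0.1

    def tuned_threshold(th_nominal, delta, beta_over_alpha=1.4):
        """Raise the nominal threshold by delta * alpha/beta so that the
        effective threshold rises by delta (the adjustment in the text)."""
        return th_nominal + delta / beta_over_alpha

    lo, hi = delta_range(0.38, 0.51)
    print(round(lo, 2), round(hi, 2))           # 0.08 0.41
    print(round(tuned_threshold(0.85, hi), 2))  # raise the AND threshold to ~1.14x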
We do not have evidence of how well unpurified circuits with multiple layers can be constructed, but we suspect that with the same amount of threshold increase (that is, δ × α / β ) in all logic gates at layer two and above, undesired signals released from upstream gates can be effectively suppressed at every layer without accumulating over an increasing number of layers. Systematic procedure Starting from the calibration of effective concentrations for threshold and gate species in general, to the identification and adjustment of any outliers, and then to the final tuning of circuit output, we established three sequential steps for building unpurified seesaw circuits. To make these steps easy to follow, we now further describe a systematic procedure and evaluate it by constructing a new logic circuit from scratch—the rule 110 sub-circuit. We summarized the procedure in a flowchart ( Fig. 6 ). It starts with constructing the simplest functional component, digital signal restoration, and estimating the effective threshold concentration relative to that of a signal. If the threshold to signal ratio β / α > 1.2, adjust the nominal thresholds in all logic gates. Next, construct a single logic gate. If it fails to compute correctly, it indicates that the threshold species in this logic gate is an outlier, and thus one needs to go back to the first step and repeat the process to calibrate this particular threshold. Otherwise, move on to gate calibration experiments. If the gate to signal ratio γ / α < 0.8, adjust all nominal gate concentrations. Figure 6: Flowchart for building seesaw DNA circuits using unpurified components. Insets show how the flowchart was used to construct the rule 110 sub-circuit. Y (yes) and N (no) highlighted in orange in the flowchart indicate the situations encountered and decisions made while building the rule 110 sub-circuit. 1 × = 100 nM. Full size image Then construct a two-layer logic circuit, and identify whether there exists an outlier gate. If so, repeat the process to calibrate this particular gate. At this point, the circuit still may not exhibit the desired ON/OFF separation (for example, the OFF trajectories may be higher than 0.3 when the ON trajectories reach 0.7). However, if the ON trajectories are significantly faster than the OFF trajectories, increase the nominal threshold in the logic gate that directly produces the circuit output to tune the circuit behaviour. Continue to construct a larger circuit. If it fails to compute correctly, the most likely reason would be a new outlier gate. Identify the outlier based on the cases where the ON/OFF separation is worst, and repeat the steps for calibrating the gate accordingly. Following the flowchart, we completed the construction of the rule 110 sub-circuit in only 3 days ( Fig. 6 ). If all components had been PAGE purified, incrementally building the circuit would have required at least one additional day for each new experiment, assuming no experimental errors; the turnaround time would have been significantly increased. Combining the components from both the rule 110 and rule 124 sub-circuits, using shared input fan-out gates and a three-input NAND gate ( Fig. 2a, b ), the full rule 110–124 circuit consisting of 78 distinct DNA species was constructed in one test tube. The fluorescence kinetics experiments showed correct ON and OFF states of the two pairs of dual-rail outputs for all eight possible inputs ( Fig. 7a ).
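Since the target function of the full circuit is just the pair of cellular automaton transition rules, its ideal truth table, and the mirror relation between rules 110 and 124 stated earlier, can be checked with a short self-contained Python sketch (our illustration; the function name is invented):

    # Ideal logic of the rule 110-124 circuit: an elementary CA rule's
    # 8-bit binary expansion maps each (L, C, R) neighbourhood to the next
    # state of the centre cell.

    def transition(rule, L, C, R):
        """Next centre-cell state under an elementary CA rule (0-255)."""
        index = (L << 2) | (C << 1) | R   # neighbourhood read as a 3-bit number
        return (rule >> index) & 1

    for L in (0, 1):
        for C in (0, 1):
            for R in (0, 1):
                # Rule 124 is the mirror of rule 110: swapping L and R in
                # one rule reproduces the other, as stated in the main text.
                assert transition(124, L, C, R) == transition(110, R, C, L)
                print(L, C, R, "->", transition(110, L, C, R), transition(124, L, C, R))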
To pictorially compare the ideal logic behaviour and the DNA circuit behaviour, we plotted each output into an array that represents eight cellular automata generations ( Fig. 7b ). The ideal logic circuit behaviour corresponds to four images of dogs. The DNA circuit behaviour yielded less contrast between the dogs and their backgrounds, but the patterns were still clearly recognizable. Figure 7: Implementing the rule 110–124 full circuit. ( a ) Fluorescence kinetics data of the two pairs of dual-rail outputs. 1 × = 100 nM. All DNA sequences are listed in Supplementary Table 1 . ( b ) Comparing the ideal logic circuit behaviour (left) with the DNA circuit behaviour (right). Each of the circuit outputs is illustrated by an array of 7 × 8 cells, representative of eight cellular automata generations on a torus with starting configuration (0,0,0,1,0,0,0). The arrays for the DNA circuit were plotted using the output values at 24 h from the data. The ideal logic circuit behaviour corresponds to an image of a black dog with a white background for R 124 1 , an inverted image for R 124 0 and their mirror images for R 110 1 and R 110 0 , respectively. Full size image Modelling Even though the experiments were performed at a higher concentration (that is, 1 × = 100 nM), the rule 110–124 circuit computed much more slowly than the simulations predicted for 1 × = 50 nM ( Fig. 2c ). We suspect that the difference was caused by the impurity of the molecules. To better predict the behaviour of seesaw circuits using unpurified components, we developed a model that takes synthesis errors into consideration. We first define the probability of having n errors in a chemically synthesized DNA strand of l bases, given that r is the probability of a synthesis error per base: P( n ) = C( l , n ) r^ n (1 − r )^( l − n ), where C( l , n ) is the binomial coefficient. We then calculate the populations of signal, gate and threshold molecules with and without synthesis errors ( Fig. 8a ). To make the model simple enough, yet accurate enough to describe reactions that involve molecules with synthesis errors at distinct locations, we treat the very small population of molecules with more than one synthesis error as non-reactive, and classify the remaining molecules containing a single synthesis error based on the domain where the error occurs. For example, a signal strand is composed of two branch migration domains flanking a toehold domain. Given that a branch migration domain has 15 bases and a toehold domain has 5 bases, the probability of a signal strand having s errors in a specific branch migration domain (and thus none in the other) and t errors in the toehold domain can be calculated as P( s , t ) = C(15, s ) r^ s (1 − r )^(15 − s ) × C(5, t ) r^ t (1 − r )^(5 − t ) × (1 − r )^15, where the last factor accounts for the error-free second branch migration domain. Figure 8: A model for unpurified seesaw circuits. ( a ) Populations of signal, gate and threshold molecules without and with synthesis errors in the marked locations. r = 0.01. ( b ) Example reactions that involve DNA strands without and with synthesis errors. ∀ i , j , k , x and y . Full size image In a previous study on the robustness of a catalytic DNA strand displacement motif 21 , a single base mutation in an invading strand significantly slowed down (on the scale of 100-fold) a reversible strand displacement reaction that was designed with Δ G ° ≈ 0, both when the mutation was in the toehold and when it was in the branch migration domain. In contrast, an irreversible strand displacement reaction was only slowed down significantly (also on the scale of 100-fold) when the mutation was in the toehold domain, but the reaction rate remained roughly unchanged when the mutation was in the branch migration domain.
These observations lead us to the following interpretations: a synthesis error in the toehold domain can slow down strand displacement by increasing the dissociation rate of the toehold and thus decreasing the overall reaction rate; a synthesis error in the branch migration domain can also slow down strand displacement, but only when the energy change caused by the synthesis error is significant compared to the designed standard free energy of the reaction, and not when the reaction is already strongly favoured in one direction. Based on these interpretations, we estimated the rates of all five types of reactions in a seesaw network, involving all populations of defective molecules ( Fig. 8b and Supplementary Note 1 ). We first simulated the rule 110–124 circuit assuming that no molecules have synthesis errors, at the concentrations used in the experiments ( Fig. 9a ). Using exactly the same concentrations for all species, and the same rate parameters for reactions that are not affected by synthesis errors, we then simulated the circuit with each species divided into multiple populations, including those with synthesis errors ( Fig. 9b ). The results of these two simulations were dramatically different: only the latter exhibited a remarkable degree of agreement with the data shown in Fig. 7a . Figure 9: Simulations comparing the purified and unpurified models. ( a ) Simulations of the rule 110–124 circuit using the previously developed model for purified seesaw circuits, predicting that the circuit should yield the desired outputs in roughly 8 h (shown as dotted lines) and that the undesired reactions will take over in 24 h. ( b ) Simulations using the new model for unpurified seesaw circuits, predicting that the circuit should yield the desired outputs in roughly 24 h. k f = 2 × 10⁶ M⁻¹ s⁻¹, k s = 5 × 10⁴ M⁻¹ s⁻¹, k l = 10 M⁻¹ s⁻¹, k rf = 26 s⁻¹, k rs = 1.3 s⁻¹. 1 × = 100 nM. Full size image Discussion The biggest challenge that could prevent a molecular compiler from working in practice is that a new circuit may require new molecular components, which may not behave the same as the ones previously characterized. Thus, what made it possible to build a new complex circuit using the Seesaw Compiler? First, there are only three types of molecular components (signal, gate and threshold) for arbitrary feedforward logic circuits, which yield highly predictable circuit behaviour. Second, because of the simplicity of the molecules, there is minimal sequence design challenge. A three-letter code (A, T and C) for all signal strands is sufficient to eliminate undesired reactions. Finally, exact kinetics is not essential for qualitatively correct computation, and thus small differences caused by DNA sequences should not affect the desired circuit behaviour. On the other hand, the biggest challenge that could prevent us from using unpurified DNA strands is that the synthesis errors may lead to completely unpredictable molecular behaviours. Thus, what made it possible to build a complex circuit using unpurified strands? First, the Seesaw Compiler provides simulations as a debugging tool and makes it straightforward to identify problems caused by the synthesis errors. Second, again because there are only three types of species, it is relatively easy to understand the behaviours of defective molecules, as we expect similar synthesis quality across distinct species of the same type.
More importantly, the signal restoration built in to every logic gate allows simple tuning to restore desired circuit behaviour, compensating for the impurity of molecules. In general, there are several factors that we find important for the goals of producing a better molecular compiler, and implementing unpurified DNA circuits with more robust behaviours. Given that it is difficult to obtain fully predictable behaviour for newly designed molecular components, alternative architectures that enable arbitrary circuits to be created from a constant number of molecules will likely promote the development of compilers that work reliably in these contexts 31 . It is also necessary to eliminate leak reactions in DNA circuits 32 and to improve the building blocks such that they are substantially less sensitive to synthesis errors and stoichiometry errors. Nonetheless, with an experimental validation of the Seesaw Compiler and simplified experimental procedures using unpurified DNA strands described in this work, it is now possible to imagine a near future in which a molecular compiler can generate protocols from a high-level circuit function, and the protocols can then be executed by a liquid handling robot. Molecular engineers typing away on a computer to create biochemical circuits in a test tube is no longer just a distant dream. Methods DNA oligonucleotide synthesis DNA oligonucleotides were purchased from Integrated DNA Technologies (IDT). The DNA strands in gate, threshold and fuel species were purchased unpurified (standard desalting). The reporter strands with fluorophores and quenchers were purchased purified (HPLC). All strands were purchased at 100 μM in TE buffer pH 8.0 and stored at 4 °C. Annealing protocol and buffer condition Gate complexes were annealed together at 20 μM, with equal stoichiometry of top and bottom strands. Threshold and reporter complexes were annealed together at 20 μM with a 20% excess of top strands. All DNA complexes were annealed in 1 × TE buffer with 12.5 mM Mg 2+ , prepared from 100 × TE pH 8.0 (Fisher BioReagents) and 1 M MgCl 2 (Invitrogen). Annealing was performed in a thermal cycler (Eppendorf), first heating up to 90 °C for 2 min and then slowly cooling down to 20 °C at the rate of 6 s per 0.1 °C. All annealed complexes were stored at 4 °C. Fluorescence spectroscopy Fluorescence kinetics data in Figs 3 , 4 , 5 , 6 and Supplementary Figs 3–6 were collected every 2 min in a monochromator-based plate reader (Synergy H1M, BioTek). Experiments were performed with 100 μl reaction mixture per well, in 96-well microplates (black with clear flat bottom, polystyrene NBS, Corning 3651) at 25 °C. Clear adhesive sealing tapes (Thermo Scientific Nunc 232701) were used to prevent evaporation. The excitation/emission wavelengths were set to 497/527 nm for ATTO 488 and 597/629 nm for ATTO 590. Fluorescence kinetics data in Fig. 7 were collected every 4 min in a spectrofluorimeter (Fluorolog-3, Horiba). Experiments were performed with 500 μl reaction mixture per cuvette, in fluorescence cuvettes (Hellma 115 F-QS) at 25 °C. The excitation/emission wavelengths were set to 502/522 nm for ATTO 488, 602/624 nm for ATTO 590, 560/575 nm for ATTO 550 and 649/662 nm for ATTO 647. Both excitation and emission bandwidths were set to 2 nm and the integration time was 10 s for all experiments. Data analysis A Mathematica Notebook file for data analysis and example data files are available to download at the Seesaw Compiler website: . 
Data availability Key data supporting the findings of this study are available to download at the Seesaw Compiler website and all other data are available from the corresponding author upon reasonable request. Additional information How to cite this article: Thubagere, A. J. et al . Compiler-aided systematic construction of large-scale DNA strand displacement circuits using unpurified components. Nat. Commun. 8, 14373 doi: 10.1038/ncomms14373 (2017). Publisher’s note : Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Electronic circuits are found in almost everything from smartphones to spacecraft and are useful in a variety of computational problems from simple addition to determining the trajectories of interplanetary satellites. At Caltech, a group of researchers led by Assistant Professor of Bioengineering Lulu Qian is working to create circuits using not the usual silicon transistors but strands of DNA. The Qian group has made the technology of DNA circuits accessible to even novice researchers—including undergraduate students—using a software tool they developed called the Seesaw Compiler. Now, they have experimentally demonstrated that the tool can be used to quickly design DNA circuits that can then be built out of cheap "unpurified" DNA strands, following a systematic wet-lab procedure devised by Qian and colleagues. A paper describing the work appears in the February 23 issue of Nature Communications. Although DNA is best known as the molecule that encodes the genetic information of living things, it is also a useful chemical building block. This is because the smaller molecules that make up a strand of DNA, called nucleotides, bind together only with very specific rules—an A nucleotide binds to a T, and a C nucleotide binds to a G. A strand of DNA is a sequence of nucleotides and can become a double strand if it binds with a sequence of complementary nucleotides. DNA circuits are good at collecting information within a biochemical environment, processing the information locally and controlling the behavior of individual molecules. Circuits built out of DNA strands instead of silicon transistors can be used in completely different ways than electronic circuits. "A DNA circuit could add 'smarts' to chemicals, medicines, or materials by making their functions responsive to the changes in their environments," Qian says. "Importantly, these adaptive functions can be programmed by humans." To build a DNA circuit that can, for example, compute the square root of a number between 0 and 16, researchers first have to carefully design a mixture of single and partially double-stranded DNA that can chemically recognize a set of DNA strands whose concentrations represent the value of the original number. Mixing these together triggers a cascade of zipping and unzipping reactions, each reaction releasing a specific DNA strand upon binding. Once the reactions are complete, the identities of the resulting DNA strands reveal the answer to the problem. With the Seesaw Compiler, a researcher could tell a computer the desired function to be calculated and the computer would design the DNA sequences and mixtures needed. However, it was not clear how well these automatically designed DNA sequences and mixtures would work for building DNA circuits with new functions; for example, computing the rules that govern how a cell evolves by sensing neighboring cells. "Constructing a circuit made of DNA has thus far been difficult for those who are not in this research area, because every circuit with a new function requires DNA strands with new sequences and there are no off-the-shelf DNA circuit components that can be purchased," says Chris Thachuk, senior postdoctoral scholar in computing and mathematical sciences and second author on the paper.
"Our circuit-design software is a step toward enabling researchers to just type in what they want to do or compute and having the software figure out all the DNA strands needed to perform the computation, together with simulations to predict the DNA circuit's behavior in a test tube. Even though these DNA strands are still not off-the-shelf products, we have now shown that they do work well for new circuits with user-designed functions." "In the 1950s, only a few research labs that understood the physics of transistors could build early versions of electronic circuits and control their functions," says Qian. "But today many software tools are available that use simple and human-friendly languages to design complex electronic circuits embedded in smart machines. Our software is kind of like that: it translates simple and human-friendly descriptions of computation to the design of complex DNA circuits." The Seesaw Compiler was put to the test in 2015 in a unique course at Caltech, taught by Qian and called "Design and Construction of Programmable Molecular Systems" (BE/CS 196 ab). "How do you evaluate the accessibility of a new technology? You give the technology to someone who is intellectually capable but has minimal prior background," Qian says. "The students in this class were undergrads and first-year graduate students majoring in computer science and bioengineering," says Anupama Thubagere, a graduate student in biology and bioengineering and first author on the paper. "I started working with them as a head teaching assistant and together we soon discovered that using the Seesaw Compiler to design a DNA circuit was easy for everyone." However, building the designed circuit in the wet lab was not so simple. Thus, with continued efforts after the class, the group set out to develop a systematic wet-lab procedure that could guide researchers—even novices like undergraduate students—through the process of building DNA circuits. "Fortunately, we found a general solution to every challenge that we encountered, now making it easy for everyone to build their own DNA circuits," Thubagere says. The group showed that it was possible to use cheap, "unpurified" DNA strands in these circuits using the new process. This was only possible because steps in the systematic wet-lab procedure were designed to compensate for the lower synthesis quality of the DNA strands. "We hope that this work will convince more computer scientists and researchers from other fields to join our community in developing increasingly powerful molecular machines and to explore a much wider range of applications that will eventually lead to the transformation of technology that has been promised by the invention of molecular computers," Qian says. The paper is titled, "Compiler-aided systematic construction of large-scale DNA strand displacement circuits using unpurified components."
10.1038/ncomms14373
Biology
Study proves importance of bird poo in enhancing coral growth
Candida Savage. Seabird nutrients are assimilated by corals and enhance coral growth rates, Scientific Reports (2019). DOI: 10.1038/s41598-019-41030-6 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-019-41030-6
https://phys.org/news/2019-03-importance-bird-poo-coral-growth.html
Abstract Nutrient subsidies across ecotone boundaries can enhance productivity in the recipient ecosystem, especially if the nutrients are transferred from a nutrient-rich to an oligotrophic ecosystem. This study demonstrates that seabird nutrients from islands are assimilated by endosymbionts in corals on fringing reefs and enhance growth of a dominant reef-building species, Acropora formosa. Nitrogen stable isotope ratios (δ15N) of zooxanthellae were enriched in corals near seabird colonies and decreased linearly with distance from land, suggesting that ornithogenic nutrients were assimilated by corals. In a one-year reciprocal transplant experiment, A. formosa fragments grew up to four times faster near the seabird site than conspecifics grown without the influence of seabird nutrients. The corals influenced by elevated ornithogenic nutrients were located within a marine protected area with abundant herbivorous fish populations, which kept nuisance macroalgae to negligible levels despite high nutrient concentrations. In this pristine setting, seabird nutrients provide a beneficial nutrient subsidy that increases growth of the ecologically important branching corals. The findings highlight the importance of catchment-to-reef management, not only for ameliorating negative impacts from land but also for maintaining beneficial nutrient subsidies, in this case seabird guano. Introduction Nutrient subsidies can transcend ecosystem boundaries, where they can enhance productivity 1 and functional diversity 2, alter food webs 3, and increase the stability 4 and persistence of recipient marine communities 5. Allochthonous nutrients can transcend ecotones either passively, such as macroalgal detritus that washes up on coastlines 3,6,7, or via active vectors including seabirds 1,8. The ecological effects of these nutrient subsidies are particularly pronounced when the receiving ecosystem has low production 5,9. A case in point is the Gulf of California islands, where seabirds forage in highly productive marine waters and deposit guano around their roosting sites, enhancing local productivity 1 and influencing community structure in terrestrial desert ecosystems 4,10. Nutrient enrichment from seabird colonies can also increase marine production via sea–land–sea transfer. For example, ornithogenic nutrients increased macroalgal production 11 and altered the benthic community structure of a temperate intertidal rocky reef community 12. In tropical ecosystems, seabird nutrients can enrich nitrogen inputs to soil on islands 8,13 and increase nutrient availability in adjacent pelagic 14 and benthic food webs 15. Seabird-derived nutrients have been traced into coral holobionts 16; however, the ecological effects of these nutrients on reef-building (scleractinian) corals have not been demonstrated previously. This study assessed the influence of seabird nutrient subsidies on coral growth rates using a spatial gradient sampling scheme and a reciprocal transplant experiment. Coral reefs are among the most productive ecosystems yet occur in oligotrophic waters 17. This paradox is due largely to the tight coupling in nutrient cycling between the coral host and endosymbionts (commonly referred to as zooxanthellae), whereby inorganic nutrients excreted by the coral animal are assimilated by the symbiotic dinoflagellates of the family Symbiodiniaceae 18 to support photosynthesis 19. In turn, the zooxanthellae translocate organic compounds to the coral animal to support its metabolic demands 19.
Within the coral holobiont, endosymbionts can acquire inorganic nutrients from their host's waste metabolites or from the surrounding seawater 20. At a community level, the mutualistic association between the branching coral Stylophora pistillata and the coral-obligate damselfish Dascyllus marginatus results in significantly higher growth rates of corals with resident damselfish, owing to nutrient subsidies from the fish waste 21. Thus, external nutrients that elevate local nitrogen conditions in waters surrounding corals can increase zooxanthellae density, enhancing photosynthesis and coral growth rates 22,23. However, there are environmental constraints and energetic costs associated with maintaining the mutualistic association between corals and endosymbionts, with some studies showing that excessive nutrients can act as a stressor and cause a breakdown of the coral–algal symbiosis 24. Elevated nutrient inputs to coral reefs today are typically associated with anthropogenic sources, including human sewage 25,26,27,28 and agricultural fertilizer 29,30, and their effects are often considered detrimental to the coral reef ecosystem 31. By contrast, nutrients from natural sources such as bird guano are principally excreted as organic nitrogen 32 that undergoes speciation into various forms of nitrogen 33, and it remains to be shown whether guano acts as a natural analogue of anthropogenic nutrient inputs. Nutrients generally increase the cell densities of endosymbionts 22; however, the reported biochemical effects of this on corals conflict. Some studies show an increase in photosynthetic performance 34 and calcification 35,36 with increased nutritional supply. Conversely, other studies show a decrease in autotrophy caused by a chemical imbalance in the zooxanthellae 37 and a build-up of reactive oxygen species 38,39, which affect the stress tolerance of corals 40. The relationship between nutrient availability and coral growth and photobiology is context-dependent, with exogenous factors like nutrient source likely a key determinant of the direction of the response at the individual coral level 41. At the community level, excess nutrients can alter coral reproduction 42 and lead to loss of coral diversity and percent cover 43. Excess nutrients can also stimulate macroalgal growth and give algae a competitive advantage over slower-growing reef-building corals; once established, macroalgae can change chemical conditions on the reef 44,45 in ways that maintain the reef in a macroalgal-dominated state 46. However, most studies of nutrient impacts on corals have been conducted on reefs that are already in a degraded state 47 or subject to multiple stressors in addition to excess nutrient availability 48, including habitat transformation 49 and overfishing 50. The reduction in numbers of herbivorous fishes, even at low levels of subsistence fishing 51, together with increased nutrient delivery has been shown to erode the resilience of coral reefs and cause transitions from healthy coral-dominated reefs to degraded algal-dominated systems 52. By contrast, there are few studies of the effects of nutrient subsidies to coral reefs in less-disturbed ecosystems 14,16,53, and no studies that have investigated the effects of seabird nutrients on coral growth rates. This study assessed whether seabird-derived nutrients assimilated by corals enhance coral growth rates.
To investigate the spatial influence of seabird nutrients on one of the dominant reef-building corals in the Pacific, in hospite colonies of Acropora formosa were sampled every 20 m (from 20 m to 200 m) perpendicular to shore from Namenalailai (hereafter Namena), a remote island with abundant nesting seabirds and a large marine protected area. The zooxanthellae were extracted from these coral samples and analyzed for cell density and natural-abundance stable isotope ratios of nitrogen (δ15N). Since ornithogenic nitrogen is enriched in 15N over background levels of nutrients 8,15,54,55,56, δ15N provides a natural tracer that can be used to assess the influence of seabird-derived nutrients in corals 16. We expected a decreasing trend in δ15N values and zooxanthellae densities in corals with increasing distance from the island, consistent with a decreasing influence of seabird nutrient subsidies. To test whether seabird nutrients enhance coral growth rates, we ran a one-year reciprocal transplant experiment with fragments of A. formosa between Namena and Cousteau, the closest practical site with a similar physical environment but without nesting seabirds (Fig. 1). Cousteau is located on the island of Vanua Levu; it is also a marine protected area and had a few colonies of A. formosa at ca. 150 m offshore that were suitable for the transplant experiment. We hypothesized that growth rates of corals near the seabird roosting island would be greater than those of conspecifics from reefs without seabird colonies, owing to elevated nutrient availability from seabird guano. Figure 1 Location of study sites. Left: the Fiji archipelago (inset) and the position of the northern division, Vanua Levu, where the study sites are located. Right: (a) Namena island, with abundant populations of breeding seabirds, and (b) Cousteau on Vanua Levu. The spatial transect sites are shown as circles and the reciprocal transplant sites as stars. Map of Fiji and Vanua Levu created using ArcGIS v.10.2; satellite images were obtained from Google Earth v.7.3.2. Results Nutrient characteristics of the two sites Dissolved inorganic nitrogen (DIN) concentrations were significantly elevated in the waters of the nearshore coral reef at Namena, with DIN concentrations up to 12.7 µM compared to 1.8 µM at Cousteau (Table 1). Concentrations of nitrate (Wilcoxon W = 20, p = 0.025) and ammonia (Wilcoxon W = 21, p = 0.031) were significantly elevated at Namena relative to Cousteau. There was temporal variation in nutrient concentrations, with extremely high nitrate concentrations (up to 11.5 µM) measured in April 2013 at Namena. Phosphate concentrations also tended to be higher in April 2013 at both sites, although this difference was not significant. Phosphate concentrations did not differ significantly (p > 0.05) between the transplant sites. The N:P ratio in seawater was higher at Namena (14–33) than at Cousteau (3–5). Table 1 Nutrient concentrations (mean ± S.E.) in the water column at the seabird-influenced marine protected area (MPA) site, Namena, and another MPA without seabirds, Cousteau. Spatial gradient of endosymbiont parameters at Namena The δ15N values of extracted zooxanthellae decreased significantly with distance from land at Namena (F2,28 = 177.4, p < 0.001; R2 = 0.86) (Fig. 2). The mean δ15N value for endosymbionts decreased from 7.7‰ at 20 m to 3.1‰ at 200 m from the island.
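For readers who want to reproduce this kind of gradient analysis, below is a minimal sketch of a linear regression of endosymbiont δ15N on distance from shore. The ten data points are hypothetical values interpolated between the reported endpoints (7.7‰ at 20 m, 3.1‰ at 200 m); the paper's own analysis used generalized linear models on transect means.

```python
# Linear regression of endosymbiont delta15N on distance from shore.
# The delta15N values are illustrative, not the study's raw data.
import numpy as np
from scipy import stats

distance = np.arange(20, 201, 20)  # m from shore, 20 m intervals as in the study
d15n = np.array([7.7, 7.2, 6.6, 6.1, 5.6, 5.1, 4.5, 4.0, 3.6, 3.1])  # permil, assumed

fit = stats.linregress(distance, d15n)
print(f"slope = {fit.slope:.4f} permil/m, R^2 = {fit.rvalue**2:.2f}, p = {fit.pvalue:.2e}")
```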
Similarly, symbiont density was greater in coral colonies closer to land and decreased significantly with distance from shore at Namena (F2,28 = 6.639, p = 0.016), although the relationship was weak (Fig. 3). Considering all sampled corals growing naturally within 200 m of the seabird roosting site at Namena island, the average density of zooxanthellae in corals was 1.7 × 106 cells cm−2 host tissue (n = 30). Figure 2 Spatial transect. The stable nitrogen isotope values, δ15N, of extracted endosymbionts with distance from shore (in metres) on the leeward side of Namena island, Fiji. Values are mean ± 1 S.E. (n = 9) for the three Acropora formosa colonies sampled at 20 m intervals along the three transect lines perpendicular to the shore. Each colony is a pooled and homogenized sample of 3–5 fragments. The dashed line represents the linear regression, R2 = 0.86, p < 0.001. Figure 3 Spatial transect. Cell density of endosymbionts (×106 cells cm−2 host tissue) with distance from shore (in metres) on the leeward side of Namena island, Fiji. Values are mean ± 1 S.E. (n = 9) for the three Acropora formosa colonies at 20 m intervals along the three transect lines perpendicular to the shore. Each colony is a pooled and homogenized sample of 3–5 fragments. The dashed line represents the linear regression, R2 = 0.22, p = 0.016. Reciprocal transplant experiment Coral growth rates (measured as skeletal linear extension) differed significantly between coral nubbins grown at Namena and Cousteau (F3,68 = 210.6, p < 0.001), with fragments maintained at Namena exhibiting up to four times greater linear extension rates than conspecifics transplanted to Cousteau (Figs 4, 5). Tukey's post hoc tests showed that corals from Namena that were maintained at their natal site (N–N) achieved significantly higher growth rates (mean 15.29 ± 0.35 cm y−1) than the other nubbins. The next highest growth rates (mean 12.79 ± 0.33 cm y−1) were for fragments from Cousteau that were transplanted to Namena (C–N) for one year. By contrast, fragments outplanted at Cousteau that had been collected from Cousteau (C–C: mean 5.08 ± 0.27 cm y−1) or Namena (N–C: mean 3.75 ± 0.20 cm y−1) had significantly lower growth rates. There was no mortality during the experiment. Figure 4 Transplant experiment. Coral growth (linear extension in cm y−1) of Acropora formosa fragments from the one-year reciprocal transplant experiment between Cousteau (C) and Namena (N), with the median and interquartile range shown in box-and-whisker plots. Treatments (left to right): C–C = fragments from Cousteau retained at their natal site; N–C = fragments from Namena transplanted to Cousteau; N–N = fragments from Namena retained at their natal site; C–N = fragments from Cousteau transplanted to Namena. Significantly different treatments according to Tukey's post hoc tests are denoted by letters. Figure 5 Transplant experiment. (a) Individually labelled fragments of Acropora formosa grown at Namena when the nubbins were created in January 2012, and (b) one year later in January 2013. (c) Examples of A. formosa fragments originating from Namena that were transplanted to Cousteau (N–C: three nubbins on left) or retained at Namena (N–N: three fragments on right) after one year. Photographs: C. Savage.
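A minimal sketch of the corresponding group comparison is shown below: a one-way ANOVA across the four transplant treatments, with synthetic nubbin-level values drawn around the reported group means (the study followed the ANOVA with Tukey's post hoc tests). The spreads and random draws are assumptions.

```python
# One-way ANOVA over the four transplant treatments; values are synthetic,
# centred on the reported group means (cm/y) with an assumed spread.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
means = {"C-C": 5.08, "N-C": 3.75, "N-N": 15.29, "C-N": 12.79}  # reported means
groups = {k: rng.normal(m, 1.0, size=18) for k, m in means.items()}  # n = 18 nubbins

f, p = stats.f_oneway(*groups.values())
print(f"F = {f:.1f}, p = {p:.3g}")
```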
The water temperature averaged 28.1 °C at Cousteau (range: 23.3–30.7 °C) and 27.8 °C at Namena (range: 23.2–32.0 °C) during the four months for which data were reliably recorded with loggers. There was no significant difference in average monthly temperature between the transplant sites (Welch two-sample t-test, t1,6 = 0.482, p = 0.647). Incident light tended to be slightly higher at Namena (average PAR: 613 µmol photons m−2 s−1) than at Cousteau (average PAR: 568 µmol photons m−2 s−1), but this difference was not significant (Welch two-sample t-test, t1,6 = −0.220, p = 0.834). Discussion This is the first study to demonstrate a positive effect of seabird nutrient subsidies on corals, with significantly greater growth rates of a dominant branching coral near a seabird island. Elevated nutrients delivered to nearshore coral reefs adjacent to a breeding colony of seabirds provided a bottom-up nutrient subsidy that was assimilated by endosymbionts, as reflected by decreasing δ15N values of zooxanthellae with distance from shore. Acropora formosa colonies growing in proximity to this elevated nutrient source, and fragments transplanted to the area from distant reefs, exhibited growth rates four times greater than conspecifics grown at the same depth on a coral reef without seabird-derived nutrients. Therefore, in contrast to excess anthropogenic nutrients, seabird guano can benefit coral reefs, which should be considered in catchment-to-reef management, particularly given the worldwide threat to seabirds. Seabird nutrients elevate nitrogen availability Nutrients were significantly elevated in seawater bathing the fringing reefs on the leeward side of Namena island, Fiji, where seabirds including 1000–3000 breeding pairs of red-footed boobies (Sula sula) roost year round 57. The gradient of decreasing δ15N values in extracted endosymbionts with distance from shore indicated that the elevated nitrogen source was most likely ornithogenic, since there are no rivers or point sources of nutrients on the island and seabird guano is enriched in 15N relative to background nitrate δ15N values 8,15,54,56,58. Guano δ15N values are >10‰ for seabirds 59, with red-footed booby guano reported as 11‰ 16 and decaying guano on another Fijian seabird island having δ15N values as high as 50‰ 8. Local nitrate enrichment and elevated δ15N values in corals have been linked with nesting seabirds, where ornithogenic nutrients can contribute 15–50% of the nitrogen requirements of the coral Pocillopora damicornis 16. Thus, the findings of this study are consistent with seabird nutrients elevating nitrogen availability on local reefs and being assimilated by the endosymbionts. Previous studies, however, have not assessed whether an ornithogenic nutrient subsidy has a direct effect on coral growth. This study shows that reef-building corals grown near a large seabird colony exhibited growth rates up to four times greater than conspecifics from the same area that were transplanted away from seabird nutrients. Linear extension rates of 15 cm y−1 at Namena are amongst the highest reported in the literature for comparable growth experiments with Acropora fragments 60,61,62.
The light conditions were above saturation levels for corals 63. Since there were no significant differences in light or temperature between the transplant sites, and wave energy was similar (both sites are north-facing and sheltered from the prevailing south-east trade winds), inter-site differences in growth were most likely driven by the different nutrient conditions. Seabird guano elevates dissolved organic nitrogen 32, inorganic nitrogen 15,16 and phosphate concentrations in seawater 15,58. In this study, there was no significant difference in measured phosphate concentrations between Namena and Cousteau, despite the differences in seabird populations. Phosphate fluxes from seabird guano may have been higher; however, if phosphate is readily assimilated by benthic organisms it would not show in water column concentrations. Nevertheless, phosphate concentrations were elevated and not limiting at both sites 64,65, suggesting that the endosymbionts were replete in phosphorus to support coral growth and metabolism 66. Nitrogen concentrations in the water column, in comparison, differed significantly between sites, with ammonia and nitrate significantly elevated at the seabird site (Namena) compared to Cousteau and other coastal sites in Fiji 64. It should be cautioned that nutrients are temporally variable and this study reports on only two sampling occasions; however, nitrate concentrations were significantly elevated at the seabird site on both occasions. Large bird populations on small islands can result in extremely high nitrate concentrations in groundwater, which is advected into adjacent coastal lagoons 33. In this study, despite nitrate concentrations above thresholds considered harmful to corals 64,65, the A. formosa fragments growing near the seabird nesting island remained healthy during the experiment, grew vigorously, and had endosymbiont cell densities considered optimal 67 for branching corals 68,69,70 to maintain photosynthetic performance 71. The findings provide an interesting perspective on the contested issue of whether excess nutrients are harmful or beneficial to coral reefs 31. They suggest that natural sources of nutrient enrichment to the coast, like seabird guano, can have positive effects on acroporid corals, in contrast to anthropogenic nutrient sources 41. Guano nutrient subsidies have increased production of mangroves 72 and seagrass 73, and the current study shows that ornithogenic nutrients create a nutrient-replete environment that can enhance coral production. Seabird guano contains the essential nutrients (nitrogen, phosphorus), trace elements 58 and iron 74 in sufficient amounts that biochemical functions remain stable 37. Changes in nutrient stoichiometry can affect carbon acquisition and nutrient partitioning in the coral holobiont 37,75. The seawater near the seabird colony had an N:P ratio that approximated the Redfield ratio 76, in contrast to the site distant from seabirds. Thus the stoichiometric balance and nutrient source 41 are also important to consider, along with input rates, in determining the effect of nutrients on coral performance and production. An important caveat is that the study reef is located within a no-take marine protected area with abundant fish populations 77,78.
Numerous studies have documented phase shifts in benthic community composition from scleractinian corals to a degraded macroalgal-dominated state 46,79 following nutrient enrichment 26,80, particularly with declines in herbivorous fishes 81,82,83. In this study, the elevated nutrients from the seabirds did not promote nuisance macroalgal blooms 84 despite the highly elevated DIN concentrations, most likely because of the presence of healthy fish populations, which would have maintained critical ecosystem functions like grazing and bioerosion 53 that prevent the establishment of macroalgae. Conservation and management implications Marine conservation tends to focus on connectivity among reefs within a seascape to inform management decisions, including where to locate marine protected areas 85. However, catchment-to-reef connectivity can also be important in marine management and conservation 86,87, not only for taking into account negative consequences from land, for example increased sediment inputs 29, but also for positive gains when coral reefs are adjacent to pristine forested landscapes 14. As shown in this study and other recent papers 14,15,16,53, seabirds can provide important nutrient subsidies to the adjacent coast where seabird roosting sites are adjacent to coral reefs. Given that nearly one-third of seabird species are at risk of extinction globally 88, conservation needs to consider the possible effects of declines in this nutrient subsidy on coral growth around pristine remote atolls and reefs. To this end, Namena may provide an ideal before-and-after study system for investigating the effects of a decline or loss of guano on the adjacent coastal ecosystem, as the island was severely damaged by Cyclone Winston in February 2016, after this study was conducted, and most seabird roosting sites were destroyed (pers. obs.). Apart from the direct effects of storm damage to the fringing coral reefs, investigating the indirect effects of a severe reduction in ornithogenic nutrients would advance our understanding of the role of allochthonous nutrient subsidies in productivity and recovery following disturbance. Methods Study site Namena is a ~0.5 km2 island within the Kubulau District in northern Fiji that provides a model ecosystem for investigating the role of ornithogenic nutrient subsidies in coral growth without the confounding effects of other human stressors. Namena Marine Reserve is the largest (60.6 km2) and oldest (established 1997) no-take marine protected area in Fiji 78, with high coral cover and abundant fish populations, including healthy populations of top predators 77. Namena's marine reserve is strictly no-take, and compliance is self-enforced by the local communities 78. The island has an intact coastal forest with abundant populations of roosting seabirds, including an estimated 1000–3000 breeding pairs of red-footed boobies, Sula sula (population estimate: 1986–2008) 57. The closest practical site without nesting seabirds for the transplant experiment is adjacent to the Cousteau resort on the island of Vanua Levu. While Cousteau had lower live coral cover than Namena, which prevented comparative sampling along a spatial gradient every 20 m from shore, the focal species A. formosa was found ca. 150 m offshore, which enabled fragmentation to create transplant nubbins. Cousteau has been a no-fishing marine protected area since 2000 and was extended in area in 2005.
The physical environment is similar between Cousteau and Namena: the depths at which the transplant corals were collected were comparable, water temperature and wave energy were similar, and both sites are north-facing and thus protected from the prevailing south-east trade winds. Spatial transect sampling At Namena, samples of Acropora formosa colonies were collected in January 2012 for analyses of zooxanthellae density and nitrogen isotope ratios (δ15N). Transect lines were run perpendicular to the shore, with sampling at 20 m intervals from 20 m to 200 m seaward. Fragments (ca. 5 cm) of A. formosa were collected from attached colonies by snorkeling along the transect line, at a depth of approximately 3 m. If A. formosa colonies were not available on the transect line, another colony within a 1.5 m radius of the line at the same distance from shore was sampled. Three transects were run perpendicular to land, approximately 50 m apart, and at each 20 m increment three separate A. formosa colonies were sampled by collecting 3–5 fragments per colony (depending on availability). The fragments collected from a single colony were pooled and homogenized to obtain an averaged δ15N value and zooxanthellae count per colony. The coral samples were immediately frozen and processed individually in the laboratory for endosymbiont density and stable isotope ratios. In total, nine samples were analyzed at each 20 m distance. These samples were collected under an approved permit (Fiji Immigration Research Permit 3273/11). Transplant experiment A reciprocal transplant experiment was conducted between Namena and Cousteau. Coral fragments of A. formosa were created from visually healthy colonies at Namena and Cousteau between 08 and 16 January 2012 using established procedures 89. The initial sizes of the fragments were comparable at the two sites, ranging between 3 cm and 10 cm, with fragment size determined by the size and shape of the colony from which they were collected. The nubbins were attached to individual, labeled (Hallprint®) concrete blocks using underwater epoxy and measured using calipers. They were left in aerated tanks under shade cloth for ca. 2 hours to establish on the bases before being planted out in situ. A total of 36 coral fragments were created at each site, with half (n = 18) retained at the natal site and half (n = 18) transplanted to the other site. Samples were transported in large containers with site seawater, under shade cloth and with battery-operated air bubblers, to minimize stress during the 1-hour boat transfer between sites. At each site, the coral fragments were placed on a customized array at 3 m depth, elevated 50 cm off the seabed. The arrays were located ca. 150 m from land at both sites, on the leeward side of Namena island, Fiji (17°6′26.66″S, 179°6′6.21″E) and at Cousteau (16°48′43.57″S, 179°17′11.59″E). These sites were chosen to be sufficiently close to land to be influenced by land-derived nutrient sources but deep enough to prevent wave damage or interference from snorkelers. The coral fragments were left to grow for 12 months, after which the individual fragments were collected and measured with calipers to quantify growth. Growth was recorded as skeletal linear extension, including growth of the side branches as well as the main axial branch of each fragment 60.
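As an illustration of this growth metric, the sketch below totals linear extension across the axial and side branches of a single nubbin and annualizes it over the deployment time; the caliper readings are hypothetical.

```python
# Skeletal linear extension summed over the main axial branch and side
# branches, expressed per year. Measurements are hypothetical readings (cm).
def linear_extension(initial: list[float], final: list[float], years: float = 1.0) -> float:
    """Total extension (cm/y) across all measured branches of one nubbin."""
    return sum(f - i for i, f in zip(initial, final)) / years

# one Namena-like nubbin: main axis plus two new side branches after one year
print(f"{linear_extension([5.0, 0.0, 0.0], [14.2, 3.5, 2.8]):.1f} cm/y")  # 15.5 cm/y
```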
Samples for water column nutrient concentrations were collected mid-water (~2 m) above the transplant arrays at both transplant sites. The nutrient samples were taken in December 2011, three weeks before the spatial transect sampling at Namena and the initiation of the transplant experiment, and again in April 2013, after the reciprocal transplant experiment. The seawater samples were taken in acid-washed vials, immediately filtered through pre-combusted Whatman 0.45 μm GF/F filters, and stored on ice until frozen (within 2 h) at −20 °C. Samples were analyzed within 2 months of collection for dissolved inorganic ammonia (NH4+), nitrite/nitrate (NO2−/NO3−), and phosphate (PO43−) concentrations on a Lachat QuikChem 8500 series 2 flow injection analysis autoanalyser. HOBO® pendant temperature/light 64k data loggers (Onset) were deployed at the two transplant sites to measure the temperature and light environment at each array. Two loggers were attached on diagonally opposite corners of each array at the height of the coral fragments and set to log at 10-minute intervals. The HOBO light loggers record in lux and were therefore calibrated by simultaneous underwater recording with a cosine-corrected LI-COR® underwater sensor (LI-192 underwater quantum sensor coupled with a LI-250A light meter, LI-COR), and the data are reported as PAR (µmol photons m−2 s−1) using a correction following established methods 90. Laboratory analyses The zooxanthellae were extracted from the coral fragments using a waterpik 91 and 0.2 µm-filtered site seawater. Zooxanthellae were separated from animal tissues using four centrifugation steps (2700 g for 10 min). The pellet containing the zooxanthellae was resuspended in 10 mL of 0.2 µm-filtered sterile seawater, and a known volume was filtered onto pre-combusted GF/F 0.45 µm filters and dried for stable isotope analyses. The filters were analyzed for nitrogen stable isotope ratios at Isotrace, Department of Chemistry, University of Otago, on a Europa Hydra mass spectrometer coupled to a Carlo Erba NC 2500 elemental analyser. The isotope ratios are reported in the delta notation: $$\delta^{15}\mathrm{N} = \left[ \left( R_{\mathrm{sample}} / R_{\mathrm{standard}} \right) - 1 \right] \times 1000$$ where R refers to the ratio 15N:14N and all values are reported in per mil (‰). Raw isotope ratios were normalized by three-point calibration to the international scale using two IAEA (International Atomic Energy Agency) reference materials (USGS-40 and USGS-41) and a laboratory standard (EDTA-OAS, Elemental Microanalysis Ltd, UK). EDTA-OAS has multi-year and multi-laboratory calibration records against IAEA reference materials and is used as a drift-control material by assaying a pair of aliquots after every twelve samples of a batch. Precision for δ15N is ±0.2‰. A second aliquot of the resuspended pellet was used to determine cell density. The cell density of endosymbionts was counted using a Scepter 2.0 handheld automated cell counter (Millipore) with a 40 µm sensor after diluting the extracted zooxanthellae samples 2:1 in phosphate-buffered saline (PBS), with accuracy checked on selected samples using a haemocytometer. The surface area of each coral fragment was measured using the paraffin wax dipping technique 92,93, and the symbiont density was normalized as cells cm−2 host tissue.
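The delta-notation conversion above is simple enough to express directly; the sketch below computes δ15N from a raw 15N:14N ratio against the atmospheric N2 standard. The sample ratio shown is hypothetical.

```python
# Delta-notation calculation for nitrogen stable isotopes.
R_AIR = 0.0036765  # 15N/14N of atmospheric N2, the international reference

def delta15N(r_sample: float, r_standard: float = R_AIR) -> float:
    """delta15N = [(R_sample / R_standard) - 1] * 1000, in permil."""
    return (r_sample / r_standard - 1.0) * 1000.0

print(f"{delta15N(0.0037048):.1f} permil")  # ~7.7 permil, a nearshore-like value
```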
Statistical analyses The isotope (δ15N) and zooxanthellae density (cells cm−2) data for the three replicate colonies along each transect line were averaged and analyzed using generalized linear models (GLMs), with distance from land as a fixed factor and the measured symbiont parameters as continuous response variables. Growth of the coral fragments from the reciprocal transplant experiment was compared using a one-way analysis of variance (ANOVA) and Tukey's post hoc tests, after testing for normality and homoscedasticity of variances. To test for differences in nutrient concentrations between the Namena and Cousteau transplant sites, the nutrient concentrations (ammonia, nitrate, phosphate) were compared using a nonparametric Wilcoxon signed-rank test, since the nutrients were collected at two time points and the data violated the assumptions of normality even after log-transformation. The temperature and light logger data from Namena and Cousteau were combined into monthly measurements. Since the two replicate loggers at each site were not significantly different (p > 0.05), these data were averaged for the Namena and Cousteau sites, respectively. When the data were downloaded, the light readings were found to be unreliable after four months owing to biofouling; hence the data were filtered to the first four months of reliable readings. The monthly average temperature and light conditions at Namena were compared with those at Cousteau using Welch's t-tests, following Shapiro–Wilk tests for normality. Two measures of light conditions were analyzed: the total incident light and the average monthly light conditions at each site. All statistical analyses were conducted using RStudio v3.0.1 94. The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
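The core site comparisons translate directly into scipy equivalents of the R tests named above; the sketch below pairs a Wilcoxon signed-rank test on nutrient concentrations with a Welch two-sample t-test on monthly temperatures. All input values are hypothetical.

```python
# scipy stand-ins for the study's R tests; all numbers are illustrative.
import numpy as np
from scipy import stats

# paired nutrient measurements (uM) at matching stations/times
nitrate_namena   = np.array([11.5, 1.9, 2.4, 9.8, 3.1, 8.7])
nitrate_cousteau = np.array([ 0.9, 0.4, 1.1, 1.3, 0.6, 1.0])
print(stats.wilcoxon(nitrate_namena, nitrate_cousteau))   # signed-rank test

# monthly mean temperatures (degC); equal_var=False gives Welch's t-test
temp_namena   = np.array([27.1, 27.6, 28.2, 28.3])
temp_cousteau = np.array([27.4, 27.9, 28.4, 28.7])
print(stats.ttest_ind(temp_namena, temp_cousteau, equal_var=False))
```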
A University of Otago study has shown the positive impact bird poo, or guano, has on coral growth in tropical seas. Published online in the respected scientific journal Scientific Reports, the study Seabird nutrients are assimilated by corals and enhance coral growth rates demonstrates that seabird nutrients can significantly boost coral growth rates, offering a positive news story in a decade that has documented dramatic declines in reef health and percentage cover. "The findings have important implications for catchment-to-reef connectivity and demonstrate that coral conservation should also consider catchment management in addition to marine protection," says author Dr. Candida Savage, of Otago's Department of Marine Science. The research was conducted in two Fiji marine protected areas; one remote island (Namena) with an intact coastal forest with breeding seabirds, the other (Cousteau) is away from any seabirds and their associated guano. Natural chemical tracers in coral tissues showed that corals growing near the roosting seabirds took up seabird nutrients. A one-year growth experiment demonstrated that corals grew up to four times faster at the Namena reef compared to the Cousteau reef due to the presence of seabirds. "Bird guano is known for its qualities as a fertiliser, however the impact it had on coral growth has been unknown until now. I was astounded at how much of a difference the presence of guano had in promoting coral growth," Dr. Savage says. The research shows that natural sources of nutrients like seabird guano may benefit coral reefs, in contrast to man-made nutrients from land that tend to degrade coral reefs. Comparison of staghorn corals grown for one year without the influence of seabird guano (three corals on left) with corals grown near a seabird colony (three corals on right). Credit: Dr Candida Savage Coral reefs face multiple global and local threats including excess nutrient runoff from land. Over the last decade, the percent of threatened reefs has increased by 30 per cent, with nearly 75 per cent of the world's reefs threatened today. Coral reefs are crucially important for biodiversity and people. Despite covering less than one per cent of the earth's surface, coral reefs are home to one-quarter of all marine fish species and countless invertebrates. Data obtained on the reefresilience website illustrates the importance of coral reefs for humans. At least five hundred million people rely on coral reefs for food, coastal protection, and livelihoods. In developing countries, coral reefs contribute about one-quarter of the total fish catch, providing food to an estimated one billion people in Asia alone. They form natural barriers that protect nearby shorelines from the eroding forces of the sea, thereby protecting coastal dwellings, agricultural land and beaches. Corals growing underwater at a site with roosting seabirds grew up to four times faster than corals grown distant from seabirds. Credit: Dr Candida Savage "Given that nearly one-third of seabird species are at risk of extinction globally and now that we know how beneficial seabird subsidies are for coral growth, we should consider catchment-to-reef management to protect our marine ecosystems. This could be in the form of protection of established seabird nesting grounds or promoting new seabird habitats by enhancing natural vegetation on land alongside protecting marine areas. If the birds are there, the benefits of their droppings will be too," Dr. Savage says.
10.1038/s41598-019-41030-6
Biology
Old methods prove true for studying proteins
Vladlena Kharchenko et al. Dynamic 15N{1H} NOE measurements: a tool for studying protein dynamics, Journal of Biomolecular NMR (2020). DOI: 10.1007/s10858-020-00346-6
http://dx.doi.org/10.1007/s10858-020-00346-6
https://phys.org/news/2020-10-methods-true-proteins.html
Abstract Intramolecular motions in proteins are one of the important factors that determine their biological activity and their interactions with molecules of biological importance. Magnetic relaxation of 15N amide nuclei allows one to monitor motions of the protein backbone over a wide range of time scales. The 15N{1H} nuclear Overhauser effect is essential for the identification of fast backbone motions in proteins; therefore, exact measurements of NOE values and their accuracies are critical for characterizing picosecond-time-scale motions of the protein backbone. Measurement of the dynamic NOE allows for the determination of NOE values and their probable errors as defined by any sound criterion of nonlinear regression. Dynamic NOE measurements can be readily applied to non-deuterated or deuterated proteins in both HSQC- and TROSY-type experiments. A comparison of the dynamic NOE method with the commonly applied steady-state NOE is presented for measurements performed at three magnetic field strengths. It is also shown that an improperly set up NOE measurement cannot be rescued with the correction factors reported in the literature. Introduction Since the first use of 15N magnetic relaxation measurements applied to a protein, staphylococcal nuclease (Kay et al. 1989), this method has become indispensable in the determination of molecular motions in biopolymers (Jarymowycz and Stone 2006; Kempf and Loria 2003; Palmer III 2004; Reddy and Rainey 2010; Stetz et al. 2019). The canonical triad of relaxation parameters—longitudinal (R1) and transverse (R2) relaxation rates accompanied by the 15N{1H} nuclear Overhauser effect (NOE)—has been most often used in studies investigating backbone mobility in proteins. The 15N{1H} NOE is unique among these three relaxation parameters because it is essential for the accurate estimation of the spectral density function at high frequencies (ωH ± ωN) and is crucial for the identification of fast backbone motions (Idiyatullin et al. 2001; Gong and Ishima 2007; Ferrage et al. 2009). The most common method for the determination of the X{1H} NOE is the steady-state approach. It requires measurement of the longitudinal polarization of the spin-X system at thermal equilibrium, S0, and the steady-state longitudinal X polarization under 1H irradiation, Ssat (Noggle and Schirmer 1971). Note that the nuclear Overhauser effect, defined as \(\varepsilon = S_{\mathrm{sat}}/S_0\), should not be confused with the nuclear Overhauser enhancement, \(\eta = (S_{\mathrm{sat}} - S_0)/S_0 = \varepsilon - 1\) (Harris et al. 1997). It has to be pointed out that NOE measurements are very demanding and artifact-prone. One severe obstacle in these experiments is their ca. tenfold lower sensitivity in comparison to R1N and R2N, which is due to the fact that NOE experiments with 1H detection start with the equilibrium 15N magnetization rather than 1H. The steady-state 15N{1H} NOEs (ssNOE) are normally determined as a ratio of cross-peak intensities in two experiments—with and without saturation of HN resonances.
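Because the effect (ε) and the enhancement (η) are easy to conflate, here is a minimal sketch of both conventions applied to a pair of hypothetical cross-peak intensities from the saturated and reference spectra.

```python
# The NOE *effect* versus the NOE *enhancement*, per the definitions above.
def noe_effect(s_sat: float, s_0: float) -> float:
    """epsilon = S_sat / S_0"""
    return s_sat / s_0

def noe_enhancement(s_sat: float, s_0: float) -> float:
    """eta = (S_sat - S_0) / S_0 = epsilon - 1"""
    return s_sat / s_0 - 1.0

s_sat, s_0 = 7.8e5, 9.6e5           # hypothetical cross-peak intensities
print(noe_effect(s_sat, s_0))       # ~0.81, typical of a rigid residue
print(noe_enhancement(s_sat, s_0))  # ~-0.19
```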
This two-spectrum arrangement creates problems with computing a statistically validated assessment of experimental errors. The 15N{1H} NOE pulse sequence also requires very careful design. Properly chosen recycle delays between subsequent scans, and the saturation time of the HN protons, have to take into account the time needed to reach the equilibrium or stationary values of the 15N and HN magnetizations (Harris and Newman 1976; Canet 1976; Renner et al. 2002). Exchange of HN protons with bulk water, combined with the long longitudinal relaxation time of water protons, necessitates a prolonged recycle delay in the spectrum acquired without saturation of HN resonances. Unintentional irradiation of the water resonance suppresses HN and other exchangeable signals owing to saturation transfer, and suppresses many non-exchangeable 1H resonances via direct or indirect NOE with water (Grzesiek and Bax 1993), while interference of the DD/CSA relaxation mechanisms of 15N amide nuclei disturbs the steady-state 15N polarization during 1H irradiation (Ferrage et al. 2009). All of the aforementioned processes depend directly or indirectly on the longitudinal relaxation rates of the amide 1H and 15N nuclei, R1H and R1N, as well as the longitudinal relaxation rate of water protons, R1W, and the exchange rate between water and amide protons, k. In this study, the dynamic NOE experiment (DNOE), a forgotten method of NOE determination in proteins, was experimentally tested, and the results were compared with independently performed steady-state NOE measurements at several magnetic fields for the widely studied, small, globular protein ubiquitin. Additionally, several difficulties inherent in 15N{1H} NOE measurements, and methods for overcoming or minimizing them, are discussed. Experimental Uniformly labeled U-[15N] human ubiquitin was obtained from Cambridge Isotope Laboratories, Inc. as a lyophilized powder and dissolved to 0.8 mM protein concentration in buffer containing 10 mM sodium phosphate at pH 6.6 and 0.01% (m/v) NaN3. DSS-d6 at 0.1% (m/v) in 99.9% D2O was placed in a sealed capillary inserted into the 5 mm NMR tube. Amide resonance assignments of ubiquitin were taken from the BioMagResBank (BMRB), accession code 6457 (Cornilescu et al. 1998). NMR experiments were performed on three Bruker Avance NEO spectrometers operating at 1H frequencies of 700, 800 and 950 MHz, equipped with cryogenic TCI probes. The temperature was controlled before and after each measurement with an ethylene glycol reference sample (Raiford et al. 1979) and was set to 25 °C. The temperature was stable, with a maximum detected deviation of ±0.3 °C. Chemical shifts in the 1H NMR spectra are reported with respect to external DSS-d6, while chemical shifts of the 15N signals were referenced indirectly using a frequency ratio of 0.101329118 (Wishart et al. 1995). The spectral widths were set to 12 ppm and 22 ppm for 1H and 15N, respectively. The numbers of complex data points collected for the 1H and 15N dimensions were 2048 and 200, respectively. In each experiment, 8 scans were accumulated per FID. Double zero filling and a 90°-shifted squared sine-bell filter were applied prior to Fourier transformation. Data were processed using the program NMRPipe (Delaglio et al. 1995) and analyzed with the program SPARKY (Goddard and Kneller).
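The indirect 15N referencing mentioned above is a one-line calculation; the sketch below applies the quoted 15N/1H frequency ratio to the three spectrometer fields used in this work.

```python
# Indirect 15N chemical-shift referencing via the Xi frequency ratio
# quoted above (Wishart et al. 1995).
XI_15N = 0.101329118  # ratio of the 15N zero-ppm frequency to the DSS 1H frequency

def n15_zero_freq(h1_dss_freq_mhz: float) -> float:
    """Absolute frequency (MHz) corresponding to 0 ppm on the 15N scale."""
    return h1_dss_freq_mhz * XI_15N

for f in (700.0, 800.0, 950.0):  # the three spectrometers used in this study
    print(f"{f:.0f} MHz 1H -> 15N zero at {n15_zero_freq(f):.4f} MHz")
```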
Resonance intensities were used to calculate relaxation times and NOE values, obtained from a nonlinear least-squares analysis performed using in-house Fortran routines based on the Newton–Raphson algorithm (Press et al. 2007). The pulse programs used in this work were based on the HSQC-type R1(15N) and 15N{1H} NOE experiments (Lakomek et al. 2012). During 1H saturation with 180° hard pulses spaced 22 ms apart, the carrier frequency was moved from the water frequency to the centre of the amide region (8.5 ppm). Evolution times in the R1(15N) and dynamic NOE experiments were collected in random order. Reproducibility of the experiments was excellent; therefore, the interleaved mode was not used, since it could introduce instabilities of the water magnetization (Renner et al. 2002). The list of delays applied in the experiments used in this work is given in Table S3. Results and discussion Dynamic NOE measurement—introduction It can be concluded from the Solomon equations (Solomon 1955) that in a heteronuclear spin system X–H, the heteronuclear Overhauser effect builds up with the rate R1(X) under the condition of proton saturation, as shown for the 13C–1H spin system (Kuhlmann et al. 1970; Kuhlmann and Grant 1971). As a consequence of this observation, the dynamic NOE was employed for the simultaneous determination of R1(13C) and the 13C{1H} NOE using Eq. (1): $$S(t) = S_{0} [\varepsilon + (1 - \varepsilon )\exp ( - R_{1} t)]$$ (1) Measurements of time-dependent changes of the signal intensities S(t) allow for the determination of ε, R1, and their probable errors, as defined by any standard criterion of nonlinear regression. The DNOE can be especially beneficial for studying nuclei with negative magnetogyric ratios, since in unfavorable circumstances nulling of the resonance in a proton-saturated spectrum can occur. The DNOE has therefore been successfully used in relaxation studies of 29Si (Kimber and Harris 1974; Ejchart et al. 1992) and 15N (Levy et al. 1976) nuclei in organic molecules. The 15N DNOE has also been investigated in a small protein (Zhukov and Ejchart 1999). This approach can be especially profitable in studies of medium- to large-size proteins displaying highly dynamic fragments. Time schedule of NOE measurement Both nitrogen polarizations, Ssat and S0, depend on a number of physical processes in the vicinity of the amide nitrogen nuclei. The dipolar interaction between 15N and 1HN brings about the nuclear Overhauser effect. Additional processes, such as the chemical shift anisotropy relaxation mechanism of 15N and its interference with the 15N/1HN dipolar interaction, and direct NOE and saturation transfer from water to 1HN protons due to chemical exchange, influence both nitrogen polarizations, especially if the pulse sequence itself results in a non-equilibrium state of the water protons. Presaturation of the water resonance, resulting in partial saturation of the water magnetization, attenuates 1HN signal intensities mostly through chemical exchange or through homonuclear NOE with water protons (Grzesiek and Bax 1993; Lakomek et al. 2012). Therefore, the evolution of the spin system towards the Ssat or S0 nitrogen polarization depends on the rates of the processes mentioned above: the longitudinal relaxation rates of 15N, 1HN, and water protons (R1N, R1H, and R1W) and the chemical exchange rate, k, between amide and water protons. These rates strongly determine the time schedule of NOE measurements, which is shown schematically in Fig. 1.
Hence, knowledge of these rates is a prerequisite for the choice of optimal delays. Numerical values of R1H and R1W for the sample studied here are given in Table 1. Nevertheless, one should be aware that R1W depends on temperature, pH, and protein concentration. Residue-specific R1N values for the ubiquitin sample will be discussed further below. Fig. 1 The steady-state NOE measurement is composed of two sequences, NOE and noNOE, with saturated and unperturbed HN protons, respectively. The dynamic NOE measurement comprises several NOE-type sequences with a set of different Dsat values. Table 1 Longitudinal relaxation rates of water protons, R1W, and averaged rates of amide protons, R1H, for the ubiquitin sample at 25 °C. In the noNOE reference measurement, the 15N nuclei have to reach thermal equilibrium by the end of the delay RD1. During the block denoted measurement in Fig. 1, the pulse sequence yielding the 2D 15N/1H spectrum with the desired cross-peak intensities is executed. At the start of acquisition, several coupled relaxation processes take place, resulting in multi-exponential decays of the 15N, 1HN, and water proton polarizations (Ferrage et al. 2008). Keeping in mind that R1W is much smaller than the rates of the other processes, it can reasonably be assumed that the R1W rate mainly defines RD1. Fulfillment of the condition $$\exp ( - RD_{1} \cdot R_{1W} ) < 0.02$$ (2) where the factor 0.02 has been chosen somewhat arbitrarily, should properly determine RD1 values in most cases. Still, one has to be aware that the smallest decay rate resulting from the exact solution of the full relaxation matrix can be smaller than R1W. In the NOE measurement, the buildup of 15N magnetization takes place with the rate R1N. 15N relaxation rates can, however, be broadly dispersed if the mobilities of the N–H vectors in the studied molecule differ significantly. Therefore, to meet the condition $$\exp ( - D_{sat} \cdot R_{1N} ) < 0.02$$ (3) a compromise may be required (cf. Table S1). Steady-state and dynamic NOE measurements differ in the RD2 setting. In the case of the steady-state NOE, the value RD2 = 0 is adequate: even if the nitrogen polarization displays a nonzero value at the beginning of the Dsat period, it will still have enough time to reach the steady-state condition. In the dynamic NOE, however, the nitrogen polarization has to start from a closely controlled thermal equilibrium; therefore, condition (2) with RD1 replaced by RD2 has to be fulfilled. The notation (RD1-RD2-Dsat)/B0 is adopted below to characterize the particular NOE experiments used in this work. Analysis of the systematic errors in NOE values, ε = Ssat/S0, resulting from incorrect delay settings for nuclei with γ < 0 should take into account that these errors can be caused by false S0 values and/or false Ssat values. The apparent S0,app value in a not fully relaxed spectrum is always smaller than the true equilibrium S0 value. On the other hand, the non-equilibrium apparent Ssat,app value is always larger than the equilibrium Ssat value, i.e., more positive for ε > 0 or less negative for ε < 0. The joint effect of erroneous Ssat and S0, however, does not always result in the relation εapp > ε, as might hastily be concluded. An attenuated S0 value in conjunction with a properly determined, negative Ssat results in εapp < ε, and this is experimentally confirmed by the ε values observed for the C-terminal, mobile residue G76.
The values obtained for G76 in the measurements free of systematic errors, (10-10-8)/16.4 T and (10-10-5)/22.3 T, are equal to −0.812 and −0.246, respectively. Here, both the S0 and Ssat values are expected to be error-free. In the measurements (10-10-4)/16.4 T and (10-10-1.3)/22.3 T, with a proper S0 value but Ssat,app > Ssat owing to a too-short Dsat, εapp equals −0.738 and 0.162, respectively, while in (3-0-3)/22.3 T, with too-short RD1 and Dsat delays, S0,app < S0 and εapp = −0.379 (cf. Figure 8). Such misleading behavior could be expected for mobile residues in flexible loops, unstructured termini, or intrinsically disordered proteins. Setup and data processing of DNOE measurement The relation between signal intensities and the saturation delays, Dsat, in a dynamic NOE experiment depends on three parameters: the nuclear Overhauser effect, ε; the nitrogen longitudinal relaxation rate, R1N; and the signal intensity at thermal equilibrium, S0 (Eq. 1). Provided that the longitudinal relaxation rates have been obtained previously in a separate experiment, their values can be entered into Eq. 1, reducing the number of fitted parameters in a computational task further denoted as the sequential one. The influence of the propagation of R1N errors on the ε values is usually negligible; variation of R1N values within the range ±σ (standard deviation) typically results in dε changes smaller than 10−5, except for residues exhibiting ε < 0.4 (Figs. S1, S2). In ubiquitin, such residues are located at the C-terminus. This behavior is attributed to the stronger correlation between the ε and R1N parameters owing to the increased range of signal intensities for smaller ε values (Fig. 2). Another data processing possibility, the simultaneous use of dynamic NOE and relaxation rate data in one computational task, yields results (ε and dε values) practically identical to those obtained in the sequential task. Fig. 2 Experimental data obtained in the DNOE measurement at 16.4 T for residues D58 (brown circles), R74 (orange triangles), and G76 (light green squares). NOE values determined in the sequential task are: ε(D58) = 0.805, ε(R74) = 0.186, and ε(G76) = −0.813. Color-coded lines correspond to the nonlinear least-squares fit of Eq. (1) to the experimental data. Correlations between ε and R1, c(ε, R1), in the simultaneous task are: c(D58) = −0.003, c(R74) = 0.013, and c(G76) = 0.099. A larger range of intensities results in a larger correlation c(ε, R1) between the fitted parameters. The dynamic NOE data can also be used without support from separate R1N data. Such data processing delivers ε values and errors close to those resulting from the sequential or simultaneous approach (Figs. S3, S5). On the other hand, the derived R1 relaxation rates are less accurate, with errors an order of magnitude larger than those obtained in a dedicated R1 experiment (Figs. S4, S6). Therefore, a dynamic NOE measurement cannot be regarded as a complete equivalent of a separate R1 experiment. Numerical data for the three different data processing methods of the dynamic NOE at 22.3 T are given in Table S2, and a comparison of the discussed numerical methods is presented in Table 2 using data acquired for ubiquitin at 16.4 and 22.3 T. The pairwise root-mean-square deviations (RMSDs) for the ε values are extremely small in all cases, while those for the R1 values are larger. Their values, together with the average standard deviations, are given in Table 3.
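The fits of Eq. (1) described above can be reproduced with standard tools; below is a minimal sketch using scipy's curve_fit in place of the in-house Fortran routines, with synthetic intensities generated at the Dsat values listed for the full experiment (Fig. 3). The "true" parameters and noise level are illustrative.

```python
# Fit of Eq. (1), S(t) = S0*[eps + (1 - eps)*exp(-R1*t)], to DNOE intensities.
# Synthetic data stand in for real cross-peak intensities.
import numpy as np
from scipy.optimize import curve_fit

def dnoe(t, s0, eps, r1):
    return s0 * (eps + (1.0 - eps) * np.exp(-r1 * t))

rng = np.random.default_rng(1)
d_sat = np.array([0.0, 0.11, 0.22, 0.35, 0.55, 0.66, 0.79, 1.10, 1.30, 3.00, 4.00])
truth = (1.0, 0.80, 1.9)  # S0 (a.u.), eps, R1 (1/s): illustrative values
data = dnoe(d_sat, *truth) + rng.normal(0, 0.005, d_sat.size)

popt, pcov = curve_fit(dnoe, d_sat, data, p0=(1.0, 0.5, 1.0))
perr = np.sqrt(np.diag(pcov))  # standard errors of S0, eps, R1
print(f"eps = {popt[1]:.3f} +/- {perr[1]:.3f}, R1 = {popt[2]:.2f} +/- {perr[2]:.2f} 1/s")
```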
Recently, an experimentally demanding TROSY-based pulse sequence dedicated to deuterated proteins has been introduced for the simultaneous measurement of R1N relaxation rates and ε values. The accuracy of the proposed technique was verified by comparison with the results of both relaxation parameters measured conventionally (O'Brien and Palmer III 2018). Table 2 Values of standard error ratios averaged over 70 amino acid residues of ubiquitin available from our experiments Table 3 The pairwise RMSDs and the mean values of standard deviations determined for three data reduction methods for dynamic NOE experiments Dynamic NOE measurements, like relaxation rate experiments, require optimization of the number and lengths of the saturation periods, Dsat. One important assumption in the selection of Dsat values is to sample a broad range of intensities I(t) ~ S(t) in a uniform manner. The shortest Dsat, equal to zero, delivers I0 ~ S0. The longest Dsat should be as close to a value fulfilling condition (3) as is practically feasible (cf. Table S1). These assumptions were checked on a DNOE measurement comprising 11 delays. Next, the number of delays was reduced to seven and then to four selected delays, and the results were compared. The apparent NOE values and their standard deviations changed only slightly. Residue-specific differences in ε values between the full experiment and each of the reduced ones were smaller than the corresponding dε values. They are compared in Fig. 3, and the presented data confirm that four correctly chosen Dsat values do not degrade the ε values or their accuracies. This allows us to state that a DNOE measurement requires only an acceptable amount of spectrometer time. Fig. 3 Residue-specific differences, with error bars, between the DNOE measurement at 22.3 T comprising 11 Dsat values and the curtailed DNOE measurements composed of four or seven Dsat values (upper and lower part, respectively). Horizontal dashed lines represent the averages of the Δε values given in the plots. The full set of Dsat values was [0.0, 0.11, 0.22, 0.35, 0.55, 0.66, 0.79, 1.10, 1.30, 3.00, 4.00] s. Four values (0.22, 0.66, 1.10, and 3.00 s) were rejected to obtain the seven-value measurement; further rejection of the 0.11, 0.55, and 1.30 s values resulted in the four-value set Error determination of NOE measurements The NOE errors are as important as the NOE values themselves. They are used to weight the NOE data in relaxation-based calculations of protein backbone dynamics (Palmer et al. 1991; d'Auvergne 2008; Jaremko et al. 2015). Inaccurate NOE errors can result in an erroneous estimation of protein backbone dynamics. In particular, overestimation of the NOE leads to significant errors in the local dynamics parameters, as evidenced by appropriate simulations (Ferrage et al. 2008). Occasionally, average NOE values and standard errors of the mean have been determined from several separate NOE data sets (Stone et al. 1992; Renner et al. 2002). Nonetheless, it has most often been accepted to use signal-to-noise ratios (SNR) in the determination of steady-state NOE errors (Farrow et al. 1994; Tjandra et al. 1995; Fushman 2003). $$d\varepsilon = \left| \varepsilon \right|\sqrt {SNR_{sat}^{ - 2} + SNR_{nonsat}^{ - 2} }$$ (4) Eq. (4) is an approximation of the exact experimental error, since it takes into account only the part of the experimental error that arises from thermal noise.
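A one-function sketch of Eq. (4); the SNR numbers are invented for illustration and show how the |ε| factor drives dε toward zero for near-zero NOEs, which is exactly the underestimation discussed next:

```python
import math

def noe_error(eps, snr_sat, snr_nonsat):
    """SNR-based steady-state NOE error from Eq. (4)."""
    return abs(eps) * math.sqrt(snr_sat**-2 + snr_nonsat**-2)

# Hypothetical SNR values for three residues with decreasing NOEs.
for eps in (0.80, 0.19, -0.02):
    print(f"eps = {eps:+.2f}  ->  d_eps = {noe_error(eps, 120.0, 150.0):.5f}")
```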
Eq. (4) can be safely used if thermal noise dominates the other contributions to the total experimental error. A weak point of Eq. (4) also arises from the fact that amino acid residues located in flexible parts of macromolecules often display NOE values close to zero, which results in an underestimation of dε owing to the factor |ε| in Eq. (4). Justification of an SNR-based approach should comprise two issues: checking the reliability of the SNR determination delivered by commonly used processing tools, and comparing the SNR-determined errors with those obtained from the statistical analysis of a series of independent NOE measurements. To the best of the authors' knowledge, such a study has not yet been undertaken for 15N nuclei in proteins and has been performed only once for 13C nuclei (Bernatowicz et al. 2010). In our study, we found that the SNR values automatically derived during peak intensity determination differed from those obtained semi-manually; the larger part of them were overestimated. Therefore, automatically delivered SNR values accompanying cross-peak intensities cannot be taken for granted. A description of the SNR issue is given in the Supporting Material (section: Determination of signal-to-noise ratio). In order to closely analyze the relevance of SNR-based NOE errors, a series of 10 NOE measurements was performed at 22.3 T using an identical spectrometer setup. A comparison of the standard deviations (σ) calculated for each of the 70 residues of ubiquitin with the corresponding means of the SNR-based NOE errors is presented in Fig. 4. It can be concluded from Fig. 4 that the two presented sets of NOE errors are very similar, and their means are close to one another, with a difference of 8·10−5. Individual ε values for residue A46, which shows the largest NOE data dispersion, are compared with the mean and the standard deviation in Fig. 5. Examination of Figs. 4 and 5 allows us to conclude that properly determined SNR-based NOE errors are reliable and can be safely used in further applications. Fig. 4 Standard deviations (σ) calculated for 70 residues of ubiquitin (brown circles) and their mean (solid horizontal brown line) determined for the series of ten measurements. Means of the ten SNR-based errors calculated for each residue (orange circles) and their mean (solid horizontal orange line) Fig. 5 The NOE values of residue A46 obtained in a series of 10 measurements with the corresponding SNR-based errors (gray circles with SNR-based error bars) and their mean with standard deviation (red circle). Dashed red lines correspond to the mean ± σ Saturation of HN protons Originally, saturation of proton resonances was achieved by a train of 250° pulses at 10 ms intervals (Markley et al. 1971). In protein relaxation studies, however, a train of 120° pulses spaced 20 ms apart was commonly used for this purpose (Kay et al. 1989). In a search for the optimal 1H saturation scheme, different pulse lengths (120°, 180°, 250°) and different pulse spacings (5 ms, 10 ms, 20 ms) were employed (Renner et al. 2002). It was concluded that pulses of approximately 180° at 10 ms intervals performed slightly better than the other settings. An extensive experimental survey of HN proton saturation, accompanied by theoretical calculations based on averaged Liouvillian theory, was carried out on all components of the saturation sequence (Ferrage et al. 2009, 2010).
That survey concluded that the best results are obtained using the symmetric 180° pulse train (τ/2 − 180° − τ/2)n with τ = k/JNH, where n is an integer determining the length of the saturation time (Dsat = n·τ) and k is a small integer, usually k = 2, giving τ of about 22 ms. It was also suggested to move the proton carrier frequency from the water resonance to the center of the amide region and to reduce the power of the 180° pulses to minimize sample heating. Analysis of NOE experiments The NOE experiments performed to analyze the influence of particular sequence parameters on the apparent nuclear Overhauser effect values, εapp, are listed in Table S3. The experiments ssNOE(10-10-8)/16.4, DNOE/16.4, ssNOE(14-0-14)/18.8, ssNOE(13-0-3)/22.3, and DNOE/22.3 can be expected to deliver the most accurate results; they are regarded as reference points for the selected magnetic fields. The importance of using appropriate Dsat values in steady-state NOE measurements is demonstrated by comparing the NOEs from experiments (14-0-4)/18.8 and (14-0-14)/18.8. The first displays a systematic increase of εapp owing to the incomplete buildup of the steady-state 15N polarization during the too-short Dsat. Residue-specific differences between these experiments are shown in Fig. 6. Residues G75 and G76, with negative ε values, display decreased εapp, as discussed earlier (section: Time schedule of NOE measurement). Fig. 6 NOE differences Δε = εapp − ε obtained in measurements performed at 18.8 T with Dsat = 4 s (εapp) and Dsat = 14 s (ε). The average difference, after rejection of G75 and G76 with ε < 0, is equal to 0.022 The factors exp(−Dsat·R1N), calculated using residue-specific R1N data, are presented in Fig. 7 for the Dsat values utilized in the measurements performed at 22.3 T, as listed in Table S3. Dsat = 3 s is sufficiently long for all residues except the last two C-terminal glycines, G75 and G76. In fact, even Dsat = 4 s is not long enough for the observation of unperturbed G76. It is therefore not surprising that Dsat = 1.3 s is much too short, and the εapp values derived from experiment (10-10-1.3)/22.3 are significantly larger than those obtained at the longer period of Dsat = 4 s (Fig. 8), on average by 0.0348. Fig. 7 Factors characterizing the efficiency of the saturation of nitrogen magnetization for different Dsat values, calculated using residue-specific R1N values determined at 22.3 T in a separate measurement. A common-sense but arbitrary limit of 0.02 is marked with a horizontal line Fig. 8 Nuclear Overhauser effect values obtained in steady-state NOE experiments with the saturation period Dsat set to 1.3 s (red squares) or 4 s (blue circles) The effect of a very short RD1 delay can be demonstrated by comparing experiments ssNOE(13-0-3)/22.3, ssNOE(10-10-3)/22.3, ssNOE(6-0-3)/22.3, and ssNOE(3-0-3)/22.3 (Fig. 9). RD1 = 3 s and RD1 = 6 s result in increases of the ε magnitudes relative to RD1 = 13 s of, on average, 0.0544 and 0.0042, respectively. On the other hand, the average difference between the measurements with RD1 = 13 s and RD1 = 10 s is a negligible −0.0007. This result gives evidence that an RD1 delay of 10 s allows the HN protons to reach the equilibrium state in the studied system. Fig. 9 NOE differences Δε = εapp − ε obtained for measurements performed at 22.3 T: ssNOE(13-0-3)/22.3, ssNOE(10-10-3)/22.3 (extracted from DNOE), ssNOE(6-0-3)/22.3, and ssNOE(3-0-3)/22.3.
Δε for the RD1 pair 3 s and 13 s (brown circles), the pair 6 s and 13 s (orange triangles), and the pair 10 s and 13 s (light green squares). Color-coded average differences, after rejection of G76 with ε < 0, are equal to 0.0544, 0.0042, and 0.0007 In conclusion, the comparison of NOE values obtained at different settings of Dsat or RD1 highlights the importance of the correct choice of delays for the determination of accurate ε values. Correction factors As shown above, the slow spin-lattice relaxation of water protons and the chemical exchange of amide protons with water, combined with too-short relaxation delays in steady-state NOE experiments, usually result in substantial systematic NOE errors owing to incomplete relaxation towards the steady-state or equilibrium 15N polarization. Therefore, several correction factors have been introduced to compensate for such errors using the equation $$\varepsilon = \frac{{(1 - X)\varepsilon_{app} }}{{1 - X\varepsilon_{app} }}$$ (5) where ε and εapp are the exact and apparent NOE values, respectively. It has been claimed that the effect of incomplete R1W recovery can be corrected by substituting the factor $$X = \exp ( - RD \cdot R_{1W} )$$ (5A) into Eq. 5 (Skelton et al. 1993). It has also been suggested that the factor $$X = \exp ( - RD \cdot R_{1H} )$$ (5B) allows for the correction of an insufficiently long relaxation delay RD with respect to R1H (Grzesiek and Bax 1993). Another correction, which takes into consideration the inconsistency of both R1N and R1H with the relaxation delays, has also been recommended (Freedberg et al. 2002): $$X = \frac{{R_{1N} }}{{R_{1N} - R_{1H} }}\frac{{\exp ( - RD \cdot R_{1N} ) - \exp ( - RD \cdot R_{1H} )}}{{\exp ( - RD \cdot R_{1N} ) - 1}}$$ (5C) The efficiencies of all three corrections were checked on the NOE measurement with intentionally too-short delays, RD1 = 3 s, RD2 = 0, and Dsat = 3 s, i.e., (3-0-3)/22.3. As shown earlier (Fig. 9), all εapp in the (3-0-3)/22.3 measurement were larger than the corresponding ε values in the correctly performed measurement (13-0-3)/22.3; the mean of the differences was equal to 0.054. None of the above-listed corrections was able to fully compensate for the effect of the wrong adjustment of the RD1 delay. The three corrections allowing for R1W (Eq. 5A), R1H (Eq. 5B), and R1H and R1N (Eq. 5C) resulted in means of absolute differences equal to 0.019, 0.048, and 0.036, respectively (Fig. 10). These corrections therefore compensated for the delay missetting by 67%, 17%, and 38%, respectively. The R1W effect is clearly the most important factor to compensate. Fig. 10 Residue-specific differences between the corrected εapp values and the ε values obtained in the (13-0-3)/22.3 measurement. The εapp values were obtained from the (3-0-3)/22.3 experiment after compensation for R1W (Eq. 5A, brown circles), R1H (Eq. 5B, orange triangles), and R1H and R1N (Eq. 5C, light green squares). Horizontal color-coded lines correspond to the means of the difference magnitudes Compensation for an insufficiently long Dsat period with a properly chosen RD1 is an easier task. Experiment (10-10-1.3)/22.3 was discussed earlier, with its results shown in Fig. 8. Use of another correction, $$\varepsilon = \frac{{\varepsilon_{app} - X}}{1 - X}, \quad \text{where } X = \exp ( - D_{sat} \cdot R_{1N} )$$ (6) results in corrected εapp values that differ from the DNOE experiment by an average of 0.003 (Fig. S7).
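A compact sketch of Eqs. (5), (5A)–(5C), and (6) for reference; the numerical example at the bottom uses hypothetical rates and an invented εapp, not data from this study:

```python
import numpy as np

def x_r1w(rd, r1w):                 # Eq. (5A), Skelton et al. 1993
    return np.exp(-rd * r1w)

def x_r1h(rd, r1h):                 # Eq. (5B), Grzesiek and Bax 1993
    return np.exp(-rd * r1h)

def x_r1n_r1h(rd, r1n, r1h):        # Eq. (5C), Freedberg et al. 2002
    return (r1n / (r1n - r1h)
            * (np.exp(-rd * r1n) - np.exp(-rd * r1h))
            / (np.exp(-rd * r1n) - 1.0))

def correct_rd(eps_app, x):         # Eq. (5): correction for a short RD
    return (1.0 - x) * eps_app / (1.0 - x * eps_app)

def correct_dsat(eps_app, dsat, r1n):   # Eq. (6): correction for a short Dsat
    x = np.exp(-dsat * r1n)
    return (eps_app - x) / (1.0 - x)

# Invented example: RD1 = 3 s with a hypothetical R1W of 0.33 s^-1.
print(correct_rd(0.86, x_r1w(3.0, 0.33)))
```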
Nevertheless, in view of the above results, it is clear that none of the existing correction terms should be used as a substitute for a properly designed experiment. Conclusions In this study, it has been shown that the dynamic NOE measurement is an efficient and accurate method for NOE determination. It is particularly useful in cases where NOE values are close to zero. The method provides a robust and more accurate alternative to the widely used steady-state NOE measurement. A DNOE measurement allows the determination of NOE values and their accuracies with standard nonlinear regression methods. If high-accuracy longitudinal relaxation rates R1 are not of great importance, they can be obtained simultaneously, with reduced accuracy, as a "by-product" of the DNOE data processing, without any significant reduction of the accuracy and precision of the determined NOE values. It has also been shown that the commonly used SNR-based estimates of NOE accuracy accompanying steady-state NOE measurements are reliable, provided that the root-mean-square noise has been determined correctly. Finally, it must be stressed that, in view of the results presented in this work, none of the existing correction terms is able to restore accurate NOE values when measurements are improperly set up and performed.
A fresh look at an old technique in protein biochemistry has shown that it deserves to be reintroduced to the spectroscopy toolkit. For decades, scientists have used nuclear magnetic resonance (NMR) spectroscopy to probe the molecular motions of proteins on various timescales. This technique has revealed aspects of enzyme reactions, protein folding and other biological processes, all on an atomic scale. Typically, spectroscopists will gauge the rotation of NMR-active atoms in the protein backbone with and without proton irradiation to calculate a ratio known as the steady-state nuclear Overhauser effect (NOE); however, it was not always done this way. Before steady-state NOE experiments became the norm in biological investigations, scientists would often take a greater number of measurements over the course of an irradiation experiment. This method, termed "dynamic" NOE, might seem more complicated, but according to Ph.D. student Vladlena Kharchenko, it is no more time-consuming than steady-state NOE, while providing additional information about protein flexibility and capturing minute biological motions in proteins far more accurately. "It works for proteins and makes studying their dynamics even more accurate," says Kharchenko, a member of Łukasz Jaremko's lab at KAUST. "Our message to biological NMR spectroscopists is simple: 'Don't be afraid of dynamic NOE.'" To prove the technique's worth, Kharchenko, Jaremko and their team performed a series of NMR experiments on ubiquitin, a globular protein that regulates a range of processes inside the cell. Working with Mariusz Jaremko, also from KAUST, and collaborators in Poland, the researchers collected both steady-state and dynamic NOE measurements and demonstrated that the dynamic approach is always preferable—except under a few specific conditions, such as when instrument access is limited or when proteins degrade very rapidly. Notably, the steady-state approach proved especially prone to errors in regions of the ubiquitin protein that were flexible and disposed to moving around. The dynamic technique, in comparison, offered no such misleading results. In light of their findings, the KAUST team hopes that other scientists with an interest in atomic-level protein mechanics will now begin to adopt, or at least reconsider, dynamic NMR methods. Kharchenko says that sometimes, "it's worth dusting off forgotten methods and checking if they fit to new emerging questions and systems of research interest."
10.1007/s10858-020-00346-6
Biology
How sessile seahorses speciated and dispersed across the world's oceans in 25 million years
Genome sequences of 21 seahorse species shed light on global dispersal routes and suggest convergent developmental mechanisms of unusual bony spines. Nature Communications. DOI: 10.1038/s41467-021-21379-x Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-021-21379-x
https://phys.org/news/2021-02-sessile-seahorses-speciated-dispersed-world.html
Abstract Seahorses have a circum-global distribution in tropical to temperate coastal waters. Yet, seahorses show many adaptations for a sedentary, cryptic lifestyle: they require specific habitats, such as seagrass, kelp or coral reefs, lack pelvic and caudal fins, and give birth to directly developed offspring without pronounced pelagic larval stage, rendering long-range dispersal by conventional means inefficient. Here we investigate seahorses’ worldwide dispersal and biogeographic patterns based on a de novo genome assembly of Hippocampus erectus as well as 358 re-sequenced genomes from 21 species. Seahorses evolved in the late Oligocene and subsequent circum-global colonization routes are identified and linked to changing dynamics in ocean currents and paleo-temporal seaway openings. Furthermore, the genetic basis of the recurring “bony spines” adaptive phenotype is linked to independent substitutions in a key developmental gene. Analyses thus suggest that rafting via ocean currents compensates for poor dispersal and rapid adaptation facilitates colonizing new habitats. Introduction Explaining mechanisms of marine biodiversification is challenging, owing to persistent paucity of information on patterns of speciation and phylogeography in marine ecosystems 1 , 2 , 3 . Major geological vicariance events, such as the closure of the Panama seaway 4 or the Tethys seaway 5 , 6 , have been suggested to impact patterns of marine biodiversification, particularly for organisms whose dispersal strategies rely on ocean currents transporting pelagic larvae or rafting individuals across large distances 7 . In such lineages, ecomorphological divergence and local adaptation after a colonization event can be slow even in the presence of strong divergent selective pressures 8 . Thus, comprehensive studies addressing spatio-temporal diversification patterns that include dynamics of geophysical processes, as well as knowledge of the genetic bases and developmental mechanisms of key adaptive traits, are required to understand the mechanisms that drive the evolution of marine biodiversity. The radiation of seahorses (Family Syngnathidae ) is a particularly iconic and suitable model system to investigate the effects that tectonic activity and ocean current dynamics can have on the dispersal and diversification of marine taxa due to the seahorses’ dispersal by rafting 7 , 9 , as well as to study the rapid evolution of adaptive phenotypes in new environments. Seahorse genomes evolve under some of the highest mutation rates among teleosts 10 and have the greatest diversification rates within their family (Supplementary Fig. 1 , Figshare: Dataset 1 ). All seahorses are sedentary but exhibit specialized morphological and life-history traits 11 , 12 , 13 , such as a prehensile tail (and the lack of a caudal fin), an elongated snout, lack of pelvic fins, an armor of bony plates instead of scales, and a unique mode of male pregnancy whereby males give birth to developed juveniles 14 , 15 . Species of seahorses differ widely in body size, color patterns and other adaptive traits to their respective environments 11 , such as the presence or absence of bony spines, which are likely an adaption against predators 16 . 
Previous research revealed that the evolutionary origin of seahorses likely lies in the Late Oligocene’s Indo-Pacific 17 , 18 , 19 from where different lineages dispersed around the globe despite the seahorses’ poor endurance swimming abilities and their reliance on rafting as primary long-distance dispersal strategy 9 , 20 . Nonetheless, a comprehensive understanding of the seahorses’ colonization routes is still missing as phylogenetic reconstructions were typically either derived only from relatively few species and/or few genetic markers 18 , 21 , 22 , 23 . Here, we study the diversification patterns of these unique fishes based on the analysis of multiple sequenced seahorse genomes. By conducting comprehensive phylogenetic analyses, we infer their demographic history and clarify the role of seaway closures during their diversification as part of tracing the colonization routes from the origin of their common ancestor to their current distribution. Additionally, we address the adaptive phenotypic evolution of seahorses by studying the development of one of the most eye-catching traits within the genus: the presence or absence of bony spines. Results and discussion Global diversity of seahorses Using PacBio long-read sequencing (~115-fold coverage), Illumina short-read sequencing (~243-fold coverage), and Hi-C technology (~184-fold coverage) we de novo assembled the genome of a male Hippocampus erectus . With a contig N50 of 15.5 Mb, our chromosome-level assembly (total size 420.66 Mb; comprising 22 superscaffolds corresponding to the expected chromosome number) (Supplementary Figs. 2 – 4 , Supplementary Tables 1– 4 , and Supplementary Data 1 ) improved in sequence contiguity over previously available assemblies generated from Illumina short reads alone (contig N50: 14.57 kb) 10 , 24 . We re-sequenced the genomes (~16-fold coverage) of 358 seahorse specimens comprising 21 species reflecting Hippocampus ’ global distribution, with representatives of major seahorse lineages (Fig. 1a , Supplementary Fig. 5a , Supplementary Data 2 ). Fig. 1: Genetic diversity and phylogenetic relationships of 358 seahorse specimens. a Geographic sampling locations for sampled seahorses with patterns of nucleotide diversity ( π ) of the 21 seahorse species across 22 chromosomes. Maps from Wessel et al. (2013) under GNU GPL license 91 . b Neighbor-joining tree constructed with genome-wide SNPs of 358 seahorses. Location pin symbols in ( a ) and branch background in ( b ) correspond to each other. Seahorses illustrations by Geng Qin. Source data are provided as a Source Data file. Full size image Our analysis identified each seahorse species as a monophyletic group in a neighbor-joining tree inferred from 41 million genome-wide single nucleotide polymorphisms (SNPs) (Fig. 1b , Supplementary Tables 5 – 8 ), and they formed distinct clusters in a principal component analysis (Supplementary Fig. 5b ). Genetic diversity ( θπ and θω ) varied substantially among species and chromosomes, as it was, for example, generally higher for seahorses in the North Atlantic Ocean biome than in the South Atlantic Ocean biome (Fig. 1a , Supplementary Figs. 6 , 7 , Figshare: Dataset 2 ). The time-calibrated tree estimated that the common ancestor of all extant seahorses lived ~20–25 Ma (million years ago) (Fig. 2a , Supplementary Figs. 8 , 9 , Figshare: Datasets 3 – 6 ), which coincides with the beginning of a period of explosive diversification in most modern marine fish and coral lineages 25 , 26 . 
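For readers who want to reproduce the per-window diversity estimates conceptually, a toy sketch of the two theta estimators used in this study follows (the actual analysis relies on ANGSD genotype likelihoods, as detailed in the Methods); the 0/1 allele matrix below is invented:

```python
import numpy as np
from itertools import combinations

def theta_w(alleles, win_len):
    """Watterson's estimator per site: S / (a_n * L)."""
    n = alleles.shape[0]                                          # haploid sequences
    s = int(np.sum(alleles.min(axis=0) != alleles.max(axis=0)))   # segregating sites
    a_n = sum(1.0 / i for i in range(1, n))
    return s / (a_n * win_len)

def theta_pi(alleles, win_len):
    """Pairwise estimator per site: mean pairwise differences / L."""
    diffs = [np.sum(a != b) for a, b in combinations(alleles, 2)]
    return float(np.mean(diffs)) / win_len

# Toy data: four haploid sequences, three variable sites, in a 1 kb window.
toy = np.array([[0, 0, 1],
                [0, 1, 1],
                [1, 1, 0],
                [0, 1, 0]])
print(theta_w(toy, 1_000), theta_pi(toy, 1_000))   # ~0.00164, ~0.00167
```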
The Indo-Australian Archipelago was identified as the center of origin of the genus Hippocampus , in line with previous studies 18 , 19 (Fig. 2b , Supplementary Fig. 9 ). Subsequently, seahorses diversified and spread globally, with their colonization routes and dynamics strongly linked to prevailing oceanic currents and tectonic events (see Supplementary Text) 27 . Our species tree based on 2,000 loci suggests that H. abdominalis is the sister lineage to a clade containing all other seahorses, and that the latter are subdivided into two major phylogenetic clades: clade I comprises eight species exclusively inhabiting the Indo-Pacific Ocean, while clade II includes six species inhabiting the Atlantic Ocean, one from the East Pacific Ocean, and five from the Indo-Pacific Ocean (Fig. 2a , Supplementary Fig. 9 ). A more detailed description of clade II exemplifies the seahorses' dependence on ocean currents as a means of long-distance dispersal and showcases how temporal seaways can boost or limit diversification and dispersal. Fig. 2: Colonization and demographic history of seahorses. a Phylogenetic tree and divergence time estimates for 21 seahorse species. The branch line thickness corresponds to the population size estimates ( N e ) and colors indicate different lineages. Symbols I–III indicate calibration points. b – d Predicted colonization routes (colored arrows) of seahorses based on divergence times, distribution, vicariance events, and ocean currents (white arrows). Maps modified from Ron Blakey © 2016 Colorado Plateau Geosystems Inc (License # 60519). b The Indo-Australian Archipelago was the center of origin (red marking) of the genus Hippocampus before seahorses diversified and dispersed globally 18–23 Ma. c Seahorses initially colonized the Atlantic Ocean through the opening Tethyan seaway, which, after its closure (Terminal Event during 7–13 Ma), separated this Tethyan lineage from its Indian Ocean sister lineage. The latter subsequently rapidly diversified (yellow marking) in the Arabian Sea, establishing a second center of seahorse diversification. d A second seahorse colonization event of the Atlantic Ocean occurred from the Indian Ocean about 5 Ma by passing the South African tip, finally arriving in the East Pacific Ocean through the still-open Panama seaway approximately 3.6 Ma. Source data are available at Figshare (Datasets 4–6). Rapid diversification and colonization routes of clade II After separating from clade I by dispersing into the West Indian Ocean around 18.2 Ma, the ancestors of the South Atlantic and North Atlantic lineages diverged from each other approximately 15.2 Ma (Fig. 2a ). The North Atlantic lineage followed north-westward oceanic currents and passed through the Tethys Sea a few million years before the initial closure of the East Tethys Seaway due to tectonic shifts about 14 Ma 6 , 28 . Consistent with this colonization route, a strong genetic bottleneck was detected in the ancestral population of the North Atlantic lineage (supporting the notion that founder dispersal is particularly common in seahorses 21 ); however, a rapid population expansion was detected after the lineage crossed the Atlantic Ocean in the mid Miocene (Fig. 2a, b , Supplementary Fig. 10 ). As previously proposed 22 , the ancestors of H. hippocampus diverged from the North American lineages, likely by back-crossing the Atlantic via the Gulf Stream (a dispersal route still effective today 29 ), and colonized the East Atlantic in the Pliocene.
For many marine animal taxa inhabiting the shallow areas of the Arabian Sea, the closure of the East Tethys Seaway led to increased biodiversity 6 . The same holds for seahorses, for which it created a second center of biodiversity. For instance, about 13 Ma the ancestors of H. kelloggi and H. spinosissimus emerged as a new lineage by dispersing back into the Indo-Australian Archipelago. This event may have been facilitated by a reinforced Equatorial Counter Current in the Indian Ocean after the closure of the Tethys Seaway 30 , and thus further contributed to the high diversity in the original center of seahorse biodiversity (Fig. 2c, d ). The South Atlantic seahorse lineage split off and dispersed from the Arabian Sea southward, along the east coast of the African continent. The closure of the Tethys Seaway may have enhanced the East African Coast Current and the Agulhas Current, which potentially assisted in this southward long-distance migration 30 . This lineage passed the Cape of Good Hope, a potentially severe dispersal bottleneck reflected in the extremely low effective population size of this lineage ~4.8–3.6 Ma, and colonized the southern and western African coastlines ( H. capensis and H. algiricus , respectively). Following this second invasion of the Atlantic in the early Pliocene, ancestors of the South American lineages crossed the Atlantic and colonized the South American coastlines, with H. ingens emerging from an early lineage that colonized the north of South America. In line with previous studies 4 , 21 , we also found that this lineage crossed the Panama Seaway before its final closure 4 , where it thrived, as indicated by a large average effective population size (Fig. 2a, d ). Subsequently, a second lineage successfully crossed the South Atlantic approximately 700,000 years ago and colonized the northern coast of South America and the Caribbean, from which H. reidi evolved. Average effective population sizes of this lineage remained relatively small, possibly because it was not able to spread into the East Pacific owing to the prior closure of the Panama Seaway, and because of a competitive disadvantage, as its habitat likely overlapped with those of other seahorse species, such as H. erectus (Fig. 2d ). Repeated crossings of the South Atlantic via rafting along the Benguela and South Equatorial Currents have been proposed before 21 , 22 . Indeed, ongoing gene flow from the West African H. algiricus into the South American H. reidi population, with much less pronounced gene flow in the opposite direction, supports the notion that rafting along these ocean currents facilitated this colonization route (Fig. 3a ). Fig. 3: Gene flow and fluctuations in the effective population size. a Gene flow detected between species inhabiting the South Atlantic Ocean. Gene flow is shown next to the white lines as migration rates deduced by G-PhoCS. Thickness and direction of the arrows correspond to rates and direction of gene flow, respectively. Maps modified from Ron Blakey © 2016 Colorado Plateau Geosystems Inc (License # 60519). Source Data are provided in Supplementary Table 10 . b Fluctuations in effective population size estimated by PSMC. The x axis represents time in years before present, while the y axis represents the effective population size. The charts are organized mainly according to the geographic distributions of the species. Source data are provided as a Source Data file. c Sea level change during the past 1 million years, in meters 33 .
The yellow line indicates the last global interglacial peak while the cyan shade indicates the last glacial maximum period. Full size image The global diversification of seahorses thus involved long-distance dispersal and has been facilitated by paleo-seaway dynamics and changing ocean currents. Specifically, our analyses finally confirm that Indo-Pacific seahorses colonized the eastern coastline of America via two distinct routes and in two waves, a topic previously under debate 19 , 22 : firstly, by colonizing the still open Tethys seaway and subsequent crossing of the Atlantic Ocean, and later by passing the South African Cape of Good Hope. Interestingly, the second wave occurred only in the early Pliocene, potentially facilitated by a change in the South Atlantic and Caribbean ocean current dynamics driven by the ongoing closure of the Panama Seaway 27 , 31 . These findings contradict a recent study that suggested only one colonization via the South Africa route 22 and thus emphasizes the importance of a wide species representation in biogeography studies. As outlined above, tectonic shifts and subsequent changes in ocean current dynamics likely facilitated some of the major dispersal and diversification events in seahorses, however, more short-term changes in seawater levels can also drastically affect the evolution of marine organisms inhabiting shallow water, for example by changing the amount of suitable habitat in a given area or change its structure 32 . Fluctuations in effective population sizes ( N e s ) were estimated back up to 1 million years ago (Fig. 3b , Figshare: Dataset 7 ). When such fluctuations since the last glacial peak (~120 k years ago to ~10k years ago) were compared to fluctuations in seawater levels, which are primarily driven by variations in global temperature (via glaciations) 33 , the patterns suggest a complex effect of seawater levels on N e (Fig. 3c ). Several seahorses’ effective population sizes appeared to be positively associated with warm climate and thus high seawater levels, as suggested by local maxima in effective population sizes following a warm period ~115 k years ago with a delay of several thousand years. These species include H. hippocampus (the sole European species considered), H. casscsio , H. fuscus (both lineages have restricted distribution ranges in and south of the Red Sea and East African coast), and H. subelongatus (only found at the West Australian coast). Effective population sizes of multiple other species show a more negative association with seawater levels with a local maximum in N e coinciding with a local minimum in sea level. These species include H. ingens (the only species considered distributed along the Pacific side of the American continent), H. spinosissimus , and H. trimaculatus , two species broadly distributed across the Sundaic region. However, several species show no peak in N e sizes likely associated with high or low seawater levels, and other factors might have a stronger influence on population sizes. For instance, species inhabiting the North Atlantic biome ( H. erectus , H. hippocampus & H. zosterae ) show generally larger N e than most other lineages (e.g., those inhabiting the South Atlantic) suggesting that the biome type can affect species N e s . 
Furthermore, some species might be more resilient than others against seawater level fluctuations or glaciation-induced habitat loss, as a result of increased dispersal abilities (e.g., via rafting 7 ) or because regional refugia from glaciations were available 34 . Convergent evolution of adaptive phenotypes During their worldwide diversification, seahorses had to adapt to diverse combinations of abiotic and biotic factors, leading to unique adaptive phenotypes 24 . Adult seahorses have relatively few predators owing to their excellent camouflage and unappetizing bony plates and spines 11 . Spines, which derive from L-type plates covering the surface of seahorses just under the skin, are morphologically similar to the diamond-shaped dermal spines covering the skin surface of pufferfishes, which are extreme scale derivatives 35 . Vertebrates possess a huge diversity of skin-derived structures, including teleost fish scales, reptilian scales, avian feathers, and mammalian hair 36 . Although these skin structures are not structurally homologous, they appear to be controlled by genetic mechanisms that are highly conserved between the different vertebrate clades 37 , 38 , 39 . Previous studies have shown that the Hh, Fgf, Bmp, Wnt/β-catenin, and Eda pathways are involved in teleost scale development 40 , 41 , 42 , 43 , 44 , 45 . It is likely that teleost skin structures (even when strongly modified) share common elements of these core signaling pathways known to underpin skin structure development throughout diverse vertebrate groups. Seahorses have also evolved variations in the degree of body coverage by spines, which may enable them to adapt to diverse ecological niches. Interestingly, the species exhibiting bony spines ( H. spinosissimus , H. jayakari , H. histrix , and H. barbouri ) were found not to be closely related in our species tree. This confirms previous findings 18 and suggests that lineages exposed to similar environmental pressures, such as specific predator types, evolved similar phenotypes independently (Fig. 4 and Supplementary Fig. 11 ). Spiny seahorses inhabiting the north and west Indian Ocean split from their sister lineages 8.7 and 7.8 Ma, respectively, while spiny seahorses inhabiting the Pacific Ocean diverged from their sister lineages 14.7 and 6.8 Ma (Supplementary Fig. 11 ). Fig. 4: The evolution of spines. a Left, species tree displaying the independent evolution of spines in seahorses. The branch length indicates the number of substitutions per site. The four spiny seahorse species are highlighted in blue. Thicker branches correspond to higher rates of nonsynonymous-to-synonymous substitutions (dN/dS) for the bmp3 gene. Canonical and generalized McDonald–Kreitman tests (MKT) for the bmp3 gene were performed for three pairwise sister species with divergent spiny and non-spiny features, highlighted by background colors, with significance levels indicated by p values in blue and red font, respectively. Right, comparison of amino acid substitutions in the Bmp3 protein; polymorphic and fixed substitutions in spiny seahorses are indicated with red and blue circles, respectively. b Distribution of dN/dS values of bmp3 in spiny seahorses compared to non-spiny species. c Independent evolution in the phylogenetic tree reconstructed for the protein encoded by bmp3 . Seahorses illustrations by Geng Qin. d Whole-mount in situ hybridization of bmp3 in Hippocampus erectus . In situ photos of seahorses by Ralf F. Schneider. Source data are provided as a Source Data file.
To investigate the molecular basis of this repeatedly evolved adaptive phenotype, we performed a positive selection analysis to test whether accelerated nonsynonymous-to-synonymous mutation rate ratios (dN/dS) can be detected on the branches of spiny seahorses compared to non-spiny lineages. Using the codeml program in PAML, we identified 37 genes putatively under positive selection, with signals of accelerated dN/dS in spiny seahorses ( p < 0.001, Supplementary Data 3 , Figshare: Dataset 8 ). Protein trees obtained from the amino acid sequences of all 37 genes showed that the four spiny seahorses are not closely related to each other (Fig. 4 , Figshare: Dataset 9 ), indicating that the spiny phenotype likely evolved independently. Specifically, the four spiny seahorse lineages exhibit independent amino acid changes in the bone morphogenetic protein 3 ( bmp3 ) gene (Fig. 4a ), and canonical and generalized McDonald–Kreitman tests (MKT) showed that bmp3 evolved under positive selection (neutrality index < 1, chi-square test p < 0.05) (Fig. 4a, b , Supplementary Data 4 ). Spines emerge in the embryos of many syngnathid species (including H. erectus ) and are secondarily lost in some species during maturation. Although the spiny phenotype likely has a polygenic basis, whole-mount in situ hybridizations demonstrate bmp3 expression during the early developmental stages of seahorse spines in H. erectus , a species whose adults do not have well-developed spines (Fig. 4d , Supplementary Fig. 12a ). Bmp3 has been shown to negatively regulate osteoblast differentiation (and thus bone mass) in mammals 46 , 47 , suggesting that the sites in this gene that diverge between spiny and non-spiny seahorses may affect its regulatory interaction with downstream genes and thus contribute to spine outgrowth in those species with derived peptide sequences. Moreover, a knockout experiment using CRISPR/Cas9 in zebrafish showed that mutants have a series of significant scale defects, such as reductions in scale numbers, rearrangements, and irregular shapes, confirming that bmp3 plays a role in the formation of dermal bones in teleosts, and thus likely also of spines (Supplementary Fig. 12b, c ). The independent evolution of complex adaptive phenotypes, such as the spine phenotype, suggests that seahorses have a generally high evolvability, in concordance with the high rates of nucleotide evolution already reported 10 and the high diversification rates of Hippocampus reported here (Supplementary Fig. 1 , Figshare: Dataset 1 ). Thus, the ability to rapidly adapt to new environments and respond to changed selection regimes may, in addition to their unorthodox means of dispersal by rafting along oceanic currents, account for some of the evolutionary success of seahorses as they diversified globally. In conclusion, we report that seahorses dispersed over surprisingly long distances and that their diversification was assisted by changing ocean currents and tectonic events. These include two independent invasions of the Atlantic Ocean from the West Indian Ocean, one facilitated by the last opening of the East Tethys Seaway and the other by passage around the South African Cape of Good Hope, and, finally, the colonization of the East Pacific Ocean through the Panama seaway.
Convergent evolution of adaptive traits, such as in the case of repeatedly evolved protective dermal spines suggests that developmental-genetic pathways were recruited several times independently and presumably in response to predation pressure. Methods Diversification rate estimation in the Syngnathidae DNA sequences of 138 species of the Syngnathidae family and one outgroup were obtained from previous studies 48 , 49 . After sequence alignment using Clustal Omega (v1.2.4) 50 , a concatenated phylogenetic tree was obtained with RAxML (v8) using a best-scoring maximum likelihood tree search method (option -a) using a GTRGAMMA model and including 1,000 bootstrap replicates 51 . Relative divergence was estimated with the wLogDate python program 52 . Diversification rates (i.e., speciation minus extinction) were estimated using BAMM 2.5 48 . We accounted for non-random incomplete taxon sampling by including the proportion of missing taxa per genus (sample probabilities in Supplementary Data 5 ) as well as the overall sampled genera (=0.84). Priors were generated using setBAMMpriors in BAMMtools 48 . Analyses were run for 5 × 10 6 generations, sampling every 1000 generations and with a 25% burn-in. DNA sequences and the estimated phylogenetic tree are available at Figshare (Dataset 1). Long-read sequencing and assembly of the Hippocampus erectus genome A mature, male H. erectus bred in the aquatic farm in Fujian province, China, was used for the de novo genome assembly. Genomic DNA was extracted from tail muscles using a standard phenol/chloroform extraction protocol. Single-Molecule, Real-Time (SMRT) sequencing was performed using a total of 5 μg of genomic DNA to generate a 20 kb library according to the manufacturer’s instructions (Pacific Biosciences, USA). Subreads were obtained after size selection on a BluePippin system (Sage Science, USA). SMRT genome sequencing was performed on a PacBio Sequel platform (Pacific Biosciences, USA) to an approximate coverage of 113-fold. Reads with the quality lower than 0.75 and length shorter than 500 bp were excluded and 6.01 M subreads comprising a total of 47.88 Gb were retained for the assembly (longest subread = 71.17 kb, average length = 7.97 kb). The draft genome was assembled using WTDBG ( ). Sequence contigs were then error-corrected using Pilon 53 . Evaluation of the integrity of assembled sequences, genome size estimation, transposable element predictions and genome annotation are described in the Supplementary Information (Supplementary Methods, Supplementary Tables 2 – 4 , Supplementary Data 1 ). High-throughput chromosome conformation capture (Hi-C) based genome scaffolding An adult farmed male H. erectus was used for the Hi-C analysis. The library was prepared following a standard in situ Hi-C protocol for blood samples 54 , using DpnII (NEB, Ipswich, USA) as the restriction enzyme. A standard circularization step was carried out, followed by DNA nanoballs (DNB) preparation according to the standard protocol of the BGISEQ-500 sequencing platform 55 . The library was then sequenced with a PE100 strategy using the BGISEQ-500 platform. Quality control and library evaluation is described in the Supplementary Methods. For Hi-C alignment and chromosome orientation, we first constructed an interaction matrix based on the valid reads. Then, the ICE software was used to correct for any preference of the enzyme-cut loci due to an uneven distribution in GC content 56 . 
The retrieved valid pairs (319,356,098) were then used to orient and anchor the PacBio contigs into superscaffolds (chromosomes) by applying the 3D-DNA pipeline with the key parameters '-m haploid -s 4 -c 22' 57 . The contact maps were subsequently generated with the Juicer pipeline 58 , and the boundaries of each chromosome were manually rectified by visualizing the inter.hic file in Juicebox 59 , combining linkage information from the agp file. Re-sequencing sample preparation, mapping, and variant calling We sampled a total of 358 seahorse specimens from 21 species representing the major lineages of the genus Hippocampus (Fig. 1a , Supplementary Data 2 ), including 13 to 22 individuals per species, except for H. casscsio , H. capensis , and H. camelopardalis , represented by 8, 7, and 2 individuals, respectively. The classification of each specimen was based on morphological and genetic evidence 16 . Genomic DNA was extracted from tail muscles using a standard phenol/chloroform extraction method and used to construct sequencing libraries with insert sizes of approximately 350 bp. Paired-end libraries were sequenced on an Illumina HiSeq 4000 platform. One random sample for each species was sequenced at ~20-fold coverage, and the rest were sequenced at ~10-fold coverage. After the removal of adapters and low-quality reads (Supplementary Methods), clean reads for each individual were mapped to both the PacBio genome sequence of H. erectus and the Illumina genome sequence of H. comes using BWA-MEM (v0.7.17) with default parameters 60 . We calculated mapping rates, depth, and genome coverage using SAMtools (v1.6) after sorting and removal of duplicates 61 . The assembled Hippocampus erectus PacBio genome was then used as the reference genome. Assigning individuals to the 21 species, we then performed variant calling for all 358 individuals using FreeBayes v9.9.2 62 . Mapping and base quality filters were used as default in FreeBayes (--standard-filters flag). Details are given in the Supplementary Methods. The filtered dataset was then annotated against the H. erectus genome using the package ANNOVAR 63 . Analysis of genetic diversity and divergence Inter-species genomic divergence was calculated for each pair of the 21 seahorse species, using the specimen with the highest sequencing fold coverage per species. We also calculated pairwise genetic distances among all 358 specimens using PLINK (v1.9) with the main parameter '--distance 1-ibs flat-missing' 64 . A neighbor-joining (NJ) tree was then constructed using MEGA7 65 . Principal component analyses (PCA) were performed using the smartPCA program within EIGENSOFT (v6.1.4) 66 . We furthermore analyzed intra-specific nucleotide diversity using ANGSD (v0.924) 67 with a sliding-window approach, as described in the Supplementary Methods. Both the Watterson ( θw ) 68 and pairwise ( θπ ) 69 estimators of theta were used for the nucleotide diversity analysis (Figshare: Dataset 2 ). The R packages 'vioplot' 70 and 'circlize' 71 were employed to explore nucleotide diversity among the different species and chromosomes. Global colonization patterns For our phylogenetic analyses, gene families for Syngnathus scovelli , H. erectus , and H. comes were first identified using Treefam 72 . After filtering out low-quality genes with a premature termination codon or with a coding region whose length was not a multiple of three, gene family analyses identified 5,475 single-copy orthologs 10 . Pair-wise alignments for H. erectus and S.
scovelli were conducted using prank v.140603 73 , and CDS sequences for 2,000 orthologs (randomly selected from those mentioned above) were then extracted for each specimen based on the SNP dataset (Figshare: Dataset 3 ). A coalescent-based phylogenetic tree was constructed using ASTRAL-III v5.6.1 74 , 75 , with a total of 2,000 independent gene trees and including one to five specimens for each species (103 specimens in total). The selected loci have an average length of 1,548 bp (± 1,325), an average proportion of segregating sites of 18% (± 4%), and average missing data of 1%. Gene trees were generated using RAxML (v8) with the rapid bootstrap analysis and search for the best-scoring maximum likelihood tree (option -a) under a GTR + G substitution model, including 100 bootstrap replicates 51 . The DNA matrices, gene trees, and ASTRAL inference are available at Figshare (Dataset 4). To obtain divergence time estimates for the nodes in the Hippocampus species tree, 100 loci were randomly subsampled for the same one to five individuals per species (from the above list; 103 individuals in total), using the package starBEAST2 implemented in BEAST v2.4 76 . The loci selected for this analysis had an average length of 1,579 bp (± 1,060), an average proportion of segregating sites of 18% (± 4%), and 1% missing data. For calibration points, we used data from paleontological work on Hippocampus 77 and on the related groups of pygmy pipehorses and pipehorses 77 , 78 , 79 . Using a lognormal distribution as hyperprior, we first calibrated the origin of the genus Hippocampus to the youngest possible age of 11.6 Ma, for which Hippocampus fossils are recorded and the existence of pipehorses and pygmy pipehorses has been shown 77 , 78 , 79 (Supplementary Table 9 ). This prior thus assumes that the genus Hippocampus originated before the occurrence of its oldest known fossil ( H. sarmaticus ) 77 , and we allowed a wide 95% HPD interval to accommodate uncertainty (95% HPD: 14.4–31.8). Second, we incorporated the information of the H. sarmaticus fossil from the Miocene as an ancestor of H. trimaculatus , using a lognormal distribution with a mean of 11.8 Ma to place the median close to 11.6 Ma; the standard deviation was set to model uncertainty, covering the upper bound of the Middle Miocene to the Late Miocene (95% HPD: 8.32–16.1 Ma) 77 . Finally, following Teske and Beheregaray 17 , we set the divergence between H. reidi and H. ingens to a minimum of 2.8 Ma, corresponding to the last connection between the Caribbean and the Pacific Ocean 80 . Although it has been argued that Colombian sediments support the existence of temporal Miocene closures of the Panama seaway 80 , 81 , for this study we used a conservative prior by setting the minimal possible divergence time between these lineages to 2.8 Ma while allowing the hyperprior to cover older dates (95% HPD: 3.07–4.64 Ma), given that O'Dea et al. suggested that a connection between the Atlantic and Pacific Oceans allowing gene flow likely existed until 3.2 Ma (gradually reduced over time) 80 . All remaining settings were left at their defaults, including unlinked strict clocks and unlinked JC69 substitution models among loci. We fixed the ASTRAL tree topology and ran two independent analyses for 160 × 10^8 steps of the MCMC chain, sampling every 80,000 generations. Convergence was diagnosed using Tracer v1.7 82 . The two independent runs were combined using LogCombiner (included in the BEAST v2.4 package) with a 10% burn-in.
The maximum credibility tree was obtained using TreeAnnotator (also included in the BEAST v2.4 package). The DNA matrices and the BEAST xml input file and outputs are available at Figshare (Dataset 5). The topology and branch lengths (divergence times) of the species tree were used to reconstruct the geographic diversification under two different models of diversification in space: diffusion 83 , 84 and heterogeneous landscape 85 . Both models were run in BEAST v2.4, using a lognormal clock and tip coordinates matching the sampling points and current distributions of the species. For the heterogeneous model, we included a deformation of the continental areas by increasing the friction in an external kml file, to decrease the probability of migration through continents and nearby seaways. We ran different values of friction and deformation, including deformation = 10, 20, 50, and 100, and valued each polygon at 2 (higher deformation, higher friction). Owing to the high similarity of the results, we present only the results with deformation = 20 and value = 2 (Supplementary Fig. 9a ). Convergence was diagnosed using Tracer, and TreeAnnotator was used to export the final tree with a 10% burn-in. Finally, we used SPREAD (v1.0.6) 83 to generate a kml file and Google Earth Pro to plot and animate the diversification of the genus Hippocampus in space and time. The BEAST xml input file and outputs are available at Figshare (Dataset 6). Demographic inference with G-PhoCS A total of 102 representative specimens (2–5 specimens per species) were used to infer the demographic history of seahorses. Neutral loci were used to run the demographic analysis 86 . The filtering strategy is summarized in the Supplementary Methods. 52.2% of the genome remained after filtering, from which we selected 6,102 'neutral loci' by identifying contiguous intervals of 1 kb that passed the filters. We used the default settings chosen by Gronau et al. 86 : a Gamma distribution (α = 1.0, β = 10,000) for the mutation-scaled population sizes (θ) and divergence times (τ), and a Gamma (α = 0.002, β = 0.00001) prior for the mutation-scaled migration rates ( m ). The Markov chains exploring the space of parameter values were run for 100,000 burn-in iterations with an additional 200,000 iterations. The mean sampled value and the 95% Bayesian credible interval of each parameter were calculated with Tracer v1.7.1 82 . We assumed an average mutation rate ( μ ) of 4.33 × 10 −10 per nucleotide per generation 10 and an average generation time of one year for the Hippocampus species. The population size estimates ( Ne ) were obtained from the mutation-scaled samples ( θ ) based on the formula Ne = θ / 4 μ (a minimal code sketch of this conversion is given below). Gene flow was measured by the total migration rate, which is the per-generation rate times the number of generations in which migration was allowed (Fig. 3a , Supplementary Table 10 ). Inference of demographic history from PSMC analysis Pairwise sequentially Markovian coalescent (PSMC) analyses 87 were performed for one individual per species (the one with the highest genome coverage) for interspecific comparisons. Genotype information of the selected individual was retrieved from the alignment BAM files using SAMtools 61 . Variants with a sequencing depth of less than a third of the average depth or greater than 2.5 times the average were removed. The program fq2psmcfa was used to convert the diploid consensus sequence into a FASTA-like format in which the characters indicate heterozygous positions in consecutive bins of 100 bp.
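The Ne = θ/4μ conversion referenced in the G-PhoCS section above is a one-liner; a minimal sketch with a hypothetical θ value and the per-generation rate assumed in the text:

```python
MU = 4.33e-10                     # mutations per nucleotide per generation

def effective_size(theta, mu=MU):
    """Convert a mutation-scaled population size to Ne = theta / (4 mu)."""
    return theta / (4.0 * mu)

print(f"Ne ~ {effective_size(2e-3):.2e}")   # hypothetical theta = 0.002 -> ~1.15e6
```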
The program psmc was then used to infer the population size history 87 , where the parameters were set as -N 30 -t 15 -r 5 -p 4 + 25*2 + 4 + 6. We assumed a generation time of 1 year and a mutation rate ( μ ) of 4.33×10 −10 per nucleotide per generation 10 . The genetic basis for the spine trait Four seahorse species used in this study, including H. spinosissimus , H. jayakari , H. histrix , and H. barbouri , typically show well developed spines 16 (Fig. 3a , Supplementary Fig. 11 ). To detect positively selected genes (PSGs) potentially related to bony spines, we reconstructed gene sequences for 20 seahorse species (excluding H. camelopardalis with extremely low sequencing depth) using both SNPs and invariant sites ( H. erectus genome as reference). The aligned codon sequences for each gene were further analyzed using codeml program in PAML 88 to calculate the d N /d S and we detected positive selection on particular branches considering the phylogenetic relationships among these 20 species (obtained using ASTRAL; described in Phylogenetic analysis Section). The ‘one-ratio’ and ‘two-ratio’ codon substitution models were considered. ‘One-ratio’ model assumes the same d N /d S across all the branches in the phylogeny of species, which was termed as the ‘null hypothesis’. The ‘Two-ratio’ model presumes diverged d N /d S for the branches of spiny and non-spiny lineages, as ‘alternative hypothesis’. Likelihood ratio tests were conducted to compare the above-mentioned models by calculating the corresponding likelihoods, χ 2 critical values, and p values for each gene. We adopted a relatively strict threshold of 0.001 for the original p values to initially obtain a set of 37 putative genes under positive selection with significantly accelerated d N /d S on the branches of spiny seahorse lineages (Supplementary Data 3 ). To further characterize the functional genes potentially relevant to spine development for the 37 candidate genes, we performed canonical and generalized MKT to detect the signature of natural selection based on population genomic sequences. For the canonical MKT, the number of nonsynonymous (d N ) and synonymous (d S ) variants between three pairwise sister species with divergent spiny and non-spiny features, containing H. spinosissimus and H. kelloggi , H. jayakari and H. mohnikei , and H. barbouri and H. comes , and those nonsynonymous (Pn) and synonymous (Ps) variants within species were estimated, where H. kuda and H. kuda & H. histrix were considered as outgroups, respectively. According to these tests, the neutrality index (NI = (Pn/Ps)/(d N /d S )) were calculated, and a Chi-square test was implemented. NI < 1 indicated high divergence between species due to positive selection. We performed generalized MKTs, where, d N and d S , were estimated as the derived nonsynonymous and synonymous variations for one of the sister species with divergent spine status contrasted with the ancestral and sister species, which were then compared to the Pn and Ps, putatively neutral, in this lineage. We also implemented the ‘Free-ratio model’ to estimate the variable d N /d S ratio on each phylogenetic branch based on the aligned codon sequences for each of the 37 genes through the maximum likelihood method using CODEML in PAML 88 . Distribution of d N /d S values of the 37 putative PSGs in 20 seahorse species are available at Figshare (Dataset 8). 
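Two statistics from the selection analysis above, condensed into code: the likelihood-ratio test between the nested codeml models and the MKT neutrality index. All numbers below are invented placeholders, not values from the study:

```python
from scipy.stats import chi2

def lrt_pvalue(lnl_null, lnl_alt, df=1):
    """LRT between nested codeml models ('one-ratio' vs 'two-ratio')."""
    return chi2.sf(2.0 * (lnl_alt - lnl_null), df)

def neutrality_index(pn, ps, dn, ds):
    """MKT neutrality index NI = (Pn/Ps)/(dN/dS); NI < 1 suggests positive selection."""
    return (pn / ps) / (dn / ds)

# Hypothetical log-likelihoods and variant counts.
print(lrt_pvalue(-10234.7, -10227.9))               # p ~ 2e-4 for 2*dlnL = 13.6
print(neutrality_index(pn=4, ps=12, dn=9, ds=10))   # NI ~ 0.37
```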
By integrating the results from the abovementioned analyses, the genes simultaneously showing significance in PAML and MKT, and consistently presenting accelerated dN/dS under the ‘free-ratio’ model on the branches of spiny seahorse lineages in comparison with those of non-spiny lineages, especially with those of sister non-spiny lineages, were considered as confident candidates for further experimental confirmation. Additionally, we reconstructed the CDS sequences of the 37 PSGs for 21 seahorse species (one specimen with the highest sequence fold coverage for each species) with the filtered SNP dataset and translated them into protein sequences using in-house scripts. We estimated the protein trees with RAxML (v8) 51 using the rapid bootstrap analysis and search for the best-scoring maximum likelihood tree (option a) under a PROTGAMMAGTR substitution model and including 100 bootstrap replicates. The protein trees of these 37 PSGs from 21 seahorse species are deposited at Figshare (Dataset 9). Multiple sequence alignment analysis was then performed for bmp3 based on the generated protein sequences. Only private amino acid substitutions that were polymorphic or fixed in spiny seahorses were retrieved. Private, polymorphic substitutions refer to amino acid substitutions that were segregating exclusively in one or more of the four spiny seahorses, while private, fixed substitutions refer to amino acid substitutions that were fixed exclusively in one or more of the four spiny seahorses. Whole-mount in situ hybridization of bmp3 was performed with embryos of the lined seahorse H. erectus at different developmental stages, including approximately four, three, two, and one day prior to birth (for the latter, three independent replicates were performed with coinciding expression patterns). Embryos were dissected in RNase-free 1X phosphate-buffered saline and fixed in 4% paraformaldehyde (PFA) at 4 °C overnight. For bmp3 , a specific antisense RNA probe was synthesized 87 : digoxigenin-labeled UTPs (Roche, item-nr. 11277073910) and SP6 RNA Polymerase (Roche, item-nr. RPOLSP6-RO) were used to synthesize antisense RNA probes from plasmids in which a bmp3 PCR fragment was cloned behind a SP6 RNA Polymerase promoter (Supplementary Table 11 ). Hybridization procedures mostly followed previously described protocols 89 : firstly, embryos were bleached and cleared in 1.5% H 2 O 2 in 1% KOH until pigmentation was removed (this was only done for the sample presented in Fig. 4 ), then permeabilized using 10 µg/ml proteinase K in Tris-buffered saline with 0.1% Tween-20 (TBS-T) for 15-20 min; endogenous alkaline phosphatase (AP) activity was then deactivated using a solution of 0.2 M triethanolamine (pH 7.5) with 2.5% acetic anhydride added directly before treatment (for 20 min), and a refixation using 4% PFA for 20 min was performed. In between steps, washes were performed with TBS-T. Subsequently, samples were equilibrated with the hybridization mix at 68 °C for 4 h, followed by overnight hybridization using hybridization mix with 100 ng probe/ml at 68 °C. Samples were then repeatedly washed using a mix of 5x saline sodium citrate (SSC), 50% formamide and 2% Tween-20 at 68 °C, followed by washes in 2x SSC with 0.2% Tween-20 at room temperature. After washes with TBS-T, samples were blocked using blocking buffer for 1.5 h, and then treated with the anti-DIG-AP antibody (1:4000 dilution; Roche, item-nr. 11093274910) in blocking buffer for 5 h at room temperature.
After repeated washing of the samples for 2 days with maleic acid buffer, they were kept in AP buffer (with Levamisol) for 20 min, after which they were moved to BM-Purple (Roche, item-nr. 11442074001) until the desired color intensity was reached, and finally photographed. To investigate the phenotypic consequences of bmp3 loss in a teleost, we used a CRISPR/Cas9 strategy to generate a bmp3 mutant zebrafish line according to Miguel et al. 90 . The bmp3 guide RNA (gRNA) was designed online to target the first exon of zebrafish bmp3 . The gRNA was constructed by overlapping PCR. This method requires a target-specific DNA oligo (top-strand oligo) and a generic DNA oligo for the guide RNA (Supplementary Table 11 ). The target-specific oligo contains a T7 promoter, the target sequence and finally a 20-nt sequence complementary to the guide RNA (Supplementary Table 11 ). The two oligos were annealed and extended with DNA polymerase, and the resulting product served as a template for in vitro transcription using the mMESSAGE mMACHINE™ T7 Transcription Kit (Thermo Fisher Scientific AM1344); the transcribed product was purified using the RNA Clean & Concentrator™-5 (Zymo Research R1014). The pT3TS-nCas9n vector was linearized using the XbaI restriction enzyme (NEB R0145S), and in vitro transcription and purification were performed using the mMESSAGE mMACHINE™ T3 Transcription Kit (Thermo Fisher Scientific AM1348). The transgenic zebrafish parents used in this experiment, labeled with green fluorescent protein for the osteoblast-specific transcription factor (Osterix GFP), were cultured at 26–28 °C under a controlled light cycle (14 h light, 10 h dark) to induce spawning. Purified sgRNAs (80 ng/μl) were co-injected with Cas9 mRNA (400 ng/μl) into zebrafish embryos at the one-cell stage. These founder (F0) fish were raised to maturity, and the genotyping primers (Supplementary Table 11 ) were used to screen for F0 fish with on-target mutations by fin clipping, DNA extraction, PCR spanning the target site and sequencing. Adult F0 fish carrying mutations were outcrossed with wild-type fish to obtain F1 fish, which were subsequently genotyped. F1 fish with the same mutant genotype transmitting a frameshift mutation were inbred to obtain homozygous F2 fish, which were used for further phenotypic observation. Osterix GFP-labeled mutant and wild-type specimens were observed and photographed under a Leica M205 FA Fluorescent Stereo Microscope (Wetzlar, Germany). All experiments were performed in accordance with approved Institutional Animal Care and Use Committee protocols of the scientific ethics committee of Huazhong Agricultural University (HZAUFI-2018-018). As a result, we did not observe any mutant alleles for dre-bmp3-gRNA1, so no stable line was generated for this gRNA. For dre-bmp3-gRNA2, however, two bmp3 nonsense alleles with a 14 bp insertion ( bmp3 +14 ) and a 2 bp deletion ( bmp3 −2 ) in the first exon were generated (Supplementary Fig. 12b ); both caused frameshift mutations at the 69th amino acid (AA) and premature translation termination at the 161st and 94th AA, respectively. In the F2 mutant bmp3 fish, we observed a series of scale defects, such as decreased scale numbers, rearrangements, and irregular shapes. Among the F2 bmp3 +14 mutant fish, 4/29 showed scale defects, whereas 3/31 of the F2 bmp3 −2 mutant fish did. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability All sequencing data generated in this project are available at NCBI under BioProjects PRJNA613175 (PacBio), PRJNA613176 (Hi-C) and PRJNA612146 (Re-sequencing). In addition, processed datasets including custom codes (Datasets 1 – 9 ) are available at Figshare. Source data are provided with this paper. Code availability Custom scripts employed for the analysis of the sequencing data are available at Figshare.
Seahorses are extremely poor swimmers. Surprisingly, however, they can be found in all of the world's oceans. On the basis of almost 360 different seahorse genomes, a group of researchers studied how these special fish were able to spread so successfully worldwide. Based on an evolutionary tree of 21 species it was possible to reconstruct the dispersal routes of seahorses worldwide and to explain where and when new species emerged. The international research collaboration involving the research team led by evolutionary biologist Professor Axel Meyer at the University of Konstanz and researchers from China and Singapore was able to identify factors that led to the success of the seahorse from a developmental biology perspective: its quickness to adapt by, for example, repeatedly evolving spines in the skin and its fast genetic rates of evolution. The results will be published on 17 February 2021 in Nature Communications. Seahorses of the genus Hippocampus emerged about 25 million years ago in the Indo-Pacific region from pipefish, their closest relatives. And while the latter usually swim fairly well, seahorses lack pelvic and tail fins and evolved a prehensile tail instead that can be used, for example, to hold on to seaweed or corals. Early on, they split into two main groups. "One group stayed mainly in the same place, while the other spread all over the world," says Dr. Ralf Schneider, who is now a postdoctoral researcher at the GEOMAR Helmholtz Centre for Ocean Research Kiel, and participated in the study while working as a doctoral researcher in Axel Meyer's research team. In their original home waters of the Indo-Pacific, the remaining species diversified in a unique island environment, while the other group made its way into the Pacific Ocean via Africa, Europe and the Americas. Traveling the world by raft The particularly large amount of data collected for the study enabled the research team to create an especially reliable seahorse tree showing the relationships between species and the global dispersal routes of the seahorse. Evolutionary biologist Dr. Schneider says: "If you compare the relationships between the species to the ocean currents, you notice that seahorses were transported across the oceans." If, for example, they were carried out to sea during storms, they used their grasping tail to hold on to anything they could find, like a piece of algae or a tree trunk. These are places where the animals could survive for a long time. The currents often swept these "rafts" hundreds of kilometers across the ocean before they landed someplace where the seahorses could hop off and find a new home. Since seahorses have been around for more than 25 million years, it was important to factor in that ocean currents have changed over time as tectonic plates have shifted. For example, about 15 million years ago, the Tethys Ocean was almost as large as today's Mediterranean Sea. On the west side, where the Strait of Gibraltar is located today, it connected to the Atlantic Ocean. On the east side, where the Arabian Peninsula is today, it led to the Indian Ocean. Tectonic shifts change ocean currents The researchers were able to underscore, for example, that the seahorses were able to colonize the Tethys Ocean via the Arabian Sea just before the tectonic plates shifted and sealed off the eastern connection. The resulting current flowing westward towards the Atlantic Ocean brought seahorses to North America.
A few million years later, this western connection also closed and the entire Tethys Ocean dried out. Ralf Schneider: "Until now it was unclear whether seahorses in the Atlantic all traced their lineage to species from the Arabian Sea that had traveled south along the east coast of Africa, around the Cape of Good Hope and across the southern Atlantic Ocean to reach South America. We found out that a second lineage of seahorses had done just that, albeit later." Since the research team gathered 20 animal samples from each habitat, it was also possible to measure the genetic variation between individuals. And this generally revealed: The greater the variation, the larger the population. "We can reconstruct the age of a variation based on its type. This makes it possible to calculate the size of the population at different points in time," the evolutionary biologist explains. This calculation reveals that the population that crossed the Atlantic Ocean to North America was very small, supporting the hypothesis that it may have arisen from just a few animals brought there by the ocean's currents while holding on to a raft. The same data also showed that, even today, seahorses from Africa cross the southern Atlantic Ocean and introduce their genetic material into the South American population. Fast and flexible adaptation Seahorses not only spread around the world by traveling with the ocean currents, but they were also surprisingly good at settling in new habitats. Seahorses have greatly modified genomes and, throughout their evolution, they have lost many genes, gained new ones or acquired duplicates. This means: Seahorses change very quickly in comparison to other fish. This is probably why different types of "bony spines," which protect seahorses from predation in some habitats, evolved quickly and independently of each other. Some of the genes exhibiting particular modifications in certain species have been identified, but they are not the same for all species. Multiple fast and independent selections led to the development of spines, and although the same genes play a role in this development, different mutations were responsible. This shows that the slower, sessile seahorses were particularly able to adapt quickly to their environments. This is one of the main reasons the research team gives for seahorses being so successful in colonizing new habitats.
10.1038/s41467-021-21379-x
Physics
Researchers present new multifunctional topological insulator material with combined superconductivity
Yan, B., Jansen, M. and Felser, C. A large-energy-gap oxide topological insulator based on the superconductor BaBiO3, Nature Physics, 22 September 2013. DOI: 10.1038/nphys2762 Journal information: Nature Physics
http://dx.doi.org/10.1038/nphys2762
https://phys.org/news/2013-09-multifunctional-topological-insulator-material-combined.html
Abstract Topological insulators are a new class of quantum materials that are characterized by robust topological surface states (TSSs) inside the bulk insulating gap 1 , 2 , which hold great potential for applications in quantum information and spintronics as well as thermoelectrics. One major obstacle is the relatively small size of the bulk bandgap, which is typically around 0.3 eV for the known topological insulator materials (ref. 3 and references therein). Here we demonstrate through ab initio calculations that the known superconductor BaBiO 3 (BBO), with a T c of nearly 30 K (refs 4 , 5 ), emerges as a topological insulator in the electron-doped region. BBO exhibits a large topological energy gap of 0.7 eV, inside which Dirac-type TSSs exist. As the first oxide topological insulator, BBO is naturally stable against surface oxidization and degradation, distinct from chalcogenide topological insulators 6 , 7 , 8 . An extra advantage of BBO lies in its ability to serve as an interface between TSSs and superconductors to realize Majorana fermions for future applications in quantum computation 9 . Main Mixed-valent perovskite oxides based on BBO (refs 4 , 5 ) are, like cuprates, well-known superconductors. The parent compound BBO crystallizes in a monoclinic lattice 10 that is distorted from the perovskite structure, and this distortion is attributed to the coexistence of two valence states, Bi 3+ (6 s 2 ) and Bi 5+ (6 s 0 ), due to charge disproportionation of the formal Bi 4+ . Octahedral BiO 6 breathes out and in for Bi 3+ and Bi 5+ , respectively 10 . Under hole-doping conditions, such as in Ba 1− x K x BiO 3 ( x ∼ 0.4; ref. 5 ) and BaBi 1− x Pb x O 3 ( x ∼ 0.3; refs 4 , 11 ), the breathing distortion is suppressed, resulting in a simple perovskite lattice 12 in which superconductivity emerges. Recent ab initio calculations 13 have assigned the higher T c superconductivity to a correlation-enhanced electron–phonon coupling mechanism, stimulating the prediction and synthesis of new superconductor candidates among mixed-valent thallium perovskites 14 , 15 , 16 . The existing superconductivity has meant that research has mainly focused on hole-doped compounds, leaving electron-doped compounds relatively unexplored. In addition, the spin–orbit coupling (SOC) effect was not taken into account in previous theoretical studies (ref. 13 and references therein), because the electronic states in the superconducting (hole-doped) region mainly result from Bi- 6 s and O- 2 p orbitals, whose SOC effect is usually negligible. By including the SOC effect in density-functional theory (DFT) calculations of the BBO band structure, we discovered a band inversion between the first (Bi- 6 s state) and second (Bi- 6 p state) conduction bands, which is stable against lattice distortions. This inversion indicates that BBO is a three-dimensional topological insulator with a large indirect energy gap of 0.7 eV when doped by electrons instead of holes. The band structure of ideal cubic BBO reveals that the conduction bands are modified markedly when SOC is included owing to the presence of the Bi- 6 p states, as illustrated in Fig. 1a . The first conduction band crossing the Fermi energy ( E F ) has a considerable Bi- 6 s contribution over the whole Brillouin zone, except at the R momentum point, where the Bi- 6 p contribution is dominant with the Bi- 6 s lying above it.
Although one can see an inversion between Bi- 6 p and 6 s states here, there is a zero energy gap at R without SOC because of the degeneracy of the p states. This feature was in fact already evident in previous literature that did not employ SOC. When SOC is included, we found that the | p , j = 3/2〉 and | p , j = 1/2〉 states split, which results in the large indirect energy gap of 0.7 eV in the vicinity of the R point. We point out that the band inversion strength is as large as nearly 2 eV, which is the energy difference between the Bi- 6 s and | p , j = 1/2〉 states at the R point, as shown in Fig. 1b . Unlike bulk HgTe (ref. 17 ), a well-known topological insulator, this inversion occurs between the | s , j = 1/2〉 state and the | p , j = 1/2〉 state, rather than the | p , j = 3/2〉 state. As the Bi atom is the inversion centre of the perovskite lattice, the Bi- 6 s and Bi- 6 p states have + and − parities, respectively. Thus, a topological insulator state can be obtained if E F is shifted up into this energy gap. The parities of all the valence bands below this gap were also calculated at all time-reversal invariant momenta, Γ, X, M and R, which yielded Z 2 topological invariants (1;111), confirming the topologically non-trivial character according to the parity criteria 18 . This is also consistent with a previous study of a topological insulator phase with Z 2 (1;111) in the perovskite lattice based on a model Hamiltonian 19 . At a doping rate of one electron per formula unit, E F shifts inside the s – p inversion gap, and all the Bi ions become Bi 3+ . Consequently, a cubic phase appears when the BiO 6 breathing distortion is suppressed, similar to the hole-doping case 12 . When the lone-pair Bi- 6 s state is fully occupied, we found that the new cubic lattice expands slightly in comparison with the undoped lattice. Although the s band becomes narrower in this case, the band inversion remains owing to the large s – p inversion strength (see Supplementary Fig. S1 ). Figure 1: Crystal structures and band structures of BBO. a , Ideal cubic perovskite lattice with the cubic Brillouin zone, and the band inversion process. The Bi atom is represented by the purple ball, O atoms by red balls and Ba atoms by green balls. Without SOC the Bi- s and Bi- p states are already inverted, resulting in two degenerate Bi- p bands at the R point. Subsequently, SOC splits this degeneracy and opens a large energy gap. b , Bulk band structure of the cubic lattice. The dispersions are shown along the high-symmetry lines Γ(0 0 0)–X(0.5 0 0)–M(0.5 0.5 0)–R(0.5 0.5 0.5)–Γ, as labelled in the cubic Brillouin zone at the top of a . The Fermi energy is shifted to zero. The red and green dots indicate the Bi- s and Bi- p states, respectively, and the corresponding parities are labelled. c , Surface band structure of an electron-doped BBO. The surface normal is along the (001) direction. Dispersions are along (0 0)–(0.5 0.5)–(0.5 0) in the surface Brillouin zone. The red lines highlight the topological surface states inside the bulk bandgap. On the right of c is the three-dimensional plot of the surface Dirac cone near the point with helical spin textures. d , Bulk band structure and lattice structure of the monoclinic BaBiO 3 . The original R point of a cubic Brillouin zone in a is projected to the Γ point of the monoclinic Brillouin zone in d . Thus the band inversion exists at the Γ point in the monoclinic band structure, where Bi- s and Bi- p states are indicated. The effect of SOC is included in band structures b – d .
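The parity criterion invoked above can be illustrated with a short Python sketch. Assuming, consistent with the text, that the band inversion places a negative parity product only at R, with positive products at the other seven time-reversal-invariant momenta (TRIM) of the cubic Brillouin zone (these parity assignments are our assumption for illustration), the Fu–Kane products reproduce the Z2 invariants (1;111):

# Minimal sketch of the Fu-Kane parity criterion: delta[k] is the product of
# parity eigenvalues of the occupied Kramers pairs at each TRIM; the strong
# index nu0 satisfies (-1)^nu0 = product over all eight TRIM, and each weak
# index nu_i is the product over the four TRIM in the plane k_i = pi.
import itertools
from math import prod

trims = list(itertools.product([0, 1], repeat=3))         # TRIM, in units of pi
delta = {k: -1 if k == (1, 1, 1) else +1 for k in trims}  # inversion only at R (assumed)

nu0 = 0 if prod(delta.values()) == 1 else 1
weak = [0 if prod(d for k, d in delta.items() if k[i] == 1) == 1 else 1
        for i in range(3)]
print(f"Z2 = ({nu0};{''.join(map(str, weak))})")          # -> Z2 = (1;111)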
To illustrate the TSSs, we calculated the surface band structure using a slab model based on the Wannier functions extracted from the electron-doped bulk band structures. As an example, we take the surface to be oriented along the (001) direction, on which the bulk R point is projected onto the point (0.5, 0.5, 0) of the surface Brillouin zone. The slab is 30 BBO units thick with the outermost atomic layers being Ba–O. The TSSs, shown in Fig. 1c , exhibit a simple Dirac-cone-like energy dispersion. The Dirac cone exhibits square warping at higher energies due to the cubic symmetry of the lattice. The Fermi surface below the Dirac point exhibits a right-hand helical spin-texture on the top surface, similar to that of Bi 2 Se 3 -type topological insulator materials 20 . The spin polarization orients dominantly inside the surface plane with negligible out-of-plane components. The Fermi velocity near the Dirac point is estimated to be approximately 0.75×10 5 m s −1 , and inside the large bulk energy gap, the TSSs are well localized on the surface atomic layers to about two BBO units or around 1 nm in thickness. On the other hand, to obtain the minimal effective model of the band topology, we derive a four-band Hamiltonian similar to that for Bi 2 Se 3 (ref. 6 ) in the basis of | p ; j = 1/2, m j = +1/2〉, | s ; j = 1/2, m j = +1/2〉, | p ; j = 1/2, m j = −1/2〉, and | s ; j = 1/2, m j = −1/2〉: H( k ) = ε 0 ( k ) + [[M( k ), A k z , 0, A k − ], [A k z , −M( k ), A k − , 0], [0, A k + , M( k ), −A k z ], [A k + , 0, −A k z , −M( k )]] (1) where k = k 0 − k R (0.5, 0.5, 0.5) is centred at the R point, k ± = k x ± i k y , ε 0 ( k ) = C + D k 2 and M( k ) = M + B k 2 . The main difference from the Bi 2 Se 3 Hamiltonian is that equation (1) is isotropic in k owing to the cubic symmetry. We obtain the parameters of equation (1) by fitting the energy spectrum of the effective Hamiltonian to that of the ab initio calculations for the electron-doped cubic BBO using M = −0.625 eV, A = 2.5 eV Å, B = −9.0 eV Å 2 and D = 1.5 eV Å 2 . Subsequently, the Fermi velocity of the TSSs is given by v = A / ℏ ≃ 0.5×10 5 m s −1 , which is consistent with the ab initio calculations. We can confirm that the topological insulator phase is stable against lattice distortions. The monoclinic phase of BBO, as shown in Fig. 1d , is related to the O-breathing and -tilting distortions. In the monoclinic Brillouin zone, the original R point of a cubic lattice is projected to the Γ point owing to band folding. One can see that the s – p band inversion at this Γ point is still present and the indirect gap is unchanged (0.7 eV) in the bulk band structure. In addition, traditional ab initio DFT calculations may overestimate the band inversion owing to the underestimation of the bandgap. Therefore, we performed band structure calculations using the hybrid functional method 21 , which is known to treat the dynamical correlation effect well for BBO (ref. 22 ). Here, we further validated the existence of band inversion for pristine cubic, electron-doped cubic and monoclinic distorted structures. (Details are described in the Supplementary Information .) Experimentally, electron-doped BBO may be achieved in BaBi(O 0.67 F 0.33 ) 3 by substituting F for O atoms. The O and F atoms have comparable atomic radii and electronegativities, which can keep the octahedral BiO 6 stable. For example, F substitution for O was applied to the iron-based superconductor LaOFeAs to realize electron doping 23 .
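As a numerical illustration of the effective model, the following Python sketch diagonalizes a four-band k·p Hamiltonian with the fitted parameters quoted above. The matrix structure is the standard isotropic Bi2Se3-type form assumed in the reconstruction of equation (1); this is a sketch under that assumption, not the authors' code:

# Minimal k.p sketch: build and diagonalize the isotropic four-band model with
# M = -0.625 eV, A = 2.5 eV*Angstrom, B = -9.0 eV*Angstrom^2, D = 1.5 eV*Angstrom^2.
# C only shifts all bands rigidly and is set to 0 here.
import numpy as np

M, A, B, D, C = -0.625, 2.5, -9.0, 1.5, 0.0

def hamiltonian(kx, ky, kz):
    k2 = kx**2 + ky**2 + kz**2
    eps, Mk = C + D * k2, M + B * k2
    km, kp = kx - 1j * ky, kx + 1j * ky
    H = np.array([[ Mk,      A * kz,  0,       A * km],
                  [ A * kz, -Mk,      A * km,  0     ],
                  [ 0,       A * kp,  Mk,     -A * kz],
                  [ A * kp,  0,      -A * kz, -Mk    ]], dtype=complex)
    return eps * np.eye(4) + H

# At k = 0 (the R point) the eigenvalues are +/-M, doubly degenerate,
# giving a direct gap of 2|M| = 1.25 eV in this model.
E = np.linalg.eigvalsh(hamiltonian(0.0, 0.0, 0.0))
print(E)  # [-0.625, -0.625, 0.625, 0.625]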
It is also possible to apply a state-of-the-art electrolyte gating technique to BBO to induce heavy electron doping, which has been realized for several mixed-valent compounds such as ZrNCl (ref. 24 ) and VO 2 (ref. 25 ). In particular, the electrolyte gating of VO 2 leads to the creation of oxygen vacancies, which induce a stable metallic phase even when removing the electrolyte 25 . As in the VO 2 case, we expect that electrolyte gating can also reach large electron doping by generating considerable oxygen vacancies, which were commonly observed as electron donors for BBO in previous experiments 26 . On the other hand, although the TSSs are unoccupied in pristine BBO compounds, it may be possible to monitor these states directly through monochromatic two-photon photoemission, as was recently employed to monitor the empty TSSs of Bi chalcogenides 27 . Thus far we can state that BBO becomes a superconductor with hole doping and a potential topological insulator with electron doping. If pn-junction-type devices are fabricated with BBO, an interface between the TSSs and the superconductor may be realized, which is necessary for the realization of the Majorana fermion proposal 9 for quantum computation. Here, we outline a double-gated thin-film configuration, as illustrated in Fig. 2 . If the bottom and top regions of the film are predoped as p and n type, respectively, the double-gated structure may feasibly induce a hole-rich bottom surface and an electron-rich top surface, resulting in TSSs and superconductivity states on the top and bottom surfaces, respectively. In the middle region of the slab, the TSSs overlap with the bulk bands and penetrate the bulk. These TSSs can then become superconducting as a result of the proximity effect with the bottom superconducting regime. Such a structure is likely to be attainable, as high-quality BBO thin films have been successfully grown on SrTiO 3 (refs 28 , 29 , 30 ) and MgO (ref. 30 ) substrates. Moreover, the O-tilting lattice distortion was recently found to be suppressed in a BBO(001) thin film on MgO (ref. 30 ), which is very close to our required cubic structure. Figure 2: Schematic of the interface between the topological insulator and superconducting state in a double-gated thin-film device. The top and bottom surfaces are the topological insulator and superconducting regions, respectively. The position of the Fermi energy (dashed lines) shifts down from the top to bottom surfaces in the band structure. In the middle region, TSSs are interfaced with superconducting states and become superconducting owing to the proximity effect. The band structure of BaBiO 3 can act as a prototype for designing new perovskite topological insulators. Sc, Y or La can be substituted for Ba to obtain new compounds as analogues of electron-doped BBO. We found in calculations that a similar band inversion exists in this case. However, these compounds are semimetals (the Sc/Y/La- d orbitals are lower in energy than the Bi- p states) and instead form topological semimetals (see Supplementary Information ). In contrast, CsTlCl 3 -type halide perovskites, which are predicted to be superconductor candidates 14 , 15 , 16 , have band structures that are similar to BBO. However, we did not observe s – p inversion for ATlX 3 (A = Cs, Rb; X = F, Cl, Br, I), because the SOC of Tl is not strong enough.
When we substitute Sn or Pb for Tl, we find that heavier members of this family, such as CsPbI 3 , lie near the boundary of a trivial–non-trivial topological phase transition. Compressive pressure is necessary to drive these boundary materials into the topological insulator region, which is consistent with recent theoretical calculations of these halides 31 . Methods In band structure calculations, we employed ab initio DFT with the generalized gradient approximation, using the Vienna ab initio simulation package with a plane wave basis 32 . The core electrons were represented by the projector-augmented-wave potential. For hybrid-functional calculations, we adopted the HSE06 (ref. 21 ) functional and interpolated the band structures using Wannier functions 33 , where the DFT wavefunctions were projected onto Bi- s p , Ba- d and O- p orbitals. We adopted the lattice constants from their experimental values for both the cubic 34 ( a = 4.35 Å) and monoclinic structures 10 .
Most materials serve a single function: a material can be a metal, a semiconductor, or an insulator. Metals such as copper are used as conducting wires with only low resistance and energy loss. Superconductors are metals which can conduct current even without any resistance, although only far below room temperature. Semiconductors, the foundation of current computer technology, show only low conduction of current, while insulators show no conductivity at all. Physicists have recently been excited about a new exotic type of material, the so-called topological insulator. A topological insulator is insulating inside the bulk like a normal insulator, while on the surface it shows conductivity like a metal. When a topological insulator is interfaced with a superconductor, a mysterious particle called a Majorana fermion emerges, which can be used to fabricate a quantum computer that can run much more quickly than any current computer. Searching for Majorana fermions based on a topological insulator–superconductor interface has thus very recently become a hot race. Computer-based materials design has demonstrated its power in scientific research, saving resources and also accelerating the search for new materials for specific purposes. By employing state-of-the-art materials design methods, Dr. Binghai Yan and his collaborators from the Max Planck Institute for Chemical Physics of Solids and Johannes Gutenberg University Mainz (JGU) have recently predicted that the oxide compound BaBiO3 combines two required properties, i.e., topological insulator and superconductivity. This material has been known for about thirty years as a high-temperature superconductor with a Tc of nearly 30 Kelvin under p-type doping. Now it has been discovered to be also a topological insulator with n-type doping. A simple p-n junction type of device, assisted by gating or electrolyte gating, is proposed to realize Majorana fermions for quantum computation, which does not require a complex interface between two materials. In addition to their options for use in quantum computers, topological insulators hold great potential for applications in the emerging technologies of spintronics and thermoelectrics for energy harvesting. One major obstacle for widespread application is the relatively small size of the bulk band gap, which is typically around 0.3 electron-volts (eV) for previously known topological insulator materials. The newly identified material exhibits a much larger energy gap of 0.7 eV. Inside the energy gap, metallic topological surface states exist with a Dirac-cone type of band structure. The research leading to the recent publication in Nature Physics was performed by a team of researchers from Dresden and Mainz around the theoretical physicist Dr. Binghai Yan and the experimental chemists Professor Martin Jansen and Professor Claudia Felser. "Now we are trying to synthesize n-type doped BaBiO3," said Jansen. "And we hope to be soon able to realize our idea."
10.1038/nphys2762
Biology
Large number of stem cell lines carry significant DNA damage, say researchers
Serena Nik-Zainal, Substantial somatic genomic variation and selection for BCOR mutations in human induced pluripotent stem cells, Nature Genetics (2022). DOI: 10.1038/s41588-022-01147-3. www.nature.com/articles/s41588-022-01147-3 Journal information: Nature Genetics
https://dx.doi.org/10.1038/s41588-022-01147-3
https://phys.org/news/2022-08-large-stem-cell-lines-significant.html
Abstract We explored human induced pluripotent stem cells (hiPSCs) derived from different tissues to gain insights into genomic integrity at single-nucleotide resolution. We used genome sequencing data from two large hiPSC repositories involving 696 hiPSCs and daughter subclones. We find ultraviolet light (UV)-related damage in ~72% of skin fibroblast-derived hiPSCs (F-hiPSCs), occasionally resulting in substantial mutagenesis (up to 15 mutations per megabase). We demonstrate remarkable genomic heterogeneity between independent F-hiPSC clones derived during the same round of reprogramming due to oligoclonal fibroblast populations. In contrast, blood-derived hiPSCs (B-hiPSCs) had fewer mutations and no UV damage but a high prevalence of acquired BCOR mutations (26.9% of lines). We reveal strong selection pressure for BCOR mutations in F-hiPSCs and B-hiPSCs and provide evidence that they arise in vitro. Directed differentiation of hiPSCs and RNA sequencing showed that BCOR mutations have functional consequences. Our work strongly suggests that detailed nucleotide-resolution characterization is essential before using hiPSCs. Main In regenerative medicine, human induced pluripotent stem cells (hiPSCs) and latterly organoids have become attractive model systems because they can be propagated and differentiated into many cell types. Specifically, hiPSCs have been adopted as a cellular model of choice for in vitro disease modeling as well as being considered for cell-based therapies 1 , 2 , 3 . The genomic integrity and tumorigenic potential of human pluripotent stem cells have been explored previously, but systematic large-scale, whole-genome assessments of mutagenesis at single-nucleotide resolution have been limited 4 , 5 , 6 , 7 , 8 . Human embryonic stem cells (hESCs) cultured in vitro have been reported to harbor TP53 mutations and recurrent chromosomal-scale genomic abnormalities ascribed to selection pressure 9 , 10 , 11 , 12 , 13 , 14 . However, in contrast, a recent study showed a low mutation burden in clinical-grade hESCs, and no cancer driver mutations were detected 15 . The mutational burden in any given hiPSC comprises mutations that were preexisting in the parental somatic cells from which it was derived and mutations that have accumulated over the course of reprogramming, cell culture and passaging 7 , 16 , 17 , 18 , 19 , 20 , 21 , 22 . Several small-scale genomic studies have shown that in some cell lines, preexisting somatic mutations make up a substantial proportion of the total burden 22 , 23 , 24 , 25 , 26 , 27 , 28 . With the advent of clinical trials using hiPSCs (e.g., NCT04339764 ) comes the need to gain an in-depth understanding of the mutational landscape and potential risks of using these cells 29 , 30 . Here we contrast skin-derived hiPSCs (F-hiPSCs) and blood-derived hiPSCs (B-hiPSCs) from one individual. We then comprehensively assess hiPSCs from one of the world’s largest stem cell banks, HipSci, and an alternative cohort called Insignia. All lines had been karyotypically prescreened and deemed chromosomally stable. We utilized combinations of whole-genome sequencing (WGS) and whole-exome sequencing (WES) of 555 hiPSC samples and 141 B-hiPSC-derived subclones (Supplementary Table 1 ) to understand the extent and origin of genomic damage and the possible implications.
Results Genomic variations in skin- and blood-derived hiPSCs To first understand the extent to which the source of somatic cells used to make hiPSCs impacted on mutational load, we compared genomic variation in two independent F-hiPSCs and two independent B-hiPSCs from a 22-year-old healthy adult male (S2) (Fig. 1a ). F-hiPSCs were derived from skin fibroblasts, and B-hiPSCs were derived from peripheral blood endothelial progenitor cells (EPCs). Additionally, we derived F-hiPSCs and B-hiPSCs from six healthy males (S7, oaqd, paab, yemz, qorq and quls) and four healthy females (iudw, laey, eipl and fawm) (Fig. 1a ). Fig. 1: Comparison of mutation burden in EPC-derived and F-hiPSCs. a , Source of hiPSCs. Multiple hiPSC lines created from patient S2 contrasted to fibroblast- and EPC-derived hiPSCs created from ten other individuals. b , Mutation burden of substitutions, double substitutions (first row), substitution types (second row), skin-derived signatures (third row), indel types (fourth row) and rearrangements (lowest row). Supplementary Table 2 provides source information. UV-specific features, such as elevated CC>TT double substitutions and UV mutational signatures, were enriched in F-hiPSCs. WGS analysis revealed a greater number of mutations in F-hiPSCs as compared to B-hiPSCs in the individual S2 (~4.4-fold increase), and in lines derived from the other ten donors (Fig. 1b and Supplementary Table 2 ). There were very few structural variants (SVs) observed; thus, chromosomal-scale aberrations did not distinguish between F-hiPSCs and B-hiPSCs (Supplementary Table 2 ). We noted considerable heterogeneity in the total numbers of mutations between sister hiPSCs from the same donor, S2; one F-hiPSC line (S2_SF3_P2) had 8,171 single substitutions, 1,879 double substitutions and 226 indels, whereas the other F-hiPSC line (S2_SF2_P2) had 1,873 single substitutions, 17 double substitutions and 71 indels (Fig. 1b ). Mutational signature analysis demonstrated a striking predominance of the UV-associated substitution signature 7 (COSMIC Reference Signatures; ref. 31 ) in the F-hiPSCs, characterized by C>T transitions at T C A, T C C and T C T (Fig. 1b and Extended Data Fig. 1 ). This finding is consistent with previously published work that attributed UV signatures in hiPSCs to preexisting damage in parental skin fibroblasts 8 , 32 . In contrast, EPC-derived B-hiPSCs did not show any evidence of UV damage but showed patterns consistent with possible oxidative damage (signature 18, characterized by C>A mutations at T C T, G C A and A C A; Fig. 1 and Extended Data Fig. 1 ). Consistent with in vitro studies 33 , 34 , double substitutions were enriched in UV-damaged F-hiPSCs (Fig. 1b ). In all, we concluded that F-hiPSCs carry UV-related genomic damage as a result of sunlight exposure in vivo that does not manifest in EPC-derived B-hiPSCs. Importantly, screening for copy-number aberrations underestimated the substantial substitution/indel-based variation that exists in hiPSCs. High prevalence of UV-associated DNA damage in F-hiPSCs We asked whether these findings were applicable across F-hiPSC lines generally. Therefore, we interrogated all lines in the HipSci stem cell bank, comprising 452 F-hiPSCs generated from 288 healthy individuals (Fig. 2a and Supplementary Table 3 ).
These F-hiPSC lines were generated using Sendai virus, cultured on irradiated mouse embryonic fibroblast feeder cells, which have been reported to help reduce genetic instability 20 , and were expanded before sequencing (range, 7–46 passages; median, 18 passages; Extended Data Fig. 2 ) 35 . WGS data were available for 324 F-hiPSC and matched fibroblast lines (Fig. 2a ). We sought somatic mutations and identified 1,365,372 substitutions, 135,299 CC>TT double substitutions and 54,390 indels. Large variations in mutation distribution were noted, including 692–37,120 substitutions, 0–7,864 CC>TT double substitutions and 17–641 indels per sample (Fig. 2b and Supplementary Table 4 ). Fig. 2: Mutation burden and mutational signatures in F-hiPSCs. a , Summary of the HipSci F-hiPSC dataset. A total of 452 F-hiPSCs were generated from 288 healthy donors. A total of 324 hiPSCs were whole-genome sequenced (coverage, 41×), 381 were whole-exome sequenced (coverage, 72×), and 106 had their matched fibroblasts whole-exome sequenced at high coverage (hcWES; coverage, 271×). Supplementary Table 3 provides source information. Numbers within black circles denote the number of F-hiPSCs that had data from multiple sequencing experiments. b , Mutation burden of substitutions (subs), CC>TT double substitutions and indels in F-hiPSCs from WGS. A data summary is provided in Supplementary Table 4 . Black dots and error bars represent mean ± standard deviation of hiPSC observations, n = 324 WGS of hiPSCs. c , Distribution of mutational signatures in 324 F-hiPSC lines. The inset figure shows the relative exposures of mutational signature types. d , Relationship between the mutation burden of CC>TT double substitutions and the UV-caused substitution burden in F-hiPSCs, n = 324 WGS of hiPSCs. e , f , Histograms of aggregated mutation burden on transcribed (red) and nontranscribed (cyan) strands for C>T ( e ) and CC>TT ( f ), n = 324 WGS of hiPSCs. Transcriptional strand asymmetry across replication timing regions was observed. g , h , Distribution of substitution burden of F-hiPSCs with respect to donor’s age ( g ) and gender ( h ). Mutational signature analysis 31 revealed that 72% of F-hiPSCs carried detectable substitution signatures of UV damage (Fig. 2c ). hiPSCs with a greater burden of UV-associated substitution signatures showed strong positive correlations with UV-associated CC>TT double substitutions 36 , 37 (Fig. 2d and Extended Data Fig. 3 ) and demonstrated clear transcriptional strand bias with an excess of C>T and CC>TT on the nontranscribed strand, enriched more in early replication timing domains 38 , 39 , 40 , 41 than in late ones (Fig. 2e,f ). These findings are consistent with previous reports of UV-related mutagenesis observed in fibroblasts 36 , 37 , 38 , 39 , 40 , 41 . Of note, similar to findings of UV damage in skin 42 , there was no correlation between the total mutation burden of F-hiPSCs and donor age or gender (Fig. 2g,h ). Substantial genomic heterogeneity between F-hiPSC clones F-hiPSCs comprise the majority of hiPSCs in stem cell banks globally and are a prime candidate for use in disease modeling and cell-based therapies. Yet, we and others observed substantial heterogeneity between hiPSC sister lines generated from one reprogramming experiment (Fig. 1b , subject S2) 5 , 6 , 23 , 32 . It has been postulated that this heterogeneity may result from the presence of genetically diverse clones within the fibroblasts.
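The pairwise profile comparison that follows relies on cosine similarity between mutation spectra. As an illustration (with randomly generated placeholder profiles rather than the HipSci data), the metric can be computed as follows in Python; a value above 0.9 was treated as "similar" in the analysis below:

# Cosine similarity between two 96-channel trinucleotide substitution profiles.
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
p1 = rng.random(96)                                  # hypothetical profile
p2 = np.clip(p1 + rng.normal(0, 0.05, 96), 0, None)  # slightly perturbed copy
print(round(cosine_similarity(p1, p2), 3))           # close to 1, i.e. "similar"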
To explore this further, we compared the mutational profiles of 118 pairs of F-hiPSCs present in HipSci, each pair having resulted from the same reprogramming experiment. In all, 54 pairs (46%) of hiPSCs shared more than ten mutations and had similar substitution numbers and profiles (cosine similarity >0.9; Fig. 3a ). The remaining 64 hiPSC pairs (54%) shared ten or fewer substitutions and were dissimilar in burden and profile (cosine similarity <0.9; Fig. 3a ). We found some striking differences; for example, the F-hiPSC line HPSI0314i-bubh_1, derived from donor HPSI0314i-bubh, had 900 substitutions with no UV signature, whereas HPSI0314i-bubh_3 had 11,000 substitutions with representation mostly from UV-associated damage (>90%). Analysis of the parental fibroblast line HPSI0314i-bubh showed some, albeit reduced, evidence of UV-associated mutagenesis. Hence, we postulate that F-hiPSCs from the same reprogramming experiment could show considerable variation in mutation burden because of different levels of sunlight exposure to each parental skin fibroblast. Fig. 3: Genomic heterogeneity in F-hiPSCs. a , Comparison of substitutions carried by pairs of hiPSCs that were derived from fibroblasts of the same donor and in the same reprogramming experiment. Each circle represents a single donor. Hollow circles indicate that two hiPSCs from the same donor share fewer than ten mutations, whereas filled circles indicate that they share ten or more mutations. Colors denote cosine similarity values between the mutation profiles of two hiPSCs. A very high score (purple hues) indicates a strong likeness. n = 164 fibroblasts with two derived hiPSCs. b , Correlations between the number of mutations in hiPSCs and their matched fibroblasts, n = 324 WGS of hiPSCs. c , Summary of subclonal clusters in fibroblasts ( n = 204 WGS of fibroblasts) and hiPSCs ( n = 324 WGS of hiPSCs). Kernel density estimation was used to smooth the distribution. Local maxima and minima were calculated to identify subclonal clusters. Each dot represents a cluster that has at least 10% of total mutations in the sample. Most fibroblasts are polyclonal with a cluster VAF near 0.25, whereas hiPSCs are mostly clonal, with VAF near 0.5. d , Schematic illustration of genomic heterogeneity in F-hiPSCs. In bulk sequencing of fibroblasts, all cells will carry the mutations that were present in the gray cell, their most recent common ancestor. The individual cells will also carry their own unique mutations, depending on the DNA damage received by each cell. Each hiPSC clone is derived from a single cell. Subclone 1 and subclone 2 cells are more closely related and may share many mutations, because they share a more recent common ancestor. However, they are distinct cells and will create separate hiPSC clones. Subclone 6 (orange cells) and subclone 12 (red cells) are not closely related to the green cells and have received more DNA damage from UV, making them genomically divergent from the green cells. They could still share some mutations, because they shared a common ancestor at some early point, but will have many of their own unique mutations (largely due to UV damage). To investigate further, we analyzed bulk-sequenced skin fibroblasts, which revealed high burdens of substitutions, CC>TT double substitutions and indels (Extended Data Fig. 4a ) consistent with UV exposure in the majority of fibroblasts (166/204) (Extended Data Fig. 4b ).
Mutation burdens of F-hiPSCs were positively correlated with those of their matched fibroblasts (Fig. 3b ), directly implicating the fibroblast population as the root cause of F-hiPSC mutation burden and heterogeneity. Investigating variant allele frequency (VAF) distributions of somatic mutations in F-hiPSCs and fibroblasts demonstrated that most F-hiPSC populations were clonal (VAF = 0.5), whereas most fibroblast populations showed oligoclonality (VAF < 0.5) (Fig. 3c and Extended Data Fig. 5 ). At least two peaks were observed in the VAF distributions of fibroblasts: a peak close to VAF = 0 (representing the neutral tail due to accumulation of mutations, which follows a power law distribution) and a peak close to VAF = 0.25 (representing the subclones in the fibroblast population). From the mean VAF of the cluster, we can estimate the relative size of the subclones (e.g., VAF = 0.25 indicates that the subclone occupies half of the fibroblast population 43 ). According to this principle, we found that ~68% of fibroblasts contain oligoclonal populations with a VAF < 0.25. The mutational burden of F-hiPSCs is thus dependent on which specific cells they were derived from in that oligoclonal fibroblast population. F-hiPSCs derived from the same subclone will be more similar to each other, whereas hiPSCs derived from different subclones within the fibroblast population could have hugely different mutation burdens (Fig. 3d ). There are important implications that arise from the subclonal heterogeneity observed in fibroblasts. First, when detecting somatic mutations, it is preferable to compare the F-hiPSC genome to a matched germline sample if possible; otherwise, F-hiPSC mutations that are also present in a prominent fibroblast subclone will be dismissed as germline variants, giving a false sense of low DNA damage in F-hiPSCs (Supplementary Fig. 1 ). Indeed, ~95% of HipSci F-hiPSCs had some shared mutations with matched fibroblasts (Extended Data Fig. 6 ) that demonstrated a strong UV signature (Extended Data Fig. 7 ). Second, some F-hiPSC mutations may be present in the parental fibroblast population but not detected through lack of sequencing depth. In comparing WES data of the originating fibroblasts at standard and at high coverage (hcWES), we found that increased sequencing depth uncovered additional coding mutations that had been acquired in vivo; WES data showed that 47% of coding mutations detected in hiPSCs were shared with matched fibroblasts, compared to 64% using hcWES (Extended Data Fig. 8 ). The additional 17% of mutations identified only in hcWES exhibited a strong UV substitution signature (Extended Data Fig. 8 ), suggesting that they may have been acquired in vivo and have been present within the parental fibroblast population but undetected at standard sequence coverage. Given recent sequencing studies that have demonstrated a high level of cancer-associated mutations in normal cells 37 , 44 , 45 , 46 , 47 , 48 , it is therefore probable that some mutations identified in hiPSCs but not detected in corresponding fibroblasts were still acquired in vivo and not during cell culture. Third, this work highlights the need for careful clone selection and comprehensive genomic characterization, as reprogramming can produce hiPSC clones with vastly different genetic landscapes.
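The VAF-based subclonality analysis described above can be illustrated with a minimal Python sketch that uses kernel density estimation to locate clusters (the VAFs are simulated; the study's own smoothing and cluster-size thresholds may differ):

# Minimal sketch: find VAF clusters by KDE, as in the subclonality analysis
# above (clonal hiPSC peak near VAF = 0.5; fibroblast subclone peaks near 0.25).
import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import argrelextrema

rng = np.random.default_rng(1)
vafs = np.concatenate([rng.normal(0.25, 0.03, 300),   # subclone cluster
                       rng.normal(0.05, 0.02, 300)])  # neutral low-VAF tail
vafs = vafs[(vafs > 0) & (vafs < 1)]

grid = np.linspace(0, 1, 512)
density = gaussian_kde(vafs)(grid)
peaks = grid[argrelextrema(density, np.greater)[0]]   # local maxima of the KDE
print(peaks.round(2))  # expect peaks near 0.05 and 0.25
# For a diploid, heterozygous mutation, clone fraction ~= 2 * VAF, so a peak
# at ~0.25 implies the subclone spans roughly half of the cell population.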
Strong selection for BCOR mutations in hiPSCs When we mapped hiPSC coding mutations to the COSMIC Cancer Gene Census (a proxy for genes that are selected for), we found a total of 272 mutations in 145 of these cancer genes, across 177 lines (177/452, 39%) from 137 donors. However, it is possible that some of these mutations are passenger mutations, given the heavy mutagenesis from UV damage in some hiPSCs. Hence, to determine the potential selective advantage of coding mutations to F-hiPSCs, we conducted an agnostic analysis of selective pressure based on the ratio of non-synonymous to synonymous substitutions (dN/dS), together with hotspot analysis, across the HipSci F-hiPSC cohort, examining all genes in the genome 49 . Although several cancer genes were hit across multiple F-hiPSC lines (Supplementary Table 5 ), only BCOR was found to demonstrate statistically significant positive selection ( q = 3.64 × 10 −8 ; Fig. 4a and Supplementary Table 6 ) using dNdScv 49 . To increase the statistical power for detection of selection, we also performed a restricted hypothesis test of known cancer genes 49 , which still revealed only BCOR mutations as significant. Interestingly, all the BCOR mutations found in 11 F-hiPSCs were truncating mutations and predicted to be pathogenic (Fig. 4b ). No reads containing these BCOR mutations were seen in any founding fibroblast, even when some fibroblasts were sequenced to high coverage (>150×, Fig. 4b ), indicating that the BCOR mutations observed in F-hiPSCs were more likely to have arisen in vitro. Fig. 4: Selection analysis revealed positive selective forces on BCOR mutations in F-hiPSCs and B-hiPSCs. a , Driver discovery workflow. Each BCOR mutation was curated using a genome browser. b , Number of reads reporting BCOR mutations in 11 F-hiPSCs and their founding fibroblasts. Colors indicate whether the sequencing read contains a mutation or otherwise (green, reads with the reference (REF) allele; orange, reads with the alternate (ALT) allele). c , BCOR mutation status of Insignia B-hiPSCs. Not shown are 50 of 78 B-hiPSCs that do not carry any BCOR variants in parental hiPSCs or their corresponding subclones. Rows indicate B-hiPSCs derived from various donors, including patients with genetic defects and healthy controls. Columns indicate parental hiPSC (iPSC) and daughter subclones (s1–s4). BCOR mutation status is shown in brown if at least one read contains a BCOR mutation in the sample or blue if there are no reads containing BCOR mutations in that sample. Gray indicates that a sample was not available (N/A). The total count of substitutions and indels of each iPSC is shown in the histogram on the right. d , Schematic illustration of the positions of mutations in the BCOR protein in all F-hiPSCs (blue) and B-hiPSCs (purple). e , Exploring the relationship between hiPSC mutation burden and BCOR status (two-sided Mann–Whitney test). hiPSCs with BCOR hits are highlighted with a darker shade of purple (B-hiPSCs) and blue (F-hiPSCs). MT, mutant; WT, wild-type. We extended our analysis to B-hiPSCs in the HipSci cohort. These B-hiPSCs were generated from erythroblasts derived from peripheral blood. Three of 17 sequenced B-hiPSCs carried mutations in BCOR (Supplementary Table 7 ), a higher proportion than F-hiPSCs in HipSci ( P = 0.0076, binomial test). However, many B-hiPSCs in the HipSci cohort did not have matched germline samples to perform subtraction of germline variation, rendering it possible (even if unlikely) for the BCOR mutations to be germline in origin.
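For transparency, the binomial comparison quoted above can be reproduced in a single line of Python, taking the F-hiPSC BCOR rate (11 of 452) as the null proportion (we assume a one-sided test for enrichment, which matches the reported value):

# 3 of 17 HipSci B-hiPSCs carried BCOR mutations; test against the
# F-hiPSC rate of 11/452 as the null proportion.
from scipy.stats import binomtest

result = binomtest(k=3, n=17, p=11 / 452, alternative='greater')
print(round(result.pvalue, 4))  # ~0.0076, matching the reported P value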
Therefore, we sought alternative cohorts of hiPSCs. Blood derivation methods for generating hiPSCs have been gaining popularity due to the ease of sample collection, but B-hiPSCs remain much less common than F-hiPSCs, and large cohorts of genomically characterized B-hiPSCs do not exist. Nevertheless, we accessed erythroblast-derived B-hiPSCs created from 78 individuals who were part of the Insignia project 50 , comprising 53 patients with inherited DNA repair defects, 5 patients with exposure to environmental agents and 20 healthy controls (Supplementary Table 8 and Supplementary Note ); WGS was performed, and somatic mutations were identified. dN/dS analysis on all coding variants of the 78 B-hiPSCs showed that only BCOR was under significant positive selection 49 ( q = 0; Fig. 4a and Supplementary Tables 9 and 10 ), consistent with F-hiPSCs, but present at a much higher prevalence, in 21 (26.9%) B-hiPSC lines. Hotspot analysis did not find any recurrently mutated sites in BCOR in B-hiPSC lines. Source of recurrent BCOR mutations BCOR encodes the BCL6 corepressor protein and is a member of the ankyrin repeat domain-containing gene family. The corepressor expressed by BCOR binds to BCL6, a DNA-binding protein that acts as a transcription repressor for genes involved in the regulation of B cells. Somatic mutations in BCOR have been reported in hematological malignancies, including acute myeloid leukemia and myelodysplastic syndromes 51 , 52 , 53 , and have also been reported at a low prevalence in other cancers, including lung, endometrial, breast and colon cancers 54 . The high prevalence of BCOR mutations in B-hiPSCs, but not F-hiPSCs, led us to ask whether these could have been derived from hematopoietic stem cell clones that were present in the donors. Clonal hematopoiesis (CH) has been reported in older individuals 55 , 56 . Some of the most common genes that are mutated in CH include DNMT3A, TET2, ASXL1, JAK2 and TP53 57 , 58 , 59 . BCOR is not one of the frequently mutated CH genes but has been reported in rare cases of aplastic anemia 53 . However, DNMT3A , reported once in our cohort, could represent a B-hiPSC derived from a CH clone, as the mutation p.G543V has also been reported as a CH variant 60 . Another possibility is that BCOR dysregulation is selected for in the culture process, particularly in erythroblast-derived hiPSCs. To examine these possibilities, we asked whether the BCOR variants could be detected earlier in the B-hiPSC derivation process, either at the erythroblast population stage or in the germline DNA sample. We did not observe any of these BCOR variants in any of the sequencing reads in either erythroblast or germline DNA samples. It is nevertheless possible that CH clones escaped detection through lack of very deep sequencing depth. Notably, 6 of 21 BCOR mutations were present at lower VAFs (VAF < 0.3) in the B-hiPSCs and could represent subclonal BCOR variants within the parental B-hiPSC clone. This would suggest that BCOR mutations were arising because of ongoing selection pressure in culture following erythroblast derivation or after reprogramming. To investigate further, we cultured parental B-hiPSCs for 12–15 days. A minimum of two (and up to four) subclones were derived from the parental lines (Fig. 4c and Supplementary Table 11 ). A total of 141 subclones were assessed by WGS (Supplementary Table 12 ).
In all parental B-hiPSC clones where BCOR mutations were identified and where daughter subclones were successfully generated, the BCOR variants were recapitulated in related daughter subclones, serving as an internal validation of those BCOR variants (Fig. 4c and Supplementary Table 10 ). Interestingly, 7 additional B-hiPSCs that did not have BCOR mutations in the parental B-hiPSC population developed new BCOR mutations in daughter subclones; MSH71 had p.P1229fs*5 BCOR mutations in both subclones but not the parent, MSH68 had p.D1118fs*22 BCOR mutations in one of two subclones, MSH13 had a p.P1115fs*45 BCOR mutation in one of four subclones, MSH29 had a p.S1122fs*37 BCOR mutation in one of two subclones, MSH90 had a p.D1118fs*44 BCOR mutation in two subclones, MSH3 had a p.S158fs*28 BCOR mutation in all three subclones and MSH41 had a p.R1398fs*4 BCOR mutation in two subclones (Fig. 4c and Supplementary Table 10 ). Given that these BCOR mutations are present in some, but not all, subclones and are not present at a detectable frequency in the parental population, this finding suggests that they arose late in culture of the parental B-hiPSC. All BCOR mutations in B-hiPSCs were predicted to be truncating variants distributed throughout the gene (Fig. 4d ). No correlation was found between the mutation burden of the parental line and BCOR status in either F-hiPSCs ( P = 0.77, Mann–Whitney test) or B-hiPSCs ( P = 0.55, Mann–Whitney test), indicating that BCOR mutations were not simply passenger mutations in hypermutated samples (Fig. 4e ). Therefore, in this analysis, we found a high prevalence of recurrent BCOR mutations in two independent cohorts of erythroblast-derived B-hiPSCs (18% of HipSci and 27% of Insignia; ~25% overall, across all B-hiPSCs). We did not find BCOR mutations in originating erythroblasts or in germline DNA samples. Instead, single-cell-derived subclones variably carried new BCOR mutations, suggesting that there may be selection for BCOR dysfunction post-reprogramming in B-hiPSCs. We cannot definitively exclude CH as the source of BCOR mutations, as extremely high sequencing depths may be required to detect CH clones. However, BCOR is not frequently mutated in CH. The majority of donors were young (<45 yr), and some were healthy controls. All of these observations argue against CH being the source of the BCOR mutations and point instead to selection pressure on B-hiPSCs in culture. That BCOR mutations are seen at a lower frequency in F-hiPSCs and never in fibroblasts hints at a culture-related selection pressure (Fig. 4b ). Why BCOR mutations are more enriched in B-hiPSCs remains unclear but may be related to the process of transforming peripheral blood mononuclear cells (PBMCs) toward the myeloid lineage. Global transcriptional changes in BCOR -mutated B-hiPSCs To understand the functional impact of recurrent BCOR variants in B-hiPSCs, we performed RNA sequencing on B-hiPSCs from all 78 Insignia donors. Global transcriptomic analysis revealed two principal components (PCs) driving variance in the dataset (Fig. 5a ). The first PC distinguished two groups, with almost all the BCOR -mutated B-hiPSCs (VAF > 0, highlighted in yellow or orange in Fig. 5a ) restricted to one group, herein termed BCOR -mut. The other group comprised B-hiPSCs with no BCOR mutations, BCOR -wt (VAF = 0, highlighted in gray or blue in Fig. 5a ). The second PC distinguished the donors by gender; BCOR is on the X chromosome. This result suggests that BCOR mutations are associated with important global transcriptional changes in B-hiPSCs.
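The transcriptome PCA described above follows a standard recipe; a minimal Python sketch (using a randomly generated placeholder expression matrix rather than the Insignia data, and with names of our own choosing) is:

# Minimal sketch: PCA of a log-transformed expression matrix, the kind of
# analysis that separated BCOR-mut from BCOR-wt lines on PC1 and donors by
# gender on PC2 above.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
expr = rng.lognormal(mean=2.0, sigma=1.0, size=(78, 2000))  # 78 lines x genes
X = np.log2(expr + 1)
X = X - X.mean(axis=0)            # center genes (PCA also centers internally)

pcs = PCA(n_components=2).fit_transform(X)
print(pcs.shape)  # (78, 2): scatter PC1 vs PC2, coloring points by BCOR VAF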
Differential gene expression analysis showed that 10,486 genes were differentially expressed in the BCOR-mut lines (Fig. 5b and Supplementary Table 13). Furthermore, we found that UTF1 (implicated in maintenance of pluripotency through chromatin regulation), VENTX, IRX4, PITX2 and MIXL1 (all homeobox-related proteins with various roles in embryonic patterning) and FOXC1 (important in the development of organs derived from the mesodermal lineage) were strongly upregulated in BCOR-mut lines, whereas RAX (involved in development of the hypothalamus and retina) was strongly downregulated (Fig. 5b). Fig. 5: Functional validation of the impact of BCOR mutations in hiPSCs and their differentiation potential. a, PC analysis showing the distribution of hiPSCs based on their transcriptome expression. Transcriptomic changes are correlated with BCOR mutation VAF values. 0* denotes that no BCOR mutations were found in hiPSC parental lines but were seen in their subclones. b, Volcano plot of differential gene expression analysis comparing hiPSC lines with BCOR mutations (BCOR-mut) and without BCOR mutations (BCOR-wt). FC, fold change. c, Heatmap of relative expression levels of members of the noncanonical polycomb repressive complex 1.1 (PRC1.1), hiPSC pluripotency markers and markers of the three germ layers for two BCOR-wt (blue: MSH34i2, MSH30i3) and two BCOR-mut (orange: MSH40i2, MSH93i6) hiPSC lines. d, Immunofluorescence characterization of hiPSCs, neural stem cells (NSCs) and neurons at day 0, day 12 and day 27, respectively. Both BCOR-mut and BCOR-wt hiPSCs have similar undifferentiated morphology and express the pluripotency markers OCT4/POU5F1 (red) and SSEA4 (green). BCOR-mut lines undergo inefficient neural induction, as highlighted by a reduced number of NSCs expressing PAX6 (green) at day 12 and a reduced number of neurons marked by TUBB3 (green) at day 27.

BCOR-mutant hiPSCs have impaired differentiation capacity

BCOR is a member of the polycomb group of proteins (PRC1.1), which regulates self-renewal and differentiation of stem cells and is critical for maintaining pluripotency 61 . BCOR depletion has been associated with decreased polycomb repressive activity, initiating differentiation toward the endodermal and mesodermal lineages 61 . Thus, we next asked whether differentiation capacity was compromised in BCOR-mut lines, particularly for ectodermal lineages. We performed directed differentiation toward a neuronal lineage, contrasting two independent BCOR-wt B-hiPSC lines, MSH34i2 and MSH30i3, with two independent BCOR-mut B-hiPSC lines, MSH40i2 and MSH93i6 (Fig. 5c,d), each with three biological replicates. To capture dynamic shifts through directed differentiation, we took samples during a time course, characterizing these lines morphologically and transcriptionally on day 0 as hiPSCs, on days 6 and 12, representative of early and late neural stem cell (NSC) induction stages, respectively, and on day 27 as neurons. There were no overt morphological disparities between BCOR-mut and BCOR-wt colonies at the hiPSC stage, with near-identical brightfield images and immunofluorescence characterization of the pluripotency markers SSEA4 and OCT4 (day 0; Fig. 5d and Extended Data Figs. 9 and 10a). However, upon NSC induction (days 6 and 12), BCOR-mut replicates showed inefficient differentiation, confirmed by patchy expression of the NSC marker PAX6 (Fig. 5d and Extended Data Fig. 10b).
At day 27, neuronal generation in BCOR-mut lines was markedly affected, as seen by depletion of the neuronal marker TUBB3 (Fig. 5d and Extended Data Fig. 10c). In keeping with the morphological characterization, global transcriptomics of each hiPSC line at each stage of differentiation revealed differences between BCOR-mut and BCOR-wt B-hiPSCs (Supplementary Fig. 2). At the pluripotent stage, BCOR-mut hiPSCs exhibited a modest reduction in BCOR expression but elevated levels of NANOG, KLF4 and NODAL compared to BCOR-wt (Fig. 5c), similar to reports in other pluripotent models with BCOR-PRC1.1 defects 61 . At the NSC induction stage (days 6 and 12), transcriptional dynamics evolved to show that mesodermal markers such as PAX7, TBX1 and PAX3 were substantially elevated in the BCOR-mut compared to BCOR-wt B-hiPSCs, implicating a drive toward mesodermal lineages in BCOR-compromised lines (Fig. 5c). This difference was maintained at late neuronal differentiation (day 27), where neuronal markers such as TUBB3, DCX and FOXG1 were upregulated in BCOR-wt but not BCOR-mut B-hiPSCs, underscoring the failure of neuronal differentiation in the latter (Fig. 5c). In all, these results demonstrate that B-hiPSC lines that acquire BCOR mutations may have compromised differentiation potential. BCOR-mutated lines seem to be less efficient at differentiating toward neuronal lineages and transcriptionally appear primed to differentiate toward mesodermal lineages.

Long-term culture increases 8-oxo-dG DNA damage

Finally, we examined the genomic effects of in vitro hiPSC culture. Mutational signature analyses revealed that all hiPSCs, regardless of the primary cell of origin, carry imprints of substitution signature 18, previously hypothesized to be due to oxidative damage in culture 22 (Figs. 1b and 2c). Recently, knockouts of OGG1, a gene encoding a glycosylase in base excision repair that specifically removes 8-oxoguanine (8-oxo-dG), have been shown to result in mutational signatures characterized by C>A transversions at ACA>AAA, GCA>GAA and GCT>GAT trinucleotides, identical to signature 18 (ref. 62). Of note, signature 18 has also been reported in other cell culture systems, such as ES cells 15 , near-haploid cell lines 63 and human tissue organoids, in which its contribution to the overall mutation burden was reported to increase with in vitro culture 64 , 65 . We examined mutations shared between hiPSCs and their matched fibroblasts, representing in vivo and/or early in vitro mutations, and private mutations that are only present in hiPSCs, most (but not all) of which are likely representative of mutations acquired in vitro. We observed that signature 18 was enriched among the private mutations of nearly all hiPSCs (313, or 97%). By contrast, the majority of hiPSCs (262, or 80%) showed no evidence of signature 18 among shared variants (Fig. 6a–c and Supplementary Fig. 3). We then investigated the relationship between signature 18 and passage number in the HipSci cohort. We found a positive correlation between signature 18 exposure and passage number (correlation = 0.327; P = 5.013 × 10⁻⁹; Fig. 6d), reinforcing the notion that prolonged time in culture is likely to be associated with increased acquisition of somatic mutations through elevated levels of DNA damage from 8-oxo-dG (Fig. 6e). Fig. 6: Culture-associated mutagenesis in hiPSCs.
a, Mutations in F-hiPSCs were separated into two groups: (1) mutations shared between F-hiPSCs and their founding fibroblasts (left) and (2) mutations that are private to hiPSCs (right). Proportional graphs show substitution signatures of shared and private mutations. The culture-related signature (signature 18) is enriched in private mutations. b, Box plot of exposures of substitution signatures of shared (blue) and private (orange) mutations. The culture signature (signature 18) accounts for ~50% of private substitutions, in contrast to nearly zero among shared mutations. Box plots denote the median (horizontal line) and 25th to 75th percentiles (boxes). The lower (minima) and upper (maxima) whiskers extend to 1.5× the interquartile range. n = 324 WGS of hiPSCs. c, Number of samples carrying each signature within shared or private mutations. Most samples acquired the culture signature late in the hiPSC cloning process. d, Relationship between exposure of the culture-related signature (signature 18) and passage number (Pearson's correlation test, two sided). e, Schematic illustration of the mutational processes in fibroblasts and F-hiPSCs.

Discussion

hiPSCs are on the verge of entering clinical practice. Therefore, there is a need to better understand the breadth and source of mutations in hiPSCs to minimize the risk of harm. Crucially, our work shows that the choice of a starting material for hiPSC derivation is complex. We demonstrate copious UV-associated genomic damage and substantial variation in genomic integrity between different clones from the same reprogramming experiment. In all, 39% of F-hiPSCs carried at least one mutation in a cancer driver gene. However, only BCOR mutations showed evidence of statistically significant positive selection, both in F-hiPSCs and in B-hiPSCs (prevalence of 25%), despite lower rates of genome-wide mutagenesis in the latter. The lack of BCOR mutations in the founder somatic cell populations (fibroblasts and erythroblasts) and the finding of new BCOR mutations arising in subclones during propagation experiments provide support for strong selection for BCOR variants in hiPSC culture. Furthermore, our work demonstrates that these mutations are associated with transcriptomic changes and influence the differentiation of hiPSCs. Finally, both F-hiPSCs and B-hiPSCs exhibit oxidative damage that is increased by long-term culture. Critically, all of these lines had previously been screened for large-scale aberrations; our work therefore shows the value of WGS for fully capturing variation at nucleotide resolution in hiPSCs. Further work will be necessary to investigate the functional significance of these mutations; for example, do they predispose cells toward malignant transformation or alter the fate of differentiated progeny? Our work highlights best-practice points to consider when establishing hiPSC cellular models: choosing an originating somatic cell type with low levels of preexisting genomic damage and minimizing the duration of cell culture. Ultimately, however, comprehensive genomic characterization is indispensable to fully understand the magnitude and significance of mutagenesis in all cellular models.

Methods

Samples

The research project complied with all relevant ethical regulations, and the protocols were approved by research ethics committees (details below). All participants were recruited voluntarily; provided written, informed consent; and were not financially compensated.
The procedures used to derive fibroblasts and EPCs from two donors (S2 and S7) were previously described 22 . Two F-hiPSCs were obtained from S2. Four and two EPC B-hiPSCs were obtained from S7 and S2, respectively. A total of 452 F-hiPSCs were obtained from 288 donors, and 17 erythroblast B-hiPSCs were obtained from 9 donors from the HipSci project. A total of 78 erythroblast B-hiPSCs and 141 subclones were obtained from 78 donors from the Insignia project. The details of the HipSci and Insignia iPSC lines are described below.

HipSci hiPSC line generation and growth

All lines were incubated at 37 °C and 5% CO2 (ref. 35). Primary fibroblasts were derived from 2-mm punch biopsies collected from organ donors or healthy research volunteers recruited from the NIHR Cambridge BioResource under ethics for hiPSC derivation 35 (Cambridgeshire and East of England Research Ethics Committee REC 09/H306/73, REC 09/H0304/77-V2 04/01/2013, REC 09/H0304/77-V3 15/03/2013). Biopsy fragments were mechanically dissociated and cultured with fibroblast growth medium (knockout DMEM with 20% fetal bovine serum; 10829018, ThermoFisher Scientific) until outgrowths appeared (within 14 days, on average). Approximately 30 days after dissection, when fibroblast cultures had reached confluence, the cells were washed with phosphate-buffered saline (PBS), passaged using trypsin into a 25-cm² tissue-culture flask and then again into a 75-cm² flask upon reaching confluence. These cultures were then split into vials for cryopreservation and cultures seeded for reprogramming, with one frozen vial later used for DNA extraction for WES or WGS. For erythroblast derivation, two blood samples were obtained from each donor. All samples were scanned and checked for ethical approval. Peripheral blood mononuclear cell (PBMC) isolation, erythroblast expansion and hiPSC derivation were performed by the Cellular Generation and Phenotyping facility at the Wellcome Sanger Institute, Hinxton. Briefly, whole-blood samples collected from consented patients were diluted with PBS, and PBMCs were separated using the standard Ficoll-Paque density gradient centrifugation method. Following PBMC separation, cells were cultured in expansion medium containing StemSpan H3000, stem cell factor, interleukin-3, erythropoietin, IGF-1 and dexamethasone for a total of 9 days 66 . The EPCs were isolated using Ficoll separation of 100 ml of peripheral blood from organ donors (REC 09/H306/73), and the buffy coat was transferred onto a 5 µg/cm² collagen (BD Biosciences, 402326)-coated T-75 flask. The EPCs were grown using EPC medium (EGM-2MV supplemented with growth factors and ascorbic acid plus 20% Hyclone serum; CC-3202, Lonza, and HYC-001-331G, ThermoFisher Scientific Hyclone, respectively) 67 . EPC colonies appeared after 10 days; these were passaged using trypsin at a 1 in 3 ratio and eventually frozen down in 90% EPC medium and 10% dimethyl sulfoxide. Fibroblasts and erythroblasts were transduced using nonintegrating Sendai viral vectors expressing human OCT3/4, SOX2, KLF4 and c-MYC 51 (CytoTune, Life Technologies, A1377801) according to the manufacturer's instructions and cultured on irradiated mouse embryonic fibroblasts (MEFs; CF1). The EPCs were transduced using four Moloney murine leukemia retroviruses containing the coding sequences of human OCT4, SOX2, KLF4 and C-MYC and were also cultured on irradiated MEFs.
Following all reprogramming experiments, the medium was changed to hiPSC culture medium 35 containing Advanced DMEM (Life Technologies), 10% knockout serum replacement (Life Technologies), 2 mM l-glutamine (Life Technologies), 0.007% 2-mercaptoethanol (Sigma-Aldrich), 4 ng ml⁻¹ recombinant zebrafish fibroblast growth factor 2 (CSCR, University of Cambridge) and 1% penicillin/streptomycin (Life Technologies). Cells with an iPSC morphology first appeared approximately 14 to 28 days after transduction, and undifferentiated colonies (six per donor) were picked between days 28 and 40, transferred onto 12-well MEF-CF1 feeder plates and cultured in hiPSC medium with daily medium changes until ready to passage. Successful reprogramming was confirmed via genotyping array and expression array 35 . Pluripotency quality control (QC) was performed based on the HipSci QC steps, including the PluriTest using expression microarray data from the Illumina HT12v4 platform and copy number variation and loss of heterozygosity (CNV/LOH) detection using the HumanExome BeadChip Kit platform. Pluripotent hiPSC lines were transferred onto feeder-free culture conditions, using 10 µg ml⁻¹ Vitronectin XF (Stemcell Technologies)-coated plates and Essential 8 (E8) medium (DMEM/F12 (HAM), E8 supplement (50×) and 1% penicillin/streptomycin; Life Technologies) 35 . The medium was changed daily, and cells were passaged every 5–7 days, depending on the confluence and morphology of the cells, at a maximum 1:3 split ratio until established, usually at passage five or six. The passaging method involved washing the confluent plate with PBS and incubating with PBS-EDTA (0.5 mM) for 5–8 min. After removing the PBS-EDTA, cells were resuspended in E8 medium and replated onto Vitronectin-coated plates 35 . Once the hiPSCs were established in culture, lines were selected based on morphological qualities (undifferentiated, roundness and compactness of colonies) and expanded for banking and characterization. DNA from fibroblasts and hiPSCs was extracted using Qiagen chemistry on a QIAcube automated extraction platform.

HipSci hiPSC line whole-exome/genome library preparation and sequencing

A 96-well plate containing 500 ng genomic DNA in 120 µl was cherry-picked, and an Agilent Bravo robot was used to transfer the gDNA into a Covaris plate with glass wells and adaptive focused acoustics (AFA) fibers. This plate was then loaded into the LE220 for the shearing process. The sheared DNA was then transferred out of this plate and into an Eppendorf TwinTec 96 plate using the Agilent Bravo robot. Samples were then purified in preparation for library construction: the Agilent NGS Workstation transferred AMPure XP beads and the sheared DNA to a Nunc deep-well plate, then collected and washed the bead-bound DNA. The DNA was eluted and transferred along with the AMPure XP beads to a fresh Eppendorf TwinTec plate. Library construction comprised end repair, A-tailing and adapter ligation reactions, performed by a liquid handling robot. Next, the Agilent NGS Workstation transferred PEG/NaCl solution and the adapter-ligated libraries containing AMPure XP beads to a Nunc deep-well plate and size-selected the bead-bound DNA. The DNA was eluted and transferred to a fresh Eppendorf TwinTec plate. Agilent Bravo and Labtech Mosquito robotics were then used to set up a 384-well quantitative polymerase chain reaction (qPCR) assay plate, ready to be assayed on the Roche LightCycler.
This 384-well qPCR plate was then placed in the Roche LightCycler. A Beckman NX08-02 was used to create an equimolar pool of the indexed adapter-ligated libraries. The final pool was then assayed against a known set of standards on the ABI StepOne Plus. The data from the qPCR assay were used to determine the concentration of the equimolar pool, which was normalized using the Beckman NX08-02. All paired-end sequencing was performed using a range of Illumina HiSeq platforms, as the lines were generated over many years (HiSeq 2000 onwards). The sequencing coverage of WGS, WES and hcWES in hiPSC lines was 41×, 72× and 271×, respectively.

HipSci hiPSC sequence alignment, QC and variant calling

Reads were aligned to the human genome assembly GRCh37d5 using bwa version 0.5.10 (ref. 68) ('bwa aln -q 15' and 'bwa sampe'), followed by quality score recalibration and indel realignment using GATK version 1.5-9 (ref. 69) and duplicate marking using biobambam2 version 0.0.147. VerifyBamID version 1.1.3 was used to check for possible contamination of the cell lines, and all but one passed (Supplementary Fig. 4). Variable sites were called jointly in each fibroblast and hiPSC sample using BCFtools/mpileup and BCFtools/call version 1.4.25. The initial call set was then prefiltered to exclude germline variants that were above 0.1% minor allele frequency in 1000 Genomes phase 3 (ref. 70) or ExAC 0.3.1 (ref. 71). For efficiency, we also excluded low-coverage sites that cannot reach statistical significance and, for subsequent analyses, considered only sites that had a minimum sequencing depth of 20 or more reads in both the fibroblast and hiPSC samples and at least 3 reads with a nonreference allele in either the fibroblast or hiPSC sample. At each variable site, a Fisher's exact test was performed on a two-by-two contingency table, with rows representing the numbers of reference and alternate reads and columns representing the fibroblast and hiPSC samples. This approach to mutation calling is implemented in BCFtools/ad-bias, and we adopted it in preference to existing tumor-normal somatic-variant calling tools because, by definition, tools developed for the analysis of tumor-normal data assume that mutations of interest are absent from the normal tissue. In our experiment, however, many mutations were present, albeit at low frequency, in the source tissue fibroblasts. More information on bcftools ad-bias can be found in the online bcftools man page; the ad-bias protocol is distributed as a plugin in the main bcftools package. Bcftools ad-bias implements a Fisher test on a 2 × 2 contingency table that contains read counts of reference/alternate alleles found in either the iPSC or fibroblast sample.
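To make the test concrete, the following minimal R sketch applies the same two-by-two Fisher's exact test to a single candidate site; the read counts are made up for illustration, and only base R is used:

    # Hypothetical read counts at one candidate site
    # (rows: fibroblast/hiPSC sample; columns: reference/alternate allele)
    counts <- matrix(c(48, 2,     # fibroblast: 48 ref reads, 2 alt reads
                       25, 24),   # hiPSC: 25 ref reads, 24 alt reads
                     nrow = 2, byrow = TRUE,
                     dimnames = list(c("fibroblast", "hiPSC"),
                                     c("ref", "alt")))

    # Two-sided Fisher's exact test for a shift in allele frequency
    ft <- fisher.test(counts)
    ft$p.value   # a small P value suggests the allele frequency differs

    # VAF in each sample, for reference
    counts[, "alt"] / rowSums(counts)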
We ran bcftools ad-bias with default settings as follows:

    bcftools +ad-bias exome.bcf -- -t1 -s sample.pairs.txt -f '%REF\t%ALT\t%CSQ\t%INFO/ExAC\t%INFO/UK1KG'

where 'exome.bcf' was the BCF file created by our variant calling pipeline, described in Methods, and sample.pairs.txt was a file that contained matched pairs of the iPSC and corresponding fibroblast samples, one pair per line, as follows:

    HPSI0213i-koun_2    HPSI0213pf-koun
    HPSI0213i-nawk_55   HPSI0213pf-nawk
    HPSI0313i-airc_2    HPSI0313pf-airc

We corrected for the total number of tests (84.8 M) using the Benjamini–Hochberg procedure at a false discovery rate of 5%, equivalent to a P value threshold of 9.9 × 10⁻⁴, to call a mutation as a significant change in allele frequency between the fibroblast and iPSC samples. Furthermore, we annotated sites from regions of low mappability and sites that overlapped with copy-number alterations previously called from array genotypes 35 , and we removed sites that had greater than 0.6 alternate allele frequency in either the fibroblast or hiPSC, as these sites are likely to be enriched for false positives. Dinucleotide mutations were called by sorting mutations occurring in the same iPSC line by genomic position and marking mutations that were immediately adjacent as dinucleotides.

Mutation calling of hiPSCs derived from S2, S7 and ten HipSci lines using fibroblasts as germline controls

Single substitutions were called using the CaVEMan (Cancer Variants Through Expectation Maximization) algorithm 72 . To avoid mapping artefacts, we removed variants with a median alignment score <90 and those with a clipping index >0. Indels were called using cgpPindel. We discarded indels that occurred in repeat regions with a repeat count >10 and a variant call format (VCF) quality <250. Double substitutions were identified as two adjacent single substitutions called by CaVEMan. The ten HipSci lines are HPSI0714i-iudw_4, HPSI0914i-laey_4, HPSI0114i-eipl_1, HPSI0414i-oaqd_2, HPSI0414i-oaqd_3, HPSI1014i-quls_2, HPSI1013i-yemz_3, HPSI0614i-paab_3, HPSI1113i-qorq_2 and HPSI0215i-fawm_4.

Mutational signature analysis

Mutational signature analysis was performed on S7 EPC-hiPSCs, S2 F-hiPSCs, S2 EPC-hiPSCs and the HipSci F-hiPSC WGS dataset. All dinucleotide mutations were excluded from this analysis. We generated 96-channel single substitution profiles for 324 hiPSCs and 204 fibroblasts. We fitted previously discovered skin-specific substitution signatures to each sample using an R package (signature.tools.lib) 31 . The function SignatureFit_withBootstrap() was used with default parameters. In downstream analysis, the exposures of the two UV-caused signatures, Skin_D and Skin_J, were summed to represent the total signature exposure caused by UV (signature 7).
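At its core, signature fitting estimates non-negative exposures E such that the observed 96-channel catalogue m is approximated by S·E, where the columns of S hold the reference signature profiles. As a simplified stand-in for SignatureFit_withBootstrap(), which additionally bootstraps the catalogue, the following R sketch solves this non-negative least-squares problem directly; the simulated `sig_matrix` and `catalogue` are hypothetical placeholders, not real signature data:

    library(nnls)  # install.packages("nnls")

    # Hypothetical inputs:
    #   sig_matrix: 96 x k matrix, columns are reference signature profiles
    #               (e.g., Skin_A, Skin_D, Skin_J), each column summing to 1
    #   catalogue:  length-96 vector of mutation counts for one sample
    set.seed(1)
    k <- 3
    sig_matrix <- apply(matrix(runif(96 * k), 96, k), 2, function(s) s / sum(s))
    catalogue  <- as.vector(sig_matrix %*% c(300, 1200, 150)) + rpois(96, 2)

    # Non-negative least squares: minimize ||sig_matrix %*% E - catalogue||
    # subject to E >= 0; E estimates the exposure of each signature
    fit <- nnls(sig_matrix, catalogue)
    exposures <- setNames(fit$x, c("Skin_A", "Skin_D", "Skin_J"))
    exposures

    # Total UV (signature 7) exposure, as in the text: Skin_D + Skin_J
    uv_exposure <- exposures["Skin_D"] + exposures["Skin_J"]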
A de novo signature extraction was performed on the 324 WGS HipSci F-hiPSCs to confirm that the UV-associated skin signatures (Skin_D and Skin_J, signature 7) and the culture-associated signature (Skin_A, signature 18) are also the most prominent signatures identified by de novo extraction (Supplementary Fig. 5).

Analysis of C>T/CC>TT transcriptional strand bias in replication timing regions

Reference information on replication timing regions was obtained from Repli-seq data of the ENCODE project 73 . The transcriptional strand coordinates were inferred from the known footprints and transcriptional direction of protein-coding genes. In our dataset, we first oriented all G>A and GG>AA to C>T and CC>TT (using the pyrimidine as the mutated base). Then, we mapped C>T and CC>TT mutations to the genomic coordinates of all gene footprints and replication timing regions. Lastly, we counted the number of C>T/CC>TT mutations on transcribed and nontranscribed gene regions in different replication timing regions.

Identification of fibroblast-shared mutations and private mutations in HipSci F-hiPSCs

We classified mutations (substitutions and indels) in HipSci F-hiPSCs into fibroblast-shared mutations and private mutations. Fibroblast-shared mutations in hiPSCs are those with at least one read supporting the mutant allele in the corresponding fibroblast. Private mutations are those with no reads supporting the mutant allele in the fibroblast. Mutational signature fitting was performed separately for fibroblast-shared substitutions and private substitutions in hiPSCs. For indels, only the percentage of different indel types was compared between fibroblast-shared indels and private indels.

Clonality of samples

We inspected the distribution of VAFs of substitutions in HipSci fibroblasts and HipSci F-hiPSCs. Almost all hiPSCs had VAFs distributed around 50%, indicating that they were clonal. In contrast, all fibroblasts had lower VAFs, distributed around 25% or lower, indicating that they were oligoclonal. We computed kernel density estimates for the VAF distribution of each sample. Based on the kernel density estimate, the number of clusters in a VAF distribution was determined by identifying local maxima. Accordingly, the size of each cluster was estimated by summing the mutations with VAFs between two local minima.

Variant consequence annotation

Variant consequences were calculated using the Variant Effect Predictor 74 and BCFtools/csq 75 . For dinucleotide mutations, we recorded only the most impactful consequence of either of the two members of the dinucleotide, where the scale from least to most impactful was intergenic, intronic, synonymous, 3′ untranslated region, 5′ untranslated region, splice region, missense, splice donor, splice acceptor, start lost, stop lost and stop gained. We identified overlaps with putative cancer driver mutations using the COSMIC 'All Mutations in Census Genes' mutation list (CosmicMutantExportCensus.tsv.gz) version 92, 27 August 2020.

dNdScv analysis

To detect genes under positive selection, we used dN/dS ratios as implemented in the dNdScv R package 49 . dNdScv uses maximum likelihood models to calculate the ratio of nonsynonymous to synonymous mutations per gene, normalized by sequence composition, trinucleotide substitution rates and the local mutability of each gene based on epigenetic covariates.
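A minimal usage sketch of the dNdScv package follows; the input file name is hypothetical, and the column layout is the one expected by the package documentation:

    library(dndscv)

    # Hypothetical input file: one row per somatic mutation across all hiPSC
    # lines, with the five columns dNdScv expects
    # (sampleID, chr, pos, ref, mut)
    muts <- read.table("hipsci_mutations.tsv", header = TRUE,
                       stringsAsFactors = FALSE)

    # (1) Default exome-wide selection analysis
    dndsout <- dndscv(muts)
    head(dndsout$sel_cv)   # per-gene dN/dS estimates and q values

    # (2) Restricted hypothesis testing on known cancer genes, assuming
    # `known_cancergenes` is a character vector of driver gene symbols
    # dndsout_kc <- dndscv(muts, gene_list = known_cancergenes)

    # Genes under significant positive selection (e.g., BCOR at q = 0)
    subset(dndsout$sel_cv, qglobal_cv < 0.05)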
Three analyses were run on the 452 F-hiPSC and 78 B-hiPSC sequencing datasets: (1) default dNdScv (exome-wide, testing all genes in the genome or exome for selection); (2) restricted hypothesis testing of known cancer genes (to increase the statistical power on known drivers, using the gene list from Martincorena et al. 49 ); and (3) detection of mutational hotspots (using the sitednds function in dNdScv on hotspots detected in The Cancer Genome Atlas).

Insignia B-hiPSC line generation, growth, QC and sequencing

Erythroblasts were derived from PBMCs, following appropriate ethics committee approvals (REC 13/EE/0302), and reprogrammed using the nonintegrating CytoTune Sendai virus reprogramming kit (OCT3/4, SOX2, KLF4 and C-MYC) by the Cellular Generation and Phenotyping facility at the Wellcome Sanger Institute in the same way as for the HipSci lines (described above). After establishment of B-hiPSC lines that had passed all QC steps (described above), and at a cell passage equivalent to about 30 doublings, expanded clones were single-cell subcloned to generate two to four daughter subclones for each B-hiPSC line. WGS was run on germline samples, erythroblasts, B-hiPSC parental clones and B-hiPSC subclones. The average sequencing coverage of WGS was 38× (Supplementary Table 14). Single-nucleotide polymorphism genotyping was performed as a QC measure to ensure matches between all hiPSCs and their respective original starting samples. RNA sequencing was run on the 78 iPSC parental clones.

Insignia B-hiPSC mutation calling using blood as germline controls

Single substitutions were called using the CaVEMan (Cancer Variants Through Expectation Maximization) algorithm 72 . To avoid mapping artefacts, we removed variants with a median alignment score <140 and those with a clipping index >0. Indels were called using cgpPindel. We discarded indels that occurred in repeat regions with a repeat count >10 and a VCF quality <250. Double substitutions were identified as two adjacent single substitutions called by CaVEMan. Mutation calls were obtained for erythroblasts, iPSC parental clones and subclones.

Differentiation of Insignia B-hiPSCs (BCOR mutant and BCOR wild type) and RNA sequencing

The BCOR-mutant B-hiPSCs (MSH40i2, MSH93i6) and the BCOR-wild-type B-hiPSCs (MSH34i2, MSH30i3) were maintained in feeder-free conditions, cultured in Essential 8 (E8) medium (ThermoFisher Scientific, A1517001) on Vitronectin FX (Stemcell Technologies, 07180)-coated plates. hiPSC medium was changed daily, and the cells were monitored to ensure there were no signs of spontaneous differentiation. hiPSCs were expanded every 3 or 4 days as small clumps using 0.5 mM UltraPure EDTA (ThermoFisher Scientific, 1557020) diluted in Dulbecco's phosphate-buffered saline (DPBS) (ThermoFisher Scientific, 14190342). Before neural induction, three independent replicates of hiPSCs from each donor line were generated and cultured for 1 week as described above. Healthy, nondifferentiating hiPSC colonies were dissociated into single-cell suspension using TrypLE Express Enzyme (ThermoFisher Scientific, 12605010) and plated on Vitronectin FX-coated plates at 50,000 cells/cm² density in the presence of RevitaCell Supplement (ThermoFisher Scientific, A26445-01, lot 2170092). The cells were cultured for another 2 days until they reached 60–75% confluence.
At day 0, the culture medium was switched to neural induction medium (NIM) containing a mixture (v/v) of DMEM/F12 HEPES (ThermoFisher Scientific, 11330032, lot 2186798) and Neurobasal medium (ThermoFisher Scientific, 21103-049, lot 2161553), with 1× B-27 Supplement (ThermoFisher Scientific, 17504-044, lot 2188886), 1× N2 Supplement (ThermoFisher Scientific, 17502-048, lot 2193551), MEM NEAA (ThermoFisher Scientific, 11140-035, lot 2202923), 1× Glutamax-I (ThermoFisher Scientific, 35050-061, lot 2085268) and 1× penicillin/streptomycin, in the presence of 10 µM SB431542 (Tocris, 1414/10) and 200 nM LDN193189 (Tocris, 6053/10), with the addition of 1× RevitaCell. From day 1, NIM without RevitaCell was changed every day until day 12. At the end of the neural induction process (day 12), the cells were dissociated into single-cell suspension using TrypLE Express Enzyme and plated at high cell density (200,000 cells/cm²) on plates double-coated with PDL (ThermoFisher Scientific, A3890401, lot 881772E) and 15 µg ml⁻¹ Cultrex mLaminin I Pathclear (Biotechne, 3400-010-02, lot 1594368). The NIM was switched to neuron differentiation medium (NDM) containing BrainPhys Neural Medium (Stemcell Technologies, 05790, batch 1000031535), 1× B-27 Supplement (ThermoFisher Scientific, 17504-044, lot 2188886), 1× N2 Supplement (ThermoFisher Scientific, 17502-048, lot 2193551), 50 µM dibutyryl-cAMP sodium salt (Tocris, 1141/50), 200 nM l-ascorbic acid (Tocris, 4055), 20 ng ml⁻¹ BDNF (Cambridge Bioscience, GFH1-100) and 20 ng ml⁻¹ GDNF (Cambridge Bioscience, GFM37-100), in the presence of 10 µM Y-27632 (Tocris, 1254/10). On day 13, the medium was changed to NDM without Y-27632, and the cells were allowed to differentiate for another 14 days. Two-thirds of the medium was changed three times a week. During the cell differentiation process, cell pellets from all culture replicates were harvested at days 0, 6, 12 and 27 (endpoint) for a serial RNA-sequencing time-course study. Immunostaining characterization was performed at days 0, 12 and 27 of differentiation to assess differentiation efficiency. Total RNA was extracted using the PureLink RNA Mini Kit (ThermoFisher Scientific, 12183018 A) following the manufacturer's recommendations. The RNA was quality controlled, and the cDNA libraries were prepared and sequenced using Illumina NovaSeq 6000 technology. Each sequenced sample had ≥20 million read pairs of 150-bp paired-end reads.

Processing RNA-sequencing data

The splice-aware aligner STAR v2.5.0a 76 was used to map RNA-sequencing data to the reference genome. A genome index was first generated for the human decoy reference genome hs37d5.fa.gz. Then, using the splice junction information from the Gencode GTF annotation file v19, fastq files were mapped. The fragments of reads linked with gene features were then counted using featureCounts v2.0.1 (ref. 77). The samples' raw count matrices were then analyzed in R version 4.0.4. Differential gene expression analysis was performed using the DESeq2 R package.

Immunofluorescence staining

The expression of pluripotency markers at day 0 (hiPSC stage) of differentiation was assessed using a commercially available PSC (OCT4, SSEA4) Immunocytochemistry Kit (ThermoFisher Scientific, A25526, lot 2194558). NSCs at day 12 and neurons at day 27 of differentiation were stained as previously described, with minor modifications 78 . Briefly, the medium was discarded from the plates, and the cells were rinsed gently with DPBS. A 4% solution of paraformaldehyde was used to fix the cells for 20 min at room temperature.
The cells were rinsed twice with DPBS and permeabilized for 20 min with 0.1% Triton X-100 (Sigma-Aldrich, T8787-50ML). Nonspecific epitopes were blocked with 0.5% BSA solution for 1 h at room temperature. Cells were incubated overnight at 4 °C with the primary antibodies as follows: on day 12, cells were incubated with an anti-PAX6 antibody (ThermoFisher Scientific, 14-9914-82, dilution 1:100); on day 27, cells were incubated with an anti-tubulin beta III (TUBB3) antibody (Millipore, MAB1637, dilution 1:400). The cells were then rinsed three times with 1× DPBS and incubated with the secondary antibody Alexa Fluor 488 donkey anti-mouse (ThermoFisher Scientific, A21202, dilution 1:500). The cells were rinsed three times with DPBS, and the nuclei were counterstained with NucBlue Fixed Cell Stain ReadyProbes (ThermoFisher Scientific, R37606). The images were acquired within 48 h using an EVOS FL Auto 2 microscope (ThermoFisher Scientific, AMAFD2000), and the figures were made using the FigureJ plugin in ImageJ software.

Statistics and reproducibility

All statistical analyses were performed in R 79 . The effects of age and sex on the mutation burden of F-hiPSCs were estimated using the Mann–Whitney test ('wilcox.test()' in R). Tests for correlation in the study were performed using 'cor.test()' in R. For cancer driver mutations identified in HipSci F-hiPSCs, a two-sided Fisher test was used to call a mutation as a significant change in allele frequency between the fibroblast and iPSC samples (Supplementary Table 5). A Benjamini–Hochberg procedure was used to correct for multiple hypothesis testing. For the differentiation and immunostaining experiments, three independent biological replicates of each BCOR-mut and BCOR-wt cell line were differentiated, and for each of these replicates, three wells were stained and imaged using immunofluorescence. At every stage of the neural differentiation (day 0, day 12 and day 27), a total of 36 images were analyzed for both the BCOR-mut and BCOR-wt cell lines. Differential gene expression analysis of Insignia B-hiPSCs was performed using DESeq2, which fits a negative binomial generalized linear model for each gene. The default DESeq2 Wald test was used for significance testing, and an adjusted P value threshold of <0.05 was applied (Supplementary Table 13).

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

All datasets generated as part of this study are included in this data availability statement with no omissions. Links to the raw sequencing data (WGS and WES) produced in this study are available from the HipSci project website. Raw data have been deposited in the European Nucleotide Archive under accession numbers ERP006946 (WES, open access samples) and ERP017015 (WGS, open access samples) and the European Genome-phenome Archive under accession number EGAS00001000592 (WES, managed access samples involving healthy donors, following completion and approval of a Data Access Agreement via eDAM at the Wellcome Sanger Institute). The raw sequence files of Insignia samples are deposited at the European Genome-phenome Archive with accession number EGAD00001007029 (WGS, open access samples). The open access data samples are freely available to download, whereas the managed access data are available following a request and a data access agreement (via the Wellcome Sanger Institute electronic Data Access Mechanism). The variant call sets are deposited at Mendeley 80 .
Code availability

All code used for analysis is detailed in Methods and the Reporting summary. The code of bespoke software pertaining to data processing and analysis is available on GitHub, as is the code for the statistical analyses and figures. Differential gene expression analysis was performed using the DESeq2 R package 81 . Signature fitting was conducted using the signature.tools.lib R package 31 . dN/dS ratios were calculated using the dNdScv R package 49 .
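For orientation, a minimal sketch of the DESeq2 step described above is shown below; the inputs `cts` (a featureCounts gene-by-sample count matrix) and `coldata` (per-sample BCOR status and sex) are hypothetical, and the design formula is an assumption rather than the exact model used:

    library(DESeq2)

    # Hypothetical inputs:
    #   cts:     integer matrix of featureCounts output, genes x samples
    #   coldata: data.frame with one row per sample, e.g. columns
    #            bcor = factor("wt"/"mut") and sex = factor("F"/"M")
    dds <- DESeqDataSetFromMatrix(countData = cts,
                                  colData   = coldata,
                                  design    = ~ sex + bcor)  # sex as covariate,
                                                             # since PC2 tracked sex
    dds <- DESeq(dds)  # fits a negative binomial GLM per gene (Wald test)

    # BCOR-mut vs BCOR-wt contrast, adjusted P threshold of 0.05
    res <- results(dds, contrast = c("bcor", "mut", "wt"), alpha = 0.05)
    summary(res)
    sum(res$padj < 0.05, na.rm = TRUE)  # number of differentially expressed genes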
DNA damage caused by factors such as ultraviolet radiation affects nearly three-quarters of all stem cell lines derived from human skin cells, say Cambridge researchers, who argue that whole genome sequencing is essential for confirming whether cell lines are usable. Stem cells are a special type of cell that can be programmed to become almost any type of cell within the body. They are currently used for studies on the development of organs and even the early stages of the embryo. Increasingly, researchers are turning to stem cells as ways of developing new treatments, known as cell-based therapies. Other potential applications include programming stem cells to grow into nerve cells to replace those lost to neurodegeneration in diseases such as Parkinson's. Originally, stem cells were derived from embryos, but it is now possible to derive stem cells from adult skin cells. These so-called induced pluripotent stem cells (iPSCs) have now been generated from a range of tissues, including blood, which is increasing in popularity due to its ease of derivation. However, researchers at the University of Cambridge and Wellcome Sanger Institute have discovered a problem with stem cell lines derived from both skin cells and blood. When they examined the genomes of the stem cell lines in detail, they found that nearly three-quarters carried substantial damage to their DNA that could compromise their use both in research and, crucially, in cell-based therapies. Their findings represent the largest genetic study to date of iPSCs and are published today in Nature Genetics. DNA is made up of three billion pairs of nucleotides, molecules represented by the letters A, C, G and T. Over time, damage to our DNA, for example from ultraviolet radiation, can lead to mutations—a letter C might change to a letter T, for example. "Fingerprints" left on our DNA can reveal what is responsible for this damage. As these mutations accumulate, they can have a profound effect on the function of cells and in some cases lead to tumors. Dr. Foad Rouhani, who carried out the work while at the University of Cambridge and the Wellcome Sanger Institute, said: "We noticed that some of the iPS cells that we were generating looked really different from each other, even when they were derived from the same patient and derived in the same experiment. The most striking thing was that pairs of iPS cells would have a vastly different genetic landscape—one line would have minimal damage and the other would have a level of mutations more commonly seen in tumors. One possible reason for this could be that a cell on the surface of the skin is likely to have greater exposure to sunlight than a cell below the surface and may therefore eventually give rise to iPS cells with greater levels of genomic damage." The researchers used a common technique known as whole genome sequencing to inspect the entire DNA of stem cell lines in different cohorts, including the HipSci cohort at the Wellcome Sanger Institute, and discovered that as many as 72% of the lines showed signs of major UV damage. Professor Serena Nik-Zainal from the Department of Medical Genetics at the University of Cambridge said: "Almost three-quarters of the cell lines had UV damage. Some samples had an enormous amount of mutations—sometimes more than we find in tumors. We were all hugely surprised to learn this, given that most of these lines were derived from skin biopsies of healthy people."
They decided to turn their attention to cell lines not derived from skin, focusing on blood-derived iPSCs as these are becoming increasingly popular due to the ease of obtaining blood samples. They found that while these blood-derived iPSCs, too, carried mutations, they had lower levels of mutations than skin-derived iPS cells and no UV damage. However, around a quarter carried mutations in a gene called BCOR, an important gene in blood cancers. To investigate whether these BCOR mutations had any functional impact, they differentiated the iPSCs into neurons, tracking their progress along the way. Dr. Rouhani said: "What we saw was that there were problems in generating neurons from iPSCs that have BCOR mutations—they had a tendency to favor other cell types instead. This is a significant finding, particularly if one is intending to use those lines for neurological research." When they examined the blood samples, they discovered that the BCOR mutations were not present within the patient: instead, the process of culturing cells appears to increase the frequency of these mutations, which may have implications for other researchers working with cells in culture. Scientists typically screen their cell lines for problems at the chromosomal level—for example, by checking to see that the requisite 23 pairs of chromosomes are present. However, this would not be sufficiently detailed to pick up the potentially major problems that this new study has identified. Importantly, without looking in detail at the genomes of these stem cells, researchers and clinicians would be unaware of the underlying damage present within the cell lines they are working with. "The DNA damage that we saw was at a nucleotide level," says Professor Nik-Zainal. "If you think of the human genome as like a book, most researchers would check the number of chapters and be satisfied that there were none missing. But what we saw was that even with the correct number of chapters in place, lots of the words were garbled." Fortunately, says Professor Nik-Zainal, there is a way around the problem: using whole genome sequencing to look in detail for the errors at the outset. "The cost of whole genome sequencing has dropped dramatically in recent years to around £500 per sample, though it's the analysis and interpretation that's the hardest bit. If a research question involves cell lines and cellular models, and particularly if we're going to introduce these lines back into patients, we may have to consider sequencing the genomes of these lines to understand what we are dealing with and get a sense of whether they are suitable for use." Dr. Rouhani adds: "In recent years we have been finding out more and more about how even our healthy cells carry many mutations, and therefore it is not a realistic aim to produce stem cell lines with zero mutations. The goal should be to know as much as possible about the nature and extent of the DNA damage, to make informed choices about the ultimate use of these stem cell lines. If a line is to be used for cell-based therapies in patients, for example, then we need to understand more about the implications of these mutations so that both clinicians and patients are better informed of the risks involved in the treatment."
10.1038/s41588-022-01147-3
Physics
New tiny atomic beam clock could bring stable timing to places GPS can't reach
Gabriela D. Martinez et al, A chip-scale atomic beam clock, Nature Communications (2023). DOI: 10.1038/s41467-023-39166-1 Journal information: Nature Communications
https://dx.doi.org/10.1038/s41467-023-39166-1
https://phys.org/news/2023-06-tiny-atomic-clock-stable-gps.html
Abstract

Atomic beams are a longstanding technology for atom-based sensors and clocks, with widespread use in commercial frequency standards. Here, we report the demonstration of a chip-scale microwave atomic beam clock using coherent population trapping (CPT) interrogation in a passively pumped atomic beam device. The beam device consists of a hermetically sealed vacuum cell fabricated from an anodically bonded stack of glass and Si wafers, in which lithographically defined capillaries produce Rb atomic beams and passive pumps maintain the vacuum environment. A prototype chip-scale clock is realized using Ramsey CPT spectroscopy of the atomic beam over a 10 mm distance and demonstrates a fractional frequency stability of ≈1.2 × 10⁻⁹/\(\sqrt{\tau}\) for integration times τ from 1 s to 250 s, limited by detection noise. Optimized atomic beam clocks based on this approach may exceed the long-term stability of existing chip-scale clocks, and the leading long-term systematics are predicted to limit the ultimate fractional frequency stability below 10⁻¹².

Introduction

The development of low-power, chip-scale atomic devices including clocks and magnetometers has been enabled by advances in the optical interrogation of atoms confined in microfabricated vapor cells 1 . These miniaturized devices commonly use coherent population trapping (CPT) resonances in alkali atoms, which generate a coherent dark state between hyperfine atomic ground states using two optical fields in a Λ-scheme 2 . Optical probing of the microwave transition avoids the need for bulky microwave cavities, providing a compact and low-power method for probing the atoms and enabling battery-powered operation 3 , 4 . Buffer gases are commonly used to reduce the decoherence rate from wall collisions and narrow the atomic line. As a result, devices such as the chip-scale atomic clock (CSAC) can realize ≈10⁻¹¹ fractional frequency stability at 1000 s of averaging while consuming only 120 mW of power 5 . Thermal drifts and aging of the buffer gas environment, along with light shifts and other systematics, contribute to the long-term instability of buffer gas systems and degrade clock performance in existing CSACs beyond 1000 s of averaging, with a drift rate of ~10⁻⁹ per month. Clocks based on atomic beams and laser-cooled gases operate in ultra-high-vacuum (UHV) environments and avoid shifts from buffer gases, allowing for higher frequency stability and continuous averaging over periods of days or weeks. Laser-cooling technology underpins the most advanced atomic clocks 6 , and while recent efforts in photonic integration 7 , 8 and vacuum technology 9 , 10 , 11 have advanced the state of the art, significant hurdles to miniaturization and low-power operation remain 12 . Atomic beams have played a significant role throughout the history of frequency metrology, serving as commercial frequency standards since the 1960s and as national frequency standards for realization of the SI second 13 , 14 . Miniaturized atomic beams 15 , 16 , 17 , 18 , 19 offer a path for exceeding the long-term stability of existing chip-scale devices while circumventing the complexity and power needs of more advanced laser-cooled schemes. In this work, we demonstrate a chip-scale atomic beam clock built using a passively pumped Rb atomic beam device, as shown in Fig. 1 . The beam device contains a Rb reservoir that feeds a microcapillary array and generates Rb atomic beams in an internal, evacuated cavity.
Fabrication of the device is realized using a stack of lithographically defined planar structures that are anodically bonded to form a hermetic package. Spectroscopic measurements of the atomic flux and beam collimation are presented to demonstrate the successful realization of the atomic beam device. The atomic beam device presents a pathway for realizing low-power, low-drift atomic sensors using microfabricated components and supports further integration with advanced thermal and photonic packaging to realize highly manufacturable quantum sensors. Fig. 1: Overview of chip-scale atomic beam device. a Image of atomic beam device with labeled components (peanut for scale). Rb vapor in the source cavity feeds a buried microcapillary array and forms an atomic beam (indicated by a red-to-blue arrow) in the drift cavity. Non-evaporable getters (NEGs) and graphite maintain the vacuum environment in the device. b Expanded view of the beam device showing component layers as well as Rb pill dispensers, graphite rods, and NEG pumps. c Schematic of the microcapillary array etched in a Si wafer. Each capillary has a 100 µm × 100 µm square cross-section. The array collimates the atomic beam and provides differential pumping between the source and drift regions. d The final anodic bond, which hermetically seals the device, occurs in an ultra-high-vacuum (UHV) chamber. The microwave atomic beam clock is demonstrated using Ramsey CPT interrogation in the atomic beam device. Ramsey spectroscopy of the ⁸⁷Rb ground-state hyperfine transitions is measured across a 10 mm distance and demonstrates quantum coherence across the device. The magnetically insensitive m_F = 0 hyperfine transition is used to realize the atomic beam clock signal, and a clock short-term fractional frequency stability of ≈1.2 × 10⁻⁹ is achieved at 1 s of integration in this prototype device. The performance of this beam clock is limited by technical noise, and an optimized cm-scale device is expected to achieve a stability better than 10⁻¹⁰ at 1 s of integration with a stability floor below 10⁻¹², supported by a detailed analysis of the sources of drift in atomic beam clocks.

Results

The passively pumped atom beam device is fabricated from a multi-layer stack of Si and glass wafers, as shown in Fig. 1 . The layers are anodically bonded to form a hermetically sealed vacuum cell with dimensions of 25 mm × 23 mm × 5 mm and ≈0.4 cm³ of internal volume. Internal components include Rb molybdate Zr/Al pill-type dispensers for generating Rb vapor in an internal source cavity 20 as well as graphite and Zr/V/Fe non-evaporable getters (NEGs) in a separate drift cavity, which maintain the vacuum environment. A series of microcapillaries connect the two internal cavities and produce atomic beams that freely propagate for 15 mm in the drift cavity. The device is heated to generate Rb vapor in the source cavity, and the atomic beam flux and divergence are defined by the capillary geometry 21 . Microfabrication allows for arbitrary modification of the shape, continuity, and divergence of the capillaries to control the atomic beam properties 18 , 19 . The arrangement of alternating glass and Si layers and the internal components that comprise the beam device are shown in Fig. 1b . The features etched in Si are created using deep reactive ion etching (DRIE), and the cavity in the central glass layer is conventionally machined. The two transparent encapsulating glass layers are low-He-permeation aluminosilicate glass 22 with 700 µm thickness.
The microcapillary array is etched into a 2 mm-thick Si layer, which houses the internal components used for passive pumping and sourcing Rb. Two additional layers, a 600 µm-thick Si layer and a 1 mm-thick borosilicate glass layer, act as a spacer to position the microcapillary array near the center of the device thickness and provide volume into which the atomic beam can expand in the drift cavity. The device is assembled by first anodically bonding the four upper layers under ambient conditions (see Methods) to create a preform structure. The four-layer preform is populated with the getters and Rb pill dispensers and topped with the final glass wafer. The stack is placed in an ultra-high-vacuum (UHV) chamber and baked at 520 K for 20 h to degas the components, and the NEGs are thermally activated using laser heating to remove their passivation layer before sealing the device. The final interface is then anodically bonded to hermetically seal the vacuum device (see Fig. 1d ). Rb atomic beams are generated in the drift cavity as vapor from the source cavity flows through the microcapillary array 18 . The atomic flux is determined by the source-region Rb density and the geometry of the capillary array, which consists of 10 straight capillaries with 100 µm × 100 µm square cross section, 50 µm spacing, and 3 mm length. The flux through the capillaries and the angular profile of the atomic beam are well described by analytic molecular flow models based on the capillary's aspect ratio L/a, where L is the capillary length and a is its width 21 . The near-axis flux is similar to that of a "cosine" emitter for angles θ less than a/L from normal. The total flux through the capillary array is \(F_n = \frac{1}{4} w\, n_{\mathrm{Rb},1} \bar{v} A_c\), where \(w = 1/(1 + 3L/4a)\) is the transmission probability of the channel, \(n_{\mathrm{Rb},1}\) is the Rb density in the source region, \(\bar{v}\) is the mean thermal speed of the atoms, and \(A_c\) is the cross-sectional area of the capillary array (here \(10a^2\)). More complex capillary geometries, such as non-parallel or cascaded collimators, can be used to further engineer the beam profile or reduce off-axis flux 18 , 19 . The performance of the atomic beam device is measured using optical spectroscopy on the Rb D2 line at ~780 nm. The atoms are probed using a 5 µW elliptical laser beam with \(w_y\) ≈ 2100 µm and \(w_z\) ≈ 350 µm (1/e² radius) normally incident on the device surface (propagating along the x axis). The total density \(n_{\mathrm{Rb},1}\) of ⁸⁵Rb and ⁸⁷Rb (including all spin states) in the source cavity is measured using absorption spectroscopy with the device temperature varying between 330 K and 385 K. A representative spectrum measured in the source cavity at 363 K (Fig. 2a ) shows a Doppler-broadened spectrum consistent with thermal Rb vapor (\(\bar{v}\) ≈ 300 m/s) and a density of \(n_{\mathrm{Rb},1}\) ≈ 2.4 × 10¹⁸ m⁻³. The measured \(n_{\mathrm{Rb},1}\) is consistent within experimental uncertainty with published values for the vapor pressure of liquid Rb metal across the temperature range probed. Fig. 2: Spectroscopic beam cell characterization. a Source cavity absorption (gray) and drift cavity fluorescence at z = 1 mm (red) and z = 11 mm (blue) measure the Rb number density, flux, and the velocity distribution normal to the device surface at 363 K. b Fluorescence at z = 11 mm includes narrow peaks from the atomic beam signal as well as a broad signal corresponding to background Rb vapor (light blue curve).
Passive and differential pumping generates a large (≈6500×) Rb partial pressure differential between the source and drift cavities. c The measured atomic beam flux \(F_{\mathrm{meas}}\) and spectral FWHM are plotted versus distance from the capillary array at 363 K. The flux prediction based on \(n_{\mathrm{Rb},1}\) (black dashed lines) and the geometrical FWHM limit set by the fluorescence imaging (red dotted line) are shown. The inset shows the estimated total capillary flux \(F_{\mathrm{tot}}\) versus device temperature and a comparison to the total expected capillary array flux \(F_n\) (dashed line). Error bars represent 68% confidence intervals. The flux and angular divergence of the atomic beams are measured using fluorescence spectroscopy in the drift cavity. Fluorescence is collected using a 1:1 imaging system with ≈1.9% collection efficiency mounted at 45° from the beam axis in the x-z plane. The imaged area corresponds to a 1 mm × 1.4 mm region in the x-y plane. Fluorescence spectra scanning around the ⁸⁵Rb F = 3 → F′ = 4 transition (labeled as zero optical detuning) are measured at varying distances along z from the capillary array. Example spectra at z = 1 mm and z = 11 mm at 363 K (shown in Fig. 2a, b ) demonstrate narrow spectral features corresponding to the atomic beam signal and broader features arising from thermal background Rb vapor. The measured atomic beam flux is calculated from the number of detected atoms in the imaged volume \(N_{\mathrm{det}}\) (see Methods) as \(F_{\mathrm{meas}} = N_{\mathrm{det}} v_{\mathrm{beam}}/L\), where \(v_{\mathrm{beam}}\) is the most probable longitudinal velocity of the atomic beam and L is the length over which the atoms interact with the probe beam. At 363 K and z = 1 mm, \(F_{\mathrm{meas}}\) = 5 × 10¹¹ s⁻¹, and the FWHM of the fluorescence lines is ≈150 MHz, corresponding to a transverse velocity FWHM of ≈120 m/s. At this distance, ≈65% of the total capillary array is probed, and the total atomic beam flux is estimated to be \(F_{\mathrm{tot}}\) = 7.7 × 10¹¹ s⁻¹, consistent with the measured density in the source cavity and molecular flow predictions through the capillaries (see Fig. 2c inset). Near the end of the drift cavity (z = 10 mm), \(F_{\mathrm{meas}}\) = 3.0 × 10¹⁰ s⁻¹, or ≈3.9% of the total flux, owing to the divergence of the atomic beam. This value matches the theoretical expectation (Fig. 2c, dashed black line) of 3.2 × 10¹⁰ s⁻¹ based on \(n_{\mathrm{Rb},1}\), the detected area of ≈1.4 mm², and the angular distribution function of our capillaries under molecular flow 21 . This agreement indicates that atomic beam loss due to collisions is consistent with zero within our level of systematic uncertainty. We note that the relatively strong divergence of this beam is typical of microcapillary collimation due to the presence of two atomic flux components, one direct (line-of-sight to the source) and the other indirect (diffuse scatter from the capillary walls). For measurements of direct atoms (\(\theta < a/L\)), the atomic flux within this range is \(\approx \sin^2(\theta) F_{\mathrm{tot}}/w\), or ≈2.6% of \(F_{\mathrm{tot}}\) for the presented capillary geometry. The beam fluorescence FWHM is ≈40 MHz at this distance, set primarily by the range of x-velocities collected in the imaging system. At \(F_{\mathrm{tot}}\) = 7.7 × 10¹¹ s⁻¹, 10 years of sustained operation would require 20 mm³ of metallic Rb.
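As a quick numerical check of the molecular-flow expressions above, using only values quoted in the text (R used here simply as a calculator):

    # Capillary geometry and source conditions from the text
    a     <- 100e-6          # capillary width (m), square cross-section
    L     <- 3e-3            # capillary length (m)
    n_Rb  <- 2.4e18          # source-cavity Rb density at 363 K (m^-3)
    v_bar <- 300             # mean thermal speed (m/s)
    A_c   <- 10 * a^2        # total cross-sectional area of the 10-capillary array

    # Transmission probability of a long square channel: w = 1/(1 + 3L/4a)
    w <- 1 / (1 + 3 * L / (4 * a))        # ~0.043

    # Total flux through the array: F_n = (1/4) w n_Rb v_bar A_c
    F_n <- 0.25 * w * n_Rb * v_bar * A_c  # ~7.7e11 atoms/s, matching F_tot

    # Fraction of the flux in the direct (line-of-sight) cone, theta < a/L:
    # ~sin^2(a/L)/w of the total, i.e. ~2.6% as stated in the text
    sin(a / L)^2 / w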
Reported flux and density values have an estimated statistical uncertainty of 15% and systematic uncertainty of 30%. From the lack of measured collisional loss over a 10 mm distance in the drift cavity, we estimate an upper bound on the background pressure of ~1 Pa. For collisional loss to be significant given our systematic uncertainty in the absolute value of the atomic flux, the transport mean free path of Rb would need to be <1 cm, which would require partial pressures of >12 Pa of H2 or >3 Pa of N2, both common vacuum contaminants 23. The true background pressure may be significantly lower due to the high gettering efficiency of the NEGs for most common background gases, including H2, N2, O2, CO, and CO2. The pumping speeds are estimated to be ≈1.4 L/s for H2 and ≈0.14 L/s for CO at room temperature 24. He is not pumped by the passive getters, and the steady-state He partial pressure will approach the ambient value of ≈0.5 Pa. However, this equilibration may be slowed by our use of low-He-permeation aluminosilicate glass 22. Operation of the atomic beam device over many months indicates that the rate of oxidation of deposited Rb is negligible. Fluorescence from thermal background Rb vapor in the drift cavity is evident in all measured spectra, and the background Rb density at z ≈ 11 mm is measured to be \(n_{\mathrm{Rb},2}\) = 3.7 × 10 14 m −3 (see Fig. 2b), equivalent to a partial pressure of ≈2 × 10 −6 Pa. This density is ≈6500× lower than the Rb density in the source cavity due to differential pumping from the microcapillary array and passive pumping, primarily from the graphite getters. Graphite getters are commonly used in atomic clocks due to graphite’s affinity for intercalating alkali vapor, and the pressure differential implies a Rb pumping speed of ≈2 L/s for the ≈0.9 cm² of graphite surface area used. The graphite getters used (Entegris CZR-2) have high porosity and low strength relative to other commonly available graphites 25, 26. Recent work has demonstrated that graphite can also act as a solid-state reservoir for alkali metals 27, 28, and highly oriented pyrolytic graphite (HOPG) can serve as both an alkali getter and source depending on the operating temperature 29. The beam device has been operated intermittently (≈1000 operation hours) over a period of 15 months without observed degradation of the vacuum environment. The deposited Rb metal in the source cavity is slowly consumed during normal device operation, and partial laser-thermal activation of the pill dispensers has been performed 9 times to deposit additional Rb metal in this cavity. Normal operation of the beam device is observed within minutes after activation and thermal equilibration of the device, and no period of excessive background pressure is observed. Complete activation of the pill dispensers in a single process is likely achievable but has not been attempted in this device. Saturation of the NEGs or graphite has not been observed, indicating that any real or virtual leaks are small, although absolute measurements of the pressure in the presented device have not been made. The potential utility of the chip-scale atomic beam device is demonstrated using CPT Ramsey spectroscopy of the 87Rb ground-state hyperfine splitting (\(\nu_{HF}\) ≈ 6.835 GHz) over a 10 mm distance, similar to previous laboratory CPT atomic beam clocks 15, 30, 31. We address the D1 F = 1,2 → F′ = 2 Λ-system (Fig.
3a) using two circularly polarized, ≈250 µW laser beams propagating along the x-direction (w_y ≈ 550 µm, w_z ≈ 150 µm). The laser light is phase modulated at \(\nu_{\mathrm{mod}}\) using a fiber-based electro-optic modulator, and the carrier frequency and a 1st-order sideband address the CPT Λ-system with approximately equal optical powers. The modulated light is split into two equal-length, parallel paths which intersect the atomic beam at z = 1 mm and z = 11 mm to perform two-zone Ramsey spectroscopy. Fluorescence from atoms in the 2nd zone (1.4 mm² imaged area) is collected on a Si photodiode with ≈1.9% efficiency. A magnetic field of ≈2.8 × 10 −4 T is applied along the x-direction and separates the Zeeman-state-dependent transitions. Fig. 3: Ramsey CPT spectroscopy of the atomic beam. a Schematic of two-zone Ramsey interrogation of the atomic beam. b Level diagram illustrating the CPT Λ-systems. c Spectroscopic signal observed in the 2nd Ramsey zone shows three MHz-wide CPT features. d The central CPT feature contains the clock-signal Ramsey fringe with ≈15 kHz width and ≈15% fluorescence contrast. CPT spectra measured in the second Ramsey zone at 363 K (Fig. 3c) demonstrate three MHz-wide CPT resonances corresponding to the m_F = −1, 0, and 1 Λ-systems, arising from the ≈1 μs interaction with the optical fields in the 2nd Ramsey zone. At the center of each of these resonances (Rabi pedestals) are narrower Ramsey fringes arising from interaction with light in both Ramsey zones. The central Ramsey fringe (Fig. 3d) serves as our clock signal and has a fringe width of ≈15 kHz arising from the 30 μs transit time between the two zones. The signal height is ≈3.6 pW and the contrast relative to the one-photon fluorescence is ≈15%. Contrast is limited in our probing scheme by the spread of atoms among m_F levels and by optical pumping out of the Λ-system. Other probing schemes involving pumping with both circular polarizations can reduce loss outside of the desired m_F level and increase fringe contrast 32, 33, 34. The clock fringe is offset by ≈4.5 kHz from the vacuum value of \(\nu_{HF}\) due to the second-order Zeeman effect. Optical path length uncertainty of 0.5 mm between the two Ramsey zones limits comparison to the vacuum value of \(\nu_{HF}\) at the ≈400 Hz level. An atomic beam clock is realized using the central Ramsey fringe to stabilize the CPT microwave modulation frequency. For this measurement, the beam device is heated to 392 K, and the observed peak-to-valley height of the clock Ramsey fringe signal is ≈16 pW using 200 μW of optical power in each Ramsey zone. An error signal is formed using 150 Hz square-wave modulation of the clock frequency at an amplitude of 11 kHz, and feedback is used to steer the microwave synthesizer’s center frequency \(\nu_{\mathrm{clock}}\) with a bandwidth of ≈1 Hz. The synthesizer is referenced to a hydrogen maser, and a time series of \(\nu_{\mathrm{clock}}\) is recorded. The measured overlapping Allan deviation (ADEV) of the fractional frequency stability of \(\nu_{\mathrm{clock}}\) (Fig. 4) demonstrates a short-term stability of ≈1.2 × 10 −9/\(\sqrt{\tau}\) from 1 s to 250 s, limited by the signal height and the ≈13.5 fW/\(\sqrt{\mathrm{Hz}}\) noise equivalent power of the amplified Si detector used.
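The ≈30 μs transit time, ≈15 kHz fringe width, and ≈4.5 kHz Zeeman offset quoted above can be checked from first principles. The sketch below assumes the standard 87Rb second-order Zeeman clock coefficient (≈575 Hz/G²), which is consistent with the ≈575 Hz shift at 10 −4 T cited later in the Discussion:

```python
# Sketch: back-of-envelope checks on the Ramsey fringe width and the
# second-order Zeeman offset of the clock fringe.
import math

kB = 1.380649e-23                        # Boltzmann constant (J/K)
m_Rb87 = 86.909 * 1.66054e-27            # 87Rb mass (kg)
T = 363.0                                # device temperature (K)

v_beam = math.sqrt(3 * kB * T / m_Rb87)  # most probable longitudinal speed
t_transit = 10e-3 / v_beam               # 10 mm between Ramsey zones
fringe_fwhm = 1 / (2 * t_transit)        # central Ramsey fringe width
print(f"v_beam ~ {v_beam:.0f} m/s, transit ~ {t_transit*1e6:.0f} us, "
      f"fringe ~ {fringe_fwhm/1e3:.0f} kHz")   # ~320 m/s, ~30 us, ~16 kHz

B = 2.8e-4                               # applied field (T) = 2.8 G
K_z2 = 575.15                            # Hz/G^2, 87Rb second-order Zeeman
print(f"Zeeman offset ~ {K_z2 * (B*1e4)**2 / 1e3:.1f} kHz")   # ~4.5 kHz
```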
Straightforward improvement in fluorescence collection efficiency and fringe contrast could improve the short-term stability to below 1 × 10 −10/\(\sqrt{\tau}\), similar to the performance of existing chip-scale atomic clocks 1. Quantum projection noise limits the potential stability of the presented measurement to ≈9 × 10 −12/\(\sqrt{\tau}\), assuming a thermal 87Rb beam at 392 K, a detectable flux of 1.8 × 10 10 s −1, and a fringe contrast of 25%. Fig. 4: Beam clock stability measurement. The Allan deviation (ADEV) of the chip-scale atomic beam clock frequency (black points) is measured for integration times τ < 250 s. The short-term fractional frequency stability σ_y(τ) is ≈1.2 × 10 −9/\(\sqrt{\tau}\) (red line) over this range. Error bars represent 68% confidence intervals. Discussion We have demonstrated a chip-scale atomic clock based on miniaturized atomic beams. The key components of the passively pumped atomic beam device are planar, lithographically defined structures etched in Si and glass wafers, compatible with volume microfabrication. The 10-channel microcapillary array etched into one of the Si device layers provides a total atomic flux of ≈7.7 × 10 11 s −1 at 363 K, and ≈3.9% of the atoms pass through a 1.4 mm² detection area 10 mm downstream. The measured performance of the atomic beam matches expectations based on molecular flow through the collimator array with no free parameters, indicating that collisions with background gases are minimal and that the background pressure is ~1 Pa or lower. Passive and differential pumping of the Rb vapor supports the ≈6500× Rb partial pressure differential between the source and drift cavities and enables high beam flux while minimizing the background Rb pressure in the drift cavity. The presented beam system has been operated intermittently for 15 months without degradation of the vacuum environment or saturation of the passive pumps. The realization of a microwave Ramsey CPT beam clock demonstrates the potential utility of the atomic beam device. CPT Ramsey fringes are measured using the atomic beam over a 10 mm distance, demonstrating atomic coherence across the drift cavity. A clock signal is formed using the magnetically insensitive m_F = 0 transitions between the 87Rb hyperfine ground states, and Ramsey fringes at an operating temperature of 392 K are measured to be 15 kHz wide with 16 pW of CPT signal. This clock signal is used to stabilize the microwave oscillator driving the CPT transitions, and the clock demonstrates a short-term fractional frequency stability of ≈1.2 × 10 −9/\(\sqrt{\tau}\) from 1 s to 250 s. The presented short-term stability is limited by the available signal-to-noise ratio (SNR) of ≈1200 at 1 s; improvement of the short-term stability to below 1 × 10 −10/\(\sqrt{\tau}\) appears feasible through straightforward improvements to the collection optics and the use of a higher-contrast pumping scheme, which would make the clock competitive with existing buffer-gas-based miniature atomic clocks 32, 33, 35. The presented beam clock approach has the potential to exceed existing chip-scale atomic clocks in both long-term stability and accuracy 36. Commercial atomic beam clocks based on microwave excitation of the clock transition, using a Ramsey length of ≈15 cm, achieve a stability of 10 −14 at 5 days and an accuracy of 5 × 10 −13.
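For readers unfamiliar with the statistic, the overlapping ADEV used to characterize the clock above can be computed from a time series of fractional-frequency data as sketched below; the noise level is synthetic and merely mimics the reported ≈1.2 × 10 −9/√τ white-frequency behavior:

```python
# Sketch: overlapping Allan deviation from fractional-frequency samples y[i]
# taken at interval tau0. The noise below is synthetic/illustrative only.
import numpy as np

def oadev(y, tau0, m):
    """Overlapping Allan deviation at averaging time m*tau0."""
    x = np.concatenate(([0.0], np.cumsum(y))) * tau0   # integrate y to phase data
    d = x[2*m:] - 2 * x[m:-m] + x[:-2*m]               # overlapping 2nd differences
    return np.sqrt(np.mean(d**2) / (2 * (m * tau0)**2))

rng = np.random.default_rng(0)
y = 1.2e-9 * rng.standard_normal(200_000)   # white FM noise, sigma_y(1 s) ~ 1.2e-9
for m in (1, 10, 100):
    print(m, oadev(y, 1.0, m))              # expect ~1.2e-9 / sqrt(tau)
```

For white frequency noise this estimator averages down as 1/√τ, which is the behavior reported for the clock between 1 s and 250 s.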
Many of the key systematics in beam clocks scale with the clock transition linewidth, and hence inversely with the Ramsey distance, implying that a 15 mm beam clock could achieve stability at the 10 −13 level. Work on CPT atomic beam clocks using Na and a 15 cm Ramsey length achieved a stability of 1.5 × 10 −11 at 1000 s without evidence of a flicker floor 31, 37. Projecting to Cs with a 15 mm Ramsey length implies an achievable stability at the level of 1.0 × 10 −11 at 1000 s, equivalent to less than 100 ns of timing error at 1 day of integration. Realization of this stability will depend on managing drift in the optical, vacuum, and atomic environments in a fully miniaturized beam clock system. The leading systematic shifts that will impact a compact beam clock include Doppler shifts, Zeeman shifts, end-to-end cavity phase shifts, collisional shifts, and light shifts. Each of these shifts has been studied extensively in conventional microwave atomic beam frequency standards 13, 38 and in CPT beam clocks 31. We evaluate these requirements assuming a stability goal of 10 −12, equivalent to ≈6.8 mHz stability of \(\nu_{\mathrm{clock}}\), for a 1.5 cm Ramsey length. Optical path length instability of the CPT laser beam can lead to both Doppler shifts and end-to-end cavity phase shifts. Doppler shifts arise from CPT laser beam pointing drift (thermal or aging) along the atomic beam axis, which shifts the clock frequency at ≈7.5 kHz rad −1, requiring µrad beam pointing stability to reach 10 −12 frequency stability. Optical path length stability at the 10 nm level is needed to minimize end-to-end cavity phase shifts. This shift can be minimized using symmetrical Ramsey beam paths, which make thermal expansion common mode along the two Ramsey arms and largely eliminate the bias. Asymmetrical path length variation can arise from thermal gradients along the beam paths and will induce clock shifts. For 15 mm beam paths fabricated using glass or Si substrates, 100 mK temperature uniformity is sufficient to achieve the desired stability. Collisional shifts place limits on the vacuum stability required in the atomic beam device. Common background gases such as H2 and He induce collisional shifts of \(\nu_{\mathrm{HF}}\) at the level of 5 Hz Pa −1, and 1 mPa pressure stability is needed to achieve 10 −12 fractional frequency stability 23, 39. Given the inferred pressure of ~1 Pa in our device at 363 K, 100 mK temperature stability is sufficient, assuming the quantity of background gas itself is constant. Stabilizing the He partial pressure in passively pumped devices is challenging due to the high He diffusivity in many materials and insufficient getter material for He. We use low-He-permeation aluminosilicate glass to reduce the rate of He ingress, suppressing He partial pressure variations 22, 40, 41, 42. Collisions with background Rb atoms along the drift cavity generate spin-exchange shifts of \(\nu_{\mathrm{HF}}\), and the magnitude of the Rb-Rb collisional shift depends on the occupancy ratio between the ground-state hyperfine levels before CPT interrogation. Assuming optical pumping into the F = 2 ground state, \(\nu_{\mathrm{HF}}\) shifts at ≈3400 Hz Pa −1. The total clock shift is estimated to be ≈6.8 mHz for our demonstrated ≈2 × 10 −6 Pa Rb background partial pressure 43, placing only lax requirements on the background pressure stability.
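The stability requirements just stated follow directly from the quoted shift coefficients; a minimal budget check is sketched below (the Zeeman entry anticipates the ≈575 Hz shift at ≈10 −4 T discussed just after this, using its quadratic field scaling; all other coefficients are quoted above):

```python
# Sketch: convert the quoted shift coefficients into stability requirements
# for a 1e-12 clock. All coefficients are taken from the surrounding text.
nu_hf = 6.834682e9                  # 87Rb hyperfine frequency (Hz)
budget = 1e-12 * nu_hf              # allowed clock shift

print(f"budget:     {budget*1e3:.1f} mHz")
print(f"pointing:   {budget / 7.5e3 * 1e6:.1f} urad")  # 7.5 kHz/rad Doppler slope
print(f"background: {budget / 5 * 1e3:.1f} mPa")       # ~5 Hz/Pa (H2, He)
print(f"Rb partial: {budget / 3400 * 1e6:.1f} uPa")    # ~3400 Hz/Pa spin exchange
# Second-order Zeeman: a ~575 Hz shift at 1e-4 T scales as B^2, so the
# allowed fractional field variation is budget / (2 * 575 Hz):
print(f"B-field:    {budget / (2 * 575) * 1e6:.0f} ppm")  # ~6 ppm
```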
Intra-beam collisional shifts also exist at approximately the same level and can be reduced using cascaded collimators to reduce the atomic beam density 18. Light shifts are another significant source of clock instability and can arise from both the ac Stark effect 44 and incomplete optical pumping into the CPT dark state 45, 46. The magnitude and sign of the light shifts can depend strongly on the intensity ratio of the CPT driving fields and the pumping scheme used 47, and the shift scales inversely with the Ramsey time. We estimate the light shift sensitivity in the proposed geometry at the level of 1 × 10 −12 for a 0.1% change in the CPT field intensity ratio, based on measured light shifts in a cold-atom clock using \(\sigma_+\)/\(\sigma_-\) optical pumping, nominally equal CPT field intensities, and a 4 ms Ramsey time 47. This level of stability may require active monitoring of the optical modulation used to generate the CPT fields 42, 48, 49. Several methods have been developed to manage light shifts in atomic clocks using multiple measurements of the clock frequency, such as auto-balanced Ramsey spectroscopy 50, 51 or power-modulation spectroscopy 52, 53. Zeeman shifts arise from variations in the quantization magnetic field; at a field of ≈10 −4 T (sufficient to separate the magnetically sensitive m_F ≠ 0 transitions from the clock transition), the Zeeman effect shifts \(\nu_{HF}\) by ≈575 Hz. This dictates a field stability of six parts per million (ppm), which can be achieved using intermittent interrogation of an m_F ≠ 0 transition to correct the field strength. Considering each of the common sources of drift summarized in Table 1, an ultimate fractional frequency stability at or below the level of 10 −12 appears feasible in a chip-scale atomic beam clock. The presented beam clock offers a path for realizing low size, weight, and power (low-SWaP) atomic clocks. Future efforts will focus on realizing this long-term clock stability goal using integration with micro-optical and thermal packaging to produce a fully integrated device at the size and power scale of existing CSACs. Such a device should achieve sub-µs timing error at a week of integration and would contribute to low-SWaP timing holdover applications. The chip-scale beam device presented here is a general platform for quantum sensing, and future work using this system could explore applications including inertial sensing using atom interferometry, electrometry using Rydberg spectroscopy, and higher-performance compact clocks using optical transitions. Table 1 Expected chip-scale clock systematics Methods Fabrication of atom beam device Features including two internal cavities and the microcapillary array are etched into Si wafers using deep reactive-ion etching (DRIE). The beam cell is assembled by first anodically bonding the Si layers, the intermediate glass layer, and one encapsulating glass layer to create a preform structure. The preform bonds are performed in air at 623–673 K using a bonding voltage of 800−900 V for several hours. The Rb pills, graphite rods, and NEGs are loosely held in cavities etched in the 2-mm-thick Si wafer, leaving room for expansion during operation. The final bond, which hermetically seals the package, occurs in a UHV chamber using an electrode to press the final encapsulating glass layer onto the preform.
The device is vacuum baked at ≈1 × 10 −5 Pa and 520 K for 20 h to reduce volatile contaminants, and the NEGs are thermally activated using a few W of 975 nm laser light focused onto each getter for ≈600 s. The NEGs are observed to glow red during activation, consistent with a temperature of ~1170 K. After NEG activation, the final anodic bond hermetically seals the device. Pressure in the UHV chamber is maintained at ≈3.5 × 10 −5 Pa for the bond, with the cell at ≈640 K and a bonding voltage of −1800 V applied for 21 h. The sealed device is removed from the UHV bonding chamber and mounted on a temperature-controlled plate for operation in air. Pill dispensers (1 mm diameter, 0.6 mm thickness, 0.4 mg total Rb content) are activated using laser heating with a few W of 975 nm light to a temperature of ≈950 K until a stable Rb density is observed in the source cavity. Atom beam flux characterization The atomic beam flux is measured using fluorescence spectroscopy on the Rb D2 transitions as shown in Fig. 2 19, 54. Spectroscopy is performed using a 5 µW elliptical laser beam traveling normal to the beam device surface with 1/e² radii of w_y ≈ 2100 µm and w_z ≈ 350 µm. The peak intensity is ≈0.1 mW/cm², and the low intensity limits optical pumping during transit through the probing beam. Atomic fluorescence is imaged using a 1:1 imaging system mounted at 45° in the x-z plane to collect ≈1.9% of the atomic fluorescence onto a 1 mm × 1 mm Si photodiode. The effective probed volume at constant interrogating intensity is ≈1 mm × 1.4 mm × \(w_z\sqrt{\pi/2}\), or ≈0.6 mm³, and this volume can be translated across the drift region. The factor \(L_z = w_z\sqrt{\pi/2}\) accounts for the Gaussian intensity variation along the z-direction. The flux through this volume is \(F_{\mathrm{meas}} = N_{\mathrm{det}} v_{\mathrm{beam}}/L_z\), where \(N_{\mathrm{det}}\) is the number of atoms detected in the probed volume, \(v_{\mathrm{beam}} = \sqrt{3k_B T/m}\) is the most probable longitudinal velocity, k_B is Boltzmann’s constant, and m is the atomic mass. We measure \(N_{\mathrm{det}}\) using the integrated spectrum of radiated power \(\Phi_{\mathrm{Total}} = \int P(\omega)\, d\omega\) across the 85Rb F = 3 → F′ = 4 transition, where P(ω) is the measured optical power at angular detuning ω. At low saturation intensity, the integrated spectral power per atom is \(\Phi_0 = \frac{hc\pi}{4\lambda} s \Gamma^2\) and \(N_{\mathrm{det}} = \Phi_{\mathrm{Total}}/\Phi_0\), where h is Planck’s constant, c is the speed of light, λ is the D2 transition wavelength, s is the transition saturation parameter, and \(\Gamma\) ≈ 2π × 6.067 MHz is the natural linewidth of the D2 transition. Data availability Data underlying the results of this study are available from the authors upon request. Code availability The codes that support the findings of this study are available from the authors upon request.
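A sketch of this Methods pipeline, turning an integrated fluorescence spectrum into an atom number and flux, is given below. The saturation parameter and the integrated power Φ_Total are illustrative assumptions, not measured values from the paper:

```python
# Sketch of the flux estimate in the Methods: detected-atom number from the
# integrated fluorescence spectrum, then flux from the beam velocity.
# Phi_total and s below are illustrative assumptions, not data.
import math

h, c = 6.62607015e-34, 2.99792458e8    # Planck constant, speed of light (SI)
lam = 780.241e-9                        # Rb D2 wavelength (m)
Gamma = 2 * math.pi * 6.067e6           # D2 natural linewidth (rad/s)
s = 0.06                                # saturation parameter (assumed, weak probe)

Phi0 = (h * c * math.pi / (4 * lam)) * s * Gamma**2   # power per atom, integrated over omega

kB, m = 1.380649e-23, 84.912 * 1.66054e-27            # 85Rb mass for this transition
v_beam = math.sqrt(3 * kB * 363 / m)                  # most probable speed (m/s)
Lz = 350e-6 * math.sqrt(math.pi / 2)                  # effective probe length (m)

Phi_total = 1.2                        # hypothetical integrated spectrum (W rad/s)
N_det = Phi_total / Phi0               # atoms in the probed volume
print(f"N_det ~ {N_det:.2e}, F_meas ~ {N_det * v_beam / Lz:.2e} atoms/s")
```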
A new type of miniature atomic clock could provide better timing over the span of weeks and months compared with current systems. Researchers at the National Institute of Standards and Technology (NIST), in collaboration with researchers from Georgia Tech, have made the first-of-its-kind chip-scale beam clock. Their work has been published in Nature Communications. Atomic clocks take many forms, but the oldest and one of the most prominent designs is built using atomic beams. These clocks send a beam of atoms through a vacuum chamber. At one end of the chamber, the atoms are set in a specific quantum state, and they start "ticking." At the other end their ticking rate is measured or "read out." Using the atoms' precise ticking rate, other clocks can be compared to atomic beam clocks, and adjusted to match their timing. NIST has been using atomic beams for timekeeping since the 1950s. For decades, beam clocks were used to keep the primary standard for the second, and they are still part of NIST's national timekeeping ensemble. Beam clocks are precise, stable and accurate, but they're currently not the most portable. The vacuum chambers where the atoms travel are key to the success of these clocks, but they're bulky in part due to the size of the microwave cavity used to probe the atomic "ticking." The vacuum chamber for NIST-7, the last beam clock used for the primary frequency standard in the U.S., was more than 2.5 meters or 8 feet long. Smaller commercial clocks about the size of a briefcase are common, but they still require a significant amount of power (about 50 watts) to run. For comparison, smartphones require about a third of a watt for typical operation. Chip-scale atomic clocks (CSACs) were developed by NIST in 2001. Advances in microfabrication techniques let NIST make vapor cells, tiny chambers where the clock's atoms are held and measured, the size of a grain of rice; the entire clock is about the size of a piece of sushi. These clocks consume very little power and can run on batteries to provide timing in critical situations where GPS can't reach. CSACs have found numerous applications in underwater oil and gas exploration, military navigation, and even telecommunications. However, the clocks' timekeeping tends to drift when temperatures shift and the gas surrounding the atoms degrades. "The CSAC is low-power and has high performance given its size. It's a wonderful device, but it does drift after running for a few thousand seconds," said William McGehee, a physicist at NIST. "Beam clocks have been around since the 1950s and are stable, but still need a lot of power. What if we could combine the best aspects of these two systems?" Using microfabrication techniques learned from the CSAC, the group fabricated a chip-scale atomic beam device using a stack of etched silicon and glass layers. This device is a highly miniaturized version of the chambers that have been used in atomic beam clocks like NIST-7 and is about the size of a postage stamp. Atomic vapor cell construction techniques developed at NIST and etched microcapillary arrays developed at Georgia Tech were key to shrinking the vacuum chambers of larger beam clocks. In the device, one chamber contains a small pill of rubidium. That chamber heats up, releasing a stream of rubidium atoms through microcapillaries, channels only 100 micrometers wide. 
Those tiny channels connect to another chamber with materials that can absorb—or collect—individual gas molecules, called non-evaporable getters, or NEGs, which pull the rubidium atoms along and collect them, keeping the vacuum in the microcapillaries clean. Tiny rods of graphite also help collect stray atoms through the process. Right now, this chip-scale beam device is a prototype for a miniature atomic beam clock. Initial tests of the chip-scale beam clock showed performance at a level slightly worse than existing CSACs, but the team sees a path toward improved stability. The researchers hope to push their precision by another factor of 10, and to exceed the stability of existing CSACs by 100 times over week time scales.
10.1038/s41467-023-39166-1
Medicine
Study illuminates sugar's role in common kidney disease
Sienna R. Li et al, Glucose absorption drives cystogenesis in a human organoid-on-chip model of polycystic kidney disease, Nature Communications (2022). DOI: 10.1038/s41467-022-35537-2 Journal information: Nature Communications
https://dx.doi.org/10.1038/s41467-022-35537-2
https://medicalxpress.com/news/2023-01-illuminates-sugar-role-common-kidney.html
Abstract In polycystic kidney disease (PKD), fluid-filled cysts arise from tubules in kidneys and other organs. Human kidney organoids can reconstitute PKD cystogenesis in a genetically specific way, but the mechanisms underlying cystogenesis remain elusive. Here we show that subjecting organoids to fluid shear stress in a PKD-on-a-chip microphysiological system promotes cyst expansion via an absorptive rather than a secretory pathway. A diffusive static condition partially substitutes for fluid flow, implicating volume and solute concentration as key mediators of this effect. Surprisingly, cyst-lining epithelia in organoids polarize outwards towards the media, arguing against a secretory mechanism. Rather, cyst formation is driven by glucose transport into lumens of outwards-facing epithelia, which can be blocked pharmacologically. In PKD mice, glucose is imported through cysts into the renal interstitium, which detaches from tubules to license expansion. Thus, absorption can mediate PKD cyst growth in human organoids, with implications for disease mechanism and potential for therapy development. Introduction Autosomal dominant polycystic kidney disease (PKD) is commonly inherited as a heterozygous, loss-of-function mutation in either PKD1 or PKD2 , which encode the proteins polycystin-1 (PC1) or polycystin-2 (PC2), respectively 1 , 2 . PKD is characterized by the growth of large, fluid-filled cysts from ductal structures in kidneys and other organs, and is among the most common life-threatening monogenic diseases and kidney disorders 3 . Tolvaptan (Jynarque), a vasopressin receptor antagonist that decreases water absorption into the collecting ducts, was recently approved for treatment of PKD in the United States, but only modestly delays cyst growth and has side effects that limit its use 4 , 5 . At the molecular level, PC1 and PC2 form a receptor-channel complex at the primary cilium that is poorly understood but possibly acts as a flow-sensitive mechanosensor 6 , 7 , 8 , 9 , 10 , 11 . Loss of this complex results in the gradual expansion and dedifferentiation of the tubular epithelium, including increased proliferation and altered transporter expression and localization 12 , 13 , 14 . As mechanisms of PKD are difficult to decipher in vivo, and murine models do not fully phenocopy or genocopy the human disease, we have developed a human model of PKD in vitro 15 , 16 , 17 . We, together with other groups around the world, have invented methods to derive kidney organoids from human pluripotent stem cells (hPSC), which contain podocyte, proximal tubule, and distal tubule segments in contiguous, nephron-like arrangements 17 , 18 , 19 , 20 . Differentiation of these organoids is highly sensitive to the physical properties of the extracellular microenvironment 21 . Organoids derived from gene-edited hPSC with biallelic, truncating mutations in PKD1 or PKD2 develop cysts from kidney tubules, reconstituting the pathognomonic hallmark of the disease 15 , 16 , 17 . Interestingly, culture of organoids under suspension conditions dramatically increases the expressivity of the PKD phenotype, revealing a critical role for microenvironment in cystogenesis 16 . Fluid flow is a major feature of the nephron microenvironment, which is believed to play an important role in PKD 4 , 5 , 7 , 8 , 22 . However, physiological rates of flow have not yet been achieved in kidney organoid cultures or PKD models. 
‘Kidney on a chip’ microphysiological systems provide fit-for-purpose platforms integrating flow with kidney cells to model physiology and disease in a setting that more closely simulates the in vivo condition than monolayer cultures 23, 24, 25, 26, 27. There is currently intense interest in integrating organ-on-chip systems with organoids, which can be derived from hPSC as a renewable and gene-editable cell source 28, 29, 30, 31, 32. We therefore investigated the effect of flow on PKD in a human organoid-on-a-chip microphysiological system. Results Flow induces cyst swelling in PKD organoids Prior to introducing flow, we first confirmed the specificity and timing of the PKD phenotype in static cultures. PKD1 −/− or PKD2 −/− hPSC were differentiated side-by-side with isogenic controls under static, adherent culture conditions to form kidney organoids. On day 18 of differentiation, prior to cyst formation, organoids were carefully detached from the underlying substratum and transferred to suspension cultures in low-attachment plates. Under these conditions, the majority of PKD1 −/− or PKD2 −/− organoids formed cysts within 1–2 weeks, whereas isogenic control organoids rarely formed cysts (Fig. 1a). In repeated trials, the difference between PKD organoids and isogenic controls was quantifiable and highly significant (Fig. 1a). Thus, PKD organoids formed cysts in a genotype-specific manner, strongly suggesting that this phenotype was specific to the disease state. This differs from other types of three-dimensional cultures of epithelial cells, in which hollow ‘cysts’ (spheroids) arise irrespective of PKD genotype and represent a default configuration of the epithelium rather than a disease-specific phenotype 17, 33, 34, 35. Fig. 1: Organoid PKD cysts expand under flow. a Representative images of organoids on days following transfer to suspension culture (upper), with quantification (lower) of cyst incidence as a fraction of the total number of organoids (mean ± s.e.m. from n ≥ 4 independent experiments per condition; **** p < 0.0001). b Schematic of workflow for fluidic condition. c Time-lapse phase contrast images of PKD organoids under flow (0.2 dynes/cm 2 ), representative of four independent experiments. d Average growth rates of control organoids (Ctrl org.), non-cystic compartments of PKD organoids (PKD org.), and cystic compartments of PKD organoids (PKD cysts) under flow (0.2 dynes/cm 2 ). Each experiment was performed for 6 h. Cyst growth rate was calculated on an individual basis as the maximal size of the cyst during the time course, divided by the time point at which the cyst reached this size (mean ± s.e.m. from n ≥ 4 independent experiments; each dot represents the average growth rate of organoids in a single experiment. **** p < 0.0001). To understand how flow affects PKD in organoids, we designed a microfluidic system that allows for live imaging of kidney organoids during the early stages of cyst formation (Fig. 1b). hPSC were first differentiated into organoids under static, adherent culture conditions for 26 days, at which time point tubular structures had formed with small cysts in the PKD cultures. Organoids were then purified by microdissection using a syringe needle 16, and transferred into gas-permeable, tissue culture-treated polymer flow chambers (0.4 mm height × 3.8 mm width), which were optically clear and large enough to comfortably accommodate organoids and cysts.
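For orientation, the wall shear stress in a wide, shallow channel like the one just described is well approximated by the parallel-plate result τ = 6μQ/(wh²). The sketch below assumes a media viscosity of ~0.7 mPa·s near 37 °C (a typical value, not stated in the paper) and shows that the 0.2 dynes/cm² applied in the experiments below corresponds to roughly the ~60 mL of media turnover per 6 h cited later:

```python
# Sketch: flow rate needed for a given wall shear stress in a wide
# rectangular channel (parallel-plate approximation, tau = 6*mu*Q/(w*h^2)).
mu = 0.7e-3              # media dynamic viscosity (Pa s, assumed, ~37 C)
w, h = 3.8e-3, 0.4e-3    # channel width and height from the text (m)

tau = 0.02               # 0.2 dyn/cm^2 = 0.02 Pa target wall shear stress
Q = tau * w * h**2 / (6 * mu)                 # required flow rate (m^3/s)
print(f"Q ~ {Q*1e9:.1f} uL/s ~ {Q*21600*1e6:.0f} mL per 6 h")  # ~3 uL/s, ~60 mL
```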
The channels were pre-coated with a thin layer of Matrigel, and organoids were allowed to attach overnight. PKD and isogenic control organoids were subjected to fluid flow with a wall shear stress of 0.2 dynes/cm 2 , which approximates physiological shear stress within kidney tubules 27, 36, 37, 38. In these devices, we observed that cysts in PKD organoids increased in size rapidly under flow (change in area of ~20,000 μm 2 /hr, or ~160 μm/hr in diameter), compared to non-cystic compartments within these organoids, or isogenic control organoids lacking PKD mutations, which did not swell appreciably (Fig. 1c, d and Supplementary Movie 1). Diffusion can partially substitute for flow Having observed that cysts expand under microfluidic conditions, it was important to establish a corresponding static condition lacking flow as a negative control. Initially we utilized the same chambers and syringe pump in the absence of pump activation, which is a commonly used control format for microfluidic experiments. However, we observed that food dye contained within the syringe failed to enter the microfluidic chamber under these conditions (Supplementary Fig. 1a). This indicated a lack of diffusion, which meant that organoids would be exposed only to the volume of media present within the channel of the microfluidic device (~200 µL), which was much lower than the volume they would encounter under fluidic conditions (~60 mL/6 h). Such a static condition could not be readily compared to fluidic conditions to determine the effects of flow, since other parameters such as volume and total solute mass would also be very different. To more accurately control for the effects of flow, we designed a diffusive static condition that exposed organoids to a volume of culture media equivalent to that in the flow condition. This consisted of a reservoir of media (maximum volume of 25 mL) connected to the microfluidic chip by wider tubing to allow for efficient and uninhibited diffusion of small molecules into the microfluidic channel. In this static format, food dye diffused from the media reservoir into the channel after 2–3 h (Supplementary Fig. 1b). Similarly, rhodamine-labeled dextran (10 kDa) diffused from the media reservoir into the channel and reached epifluorescence levels equivalent to the fluidic condition within 48 h (Fig. 2a). Fig. 2: Volume can partially substitute for flow in cyst expansion. a Rhodamine dextran (10 kDa) epifluorescence in static (non-diffusive), diffusive static, and fluidic conditions. ‘Lane’ indicates channel interior, and b time lapse phase contrast images of cysts in these conditions. Images are representative of n ≥ 4 independent experiments. c Average growth rates (μm 2 /hr) of cysts in the diffusive static condition with different volumes, compared to fluidic or non-diffusive static. Each experiment was performed for 6 h. Cyst growth rate was calculated on an individual basis as the maximal size of the cyst during the time course, divided by the time point at which the cyst reached this size. (n ≥ 8 cysts (dots) pooled from two or more independent experiments; *** p < 0.05). d Schematic of experiment testing effect of volume vs. pressure on cyst growth. Elements of the image were illustrated using Biorender software under license. e Representative phase contrast images and ( f ) quantification of growth rate of cysts suspended in either 0.5 or 10 mL of media under equivalent hydrostatic pressures (mean ± s.e.m. of n ≥ 14 cysts per condition pooled from three independent experiments; ***, p < 0.05).
g Growth profiles of individual cysts (lines) over time in microfluidic devices from 0–5 h. Measurements were made every 5 min using ImageJ software. Cyst area was normalized by dividing by the starting area. Data points are from three or more independent experiments. h Sum-of-squares values from linear regression models run on each individual cyst (n ≥ 7 organoids per condition, pooled from four or more independent experiments; p = 0.0342 versus diffusive static and 0.0411 versus static). Error bars, standard error. To further validate this ‘diffusive static’ condition, we varied the volume of media in the reservoir and analyzed cyst growth over a period of 12 h. Cysts exposed to a reservoir containing 1 mL of media expanded at a rate of ~3,000 μm 2 /hr, whereas a reservoir containing 25 mL increased expansion to ~10,000 μm 2 /hr, approximately half the rate observed in the fluidic condition (Fig. 2b, c, Supplementary Movies 2–4). Using the equation Pressure = ρgh, the hydrostatic pressure on organoids with 1 mL and 25 mL media reservoirs was calculated to be 1174 Pa and 1956 Pa, respectively. As this represented a substantial pressure difference of 5.9 mmHg, we conducted experiments to distinguish between the effects of pressure versus volume on cyst growth. Cystic organoids were suspended in either 500 µL or 10 mL, with a constant fluid column height of 1 cm (Fig. 2d). Cysts exposed to 10 mL of media grew significantly more than those exposed to 500 µL of media (Fig. 2e, f). Thus, media volume was identified as a major determinant of expansion that could partially substitute for flow in this system. Not all aspects of the fluidic condition were replicated by the diffusive static condition. Time-lapse microscopy under continuous flow revealed that PKD cysts exhibited fluctuating growth profiles, expanding and constricting (deflating) in cyclical, “breath-like” movements. Constrictions occurred rapidly when the cysts appeared to be fully inflated, suggesting that they resulted from rupture of the epithelium, for instance in response to expansive fluid force (Fig. 2g). Growth and constriction events occurred within hours after the initiation of flow, indicating a rapid physical mechanism rather than a slower one based on cell proliferation. This oscillatory behavior was unique to the fluidic condition, and was not observed in either the diffusive static or non-diffusive static conditions, nor in non-cystic controls (Fig. 2g and Supplementary Movie 1). Using the sum-of-squares method, we found that cyst dynamics (variance in size within an individual structure over time) were much greater in the fluidic condition, compared to either of the static conditions (Fig. 2h). As solute exposure was likely to occur much more rapidly in the fluidic condition, we proceeded to examine solute uptake under these conditions. Cysts absorb glucose during flow-mediated expansion Kidneys are highly reabsorptive organs, retrieving ~180 L of fluid and solutes per day through the tubular epithelium back into the blood. Glucose is an abundant renal solute and transport cargo, which might explain the effects of media exposure on cyst expansion, but whether kidney organoids absorb glucose is unknown. We therefore studied glucose transport in cysts and organoids using a fluorescent glucose analog, NBD glucose (2-(N-(7-nitrobenz-2-oxa-1,3-diazol-4-yl)amino)-2-deoxyglucose).
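The hydrostatic-pressure comparison above can be verified directly from P = ρgh; a minimal sketch, assuming the media density is approximately that of water:

```python
# Sketch: the hydrostatic-pressure comparison above. Reservoir fluid-column
# heights are inferred from P = rho*g*h using the quoted pressures.
rho, g = 1000.0, 9.81            # media density (kg/m^3, assumed ~water), gravity

for label, P in (("1 mL reservoir", 1174.0), ("25 mL reservoir", 1956.0)):
    print(f"{label}: implied column height h = {P/(rho*g)*100:.1f} cm")

dP = 1956.0 - 1174.0
print(f"difference: {dP:.0f} Pa = {dP/133.322:.1f} mmHg")   # ~5.9 mmHg
```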
The low height of the channels in our flow devices enabled continuous time lapse imaging of fluorescent molecules without high background fluorescence. Glucose was observed to infiltrate the devices under both diffusive static and fluidic conditions. Epifluorescence of NBD glucose gradually increased and plateaued at similar levels after 12 h in both the diffusive static condition and the fluidic condition, whereas NBD glucose did not accumulate detectably within the channels in the non-diffusive static condition (Fig. 3a). Fig. 3: PKD organoids absorb glucose under fluidic and static conditions. a NBD Glucose background levels in non-diffusive static, diffusive static, and fluidic conditions after 12 h (representative of three independent experiments). b Phase contrast and wide field fluorescence images of organoids in diffusive static and fluidic conditions, 5 h after introduction of NBD glucose (representative of three independent experiments). Arrows are drawn to indicate representative line scans. c Line scan analysis of glucose absorption in PKD cysts under static and fluidic conditions after 5 h (mean ± s.e.m. from n ≥ 7 cysts per condition pooled from three independent experiments; each n indicates the average of four line scans taken from a single cyst). Background fluorescence levels were calculated at each timepoint by measuring the fluorescence intensity of a square region placed in the non-organoid region of the image. d NBD Glucose absorption in the non-cystic compartment of PKD organoids, for diffusive static 20 mL vs. 1 mL (110 µM NBD Glucose, mean ± s.e.m., n ≥ 4 independent experiments), and ( e ) diffusive static 25 mL vs. fluidic (36.5 µM NBD Glucose, n ≥ 5 independent experiments). f Confocal fluorescence images of SGLT2 and ZO1 in PKD1 tubules (representative of three independent experiments). g Confocal fluorescent images of NBD Glucose in organoid tubules, fixed and stained with fluorescent cell surface markers (representative of three independent experiments). h Time-lapse images of NBD Glucose accumulation in a PKD organoid cyst, followed by washout into media containing unlabeled glucose after 24 h, all performed under continuous flow (representative of three independent experiments). When this assay was performed in channels seeded with organoids, PKD cysts absorbed glucose under fluidic and diffusive static conditions (Fig. 3b and Supplementary Movies 5–6). Line scan analysis of these images showed that there was no significant difference in absorption between the fluidic and diffusive static conditions (Fig. 3c). Analysis of glucose absorption in organoid tubules over time confirmed that the volume of media in the static condition was a crucial factor in nutrient absorption (Fig. 3d). Glucose absorption in organoids over time under the diffusive static condition followed an S-shaped absorption curve, whereas glucose levels in the fluidic condition increased rapidly and then plateaued, approximating an exponential curve; both conditions plateaued at approximately the same maximal level of glucose absorption (Fig. 3e). These studies suggested that flow has no additional effect on glucose absorption in organoids when compared to a static control presenting equivalent total glucose exposure. Glucose absorption was a general property of kidney organoids. In non-cystic structures, sodium-glucose transporter-2 (SGLT2) was expressed in organoid tubules and enriched at the apical surface, delineated by the tight junction marker ZO-1 (Fig. 3f).
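A minimal sketch of the kind of line-scan quantification described in the figure legend above (averaged line profiles with background subtraction from a non-organoid region); the array names and toy image are illustrative, not the authors' analysis code:

```python
# Sketch: background-subtracted line-scan quantification, as described in
# the Fig. 3 legend. Names, shapes, and the synthetic image are illustrative.
import numpy as np

def cyst_signal(image, lines, bg_box):
    """Mean background-subtracted intensity over several line scans.

    image  : 2-D fluorescence image (numpy array)
    lines  : list of (rows, cols) integer index arrays, one per line scan
    bg_box : (r0, r1, c0, c1) square region in a non-organoid area
    """
    r0, r1, c0, c1 = bg_box
    background = image[r0:r1, c0:c1].mean()          # background estimate
    profiles = [image[r, c] - background for r, c in lines]
    return np.mean([p.mean() for p in profiles])     # average over scans

# Toy usage: a synthetic image with a brighter "cyst" band
img = np.random.poisson(10, (200, 200)).astype(float)
img[80:120, :] += 40.0                               # simulated cyst signal
scans = [(np.full(200, r), np.arange(200)) for r in (90, 100, 110, 115)]
print(cyst_signal(img, scans, (0, 40, 0, 40)))       # ~40 above background
```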
Immunofluorescence confirmed that NBD glucose was absorbed into and accumulated inside organoid proximal and distal tubules (Fig. 3g). Immunoblot analysis indicated similar levels of SGLT2 in control and PKD organoid cultures (Supplementary Fig. 2a, b). Cyst-lining epithelia expressed SGLT2, and accumulated glucose both intracellularly and inside their lumens (Supplementary Fig. 2c). Intracellular glucose levels were generally higher than extracellular levels, consistent with the tendency of NBD glucose to accumulate inside cells (Supplementary Fig. 3a–c). Although cysts were much less cell-dense than attached non-cystic compartments, cystic and non-cystic compartments accumulated similar total levels of glucose, owing to the larger size of the cysts (Supplementary Fig. 3d). When PKD organoids loaded with NBD glucose were switched into media containing only unlabeled glucose (washout), NBD glucose disappeared rapidly from these structures (Fig. 3h and Supplementary Movies 7–8). Thus, organoids continuously accumulated and released glucose in a dynamic fashion. Inhibition of glucose transport blocks cyst growth In animal models, inhibitors of glucose transport are suggested to have both positive and negative effects in PKD 39, 40. To test functionally whether cyst growth is linked to glucose transport in human organoids, cyst expansion was quantified in increasing concentrations of D-glucose under static conditions (96-well plate). Growth was maximal at 15–30 mM glucose, causing a ~50% increase in cyst expansion relative to lower or higher concentrations (Fig. 4a, b and Supplementary Fig. 4a). Live/dead analysis of cysts treated with 60 mM glucose detected cytotoxicity, explaining the reduction in cyst growth at this higher concentration (Supplementary Fig. 4b–d). Fig. 4: PKD cysts expand in response to glucose stimulation. a Representative time lapse brightfield images and ( b ) quantification of change in cyst size in PKD organoids in static suspension cultures with varying D-glucose concentrations (mean ± s.e.m., n ≥ 6 pooled from four independent experiments, each dot indicates a single cyst). c Representative time lapse images and ( d ) quantification of PKD organoids in 15 mM D-Glucose treated with phloretin (mean ± s.e.m., n ≥ 10 cysts pooled from four independent experiments, p = 0.0231). e Quantification of maximum intensity projections of live/dead staining in organoids treated with phloretin (mean ± s.e.m., n ≥ 11, pooled from two independent experiments, each dot indicates a cystic organoid). f Images of live staining with Calcein AM (representative of three independent experiments). g Brightfield images and ( h ) quantification of size changes in cystic PKD organoids in 15 mM D-Glucose treated with probenecid (mean ± s.e.m., n ≥ 9 pooled from two independent experiments). The preceding findings, together with the rapid turnover of glucose in organoids described above, suggested that inhibition of glucose import might enable export mechanisms to dominate, resulting in blockade or even reversal of cyst growth due to osmotic effects. To test this hypothesis, we examined the effects of pharmacological transport inhibitors on cysts in static conditions. Phloretin, a broad-spectrum inhibitor of glucose uptake, was tested in 15 mM glucose, and found to decrease cyst size by 77% at a concentration of 800 μM (Fig. 4c, d and Supplementary Fig. 5a). Live-dead staining at 24 and 48 h of phloretin treatment revealed no significant toxicity (Fig.
4d, e and Supplementary Fig. 5b, c). Treatment with either phloridzin, a non-selective inhibitor of both SGLT1 and SGLT2, or dapagliflozin, a specific inhibitor of SGLT2, reduced cyst growth to baseline at non-toxic doses, further supporting the hypothesis (Supplementary Fig. 5d, e). Net shrinkage of cysts was not observed with phloridzin or dapagliflozin, suggesting either decreased potency of these compounds relative to phloretin, or an off-target effect of phloretin beyond glucose transport that further reduces cyst size. In contrast to SGLT inhibitors, probenecid, an inhibitor of the OAT1 transporter on the basolateral membrane, had no effect on cyst growth compared to controls at non-toxic doses (Fig. 4f–h and Supplementary Fig. 5f). Overall, these findings supported the hypothesis that pharmacological inhibitors of glucose uptake block cyst expansion in the PKD organoid model. Organoid cysts polarize outwards Some previous studies have suggested that cyst expansion may be due to increased secretory (basolateral-to-apical) solute transport 41, 42, 43, 44. However, glucose transport in the proximal tubule is predominantly reabsorptive (apical-to-basolateral) rather than secretory. To better understand the directionality of transport within organoids, we determined the apicobasal polarity of tubules and cysts using antibodies against tight junctions and cilia. In both PKD and control organoids, the ciliated surface of these tubules faced inwards (Fig. 5a). Surprisingly, however, PKD cysts were polarized with the apical ciliated surface facing outwards towards the media and exposed to flow (Fig. 5a). Thus, the external cyst surface resembled the apical surface of a tubule in this system. Line scan analysis confirmed this inverted polarization, with primary cilia and tight junction intensity profiles reversed in organoids vs. cysts (Fig. 5b). Fig. 5: PKD cysts form via expansion of outwards-facing epithelium. a Confocal immunofluorescence images of cilia (acetylated α-tubulin, abbreviated AcT) and tight junctions (ZO-1) in proximal tubules (LTL) of PKD and non-PKD organoids, as well as in PKD cyst-lining epithelial cells. Dashed arrow indicates how line scans were drawn. Images are representative of three independent experiments. b ZO1 and AcT intensity profiles in cysts vs. organoids. Line scans were drawn through cilia from lumen to exterior of structures. (mean ± s.e.m. from n = 5 line scans pooled from three organoids or cysts per condition from three independent experiments). c Fluorescent images of stromal markers in PKD organoids compared to human kidney tissue from a female patient 50 years of age with autosomal dominant PKD. Scale bars 20 µm. d Fluorescent images of cysts after having been overlaid with collagen. Images are representative of three independent experiments. e Z-stack confocal images of early (day 30) PKD organoid cyst in adherent culture. Zoom shows boxed region. White arrow indicates a podocyte cluster continuous with the peripheral epithelium. Images are representative of three independent experiments. f Close-up image showing peripheral epithelium of control (non-PKD) organoid in adherent culture. Yellow arrowhead indicates region of epithelial invagination. Images are representative of three independent experiments. g Phase contrast time-lapse images showing formation of PKD cysts from non-cystic structures in adherent cultures. Red arrows indicate tubular structures internal to the peripheral cyst.
Images are representative of three independent experiments. h Schematic model of absorptive cyst expansion in organoids. Fluid flow (blue arrows) is absorbed into outwards-facing proximal tubular epithelium, which generates internal pressure that drives expansion and stretching of the epithelium (red arrows). A simplified organoid lacking podocytes or multiple nephron branches is shown for clarity. Close examination of PKD organoid cysts revealed that a subpopulation of these contained a layer of cells expressing alpha smooth muscle actin immediately beneath the cyst-lining epithelium, which formed a laminin-rich basement membrane (Fig. 5c). In contrast, in human kidney tissue the basement membrane and myofibroblast-like cells surrounded cysts externally (Fig. 5c). Thus, apical cell polarity aligned opposite the basement membrane in both systems. Simple spheroids of Madin-Darby canine kidney cells in suspension culture polarize outwards, but can reverse apicobasal polarity from outwards to inwards when embedded in collagen 34. When PKD cysts in organoids were overlaid with collagen, however, cyst polarity remained inverted and did not repolarize with the ciliated surface facing away from the extracellular matrix, indicating that organoid cyst polarity was deeply entrenched and governed by more dominant, internal cues (Fig. 5d). The observation that cysts polarized outwards seemed counter-intuitive, as tubule structures in human kidney organoids typically polarize inwards, with tight junctions and apical markers abutting one another from diametrically opposed epithelia (as shown in Fig. 5a) 17. To resolve this conundrum, we closely examined PKD organoids in three-dimensional confocal image z-stacks. Lotus tetragonolobus lectin (LTL), which is expressed more strongly in tubules than in cysts, was used to label the epithelium, while primary cilia and ZO-1 were used to indicate cell polarity. These experiments revealed that young cysts comprised epithelial spheroid structures (predominantly LTL +) with underlying tubular infolds, which faced inwards (Fig. 5e). We further examined organoids without cysts (controls) in confocal microscopy z-stacks. We noted that the epithelium lining the periphery of these organoids faced outwards, whereas ‘tubules’ internal to organoids were invaginations of this peripheral epithelium (Fig. 5f and Supplementary Fig. 6a, b). The innermost regions of these invaginated tubules were enriched for ECAD, a marker of distal tubule, whereas the external peripheral epithelia were enriched for LTL, a marker of proximal tubule (Fig. 5f and Supplementary Fig. 6a). Thus, organoids constituted a continuous, proximal-to-distal epithelium, with the apical surface polarized outwards on the peripheral (more proximal) epithelium and inwards in the internal (more distal) epithelium of the structure. To observe the process of cyst formation in real time, we collected time-lapse images of young PKD organoids undergoing cystogenesis over eight days in culture. Consistently, cysts formed at the periphery of the organoids (Fig. 5g, Supplementary Fig. 7a, b, and Supplementary Movie 9). During the early stages of cystogenesis, tubular structures remained visible inside the cysts as they expanded (Fig. 5g, Supplementary Fig. 7a, b, and Supplementary Movie 9).
Thus, time-lapse imaging supported the idea that cysts formed from the peripheral epithelium of the organoids that faced outwards towards the media, rather than from the internal tubular invaginations, which tended to stay anchored (Fig. 5h). This was consistent with an absorptive mechanism mediated by the peripheral epithelium. Absorptive cysts form in vivo It is important to understand how these findings in organoids might relate to PKD cyst formation in vivo, where cyst-lining epithelia face inwards rather than outwards. Microcysts smaller than 1 mm in diameter and undetectable by magnetic resonance imaging are numerous in kidney sections from patients with early stages of PKD, and are proposed to form as focal outpouchings of tubular epithelium 45, 46. If such an outpouching remained connected to a small segment of the original tubule via apical junctions, it could accumulate fluid through tubular reabsorption. The preceding observations suggested a possible model for cyst formation in vivo (Fig. 6a). Absorption of glucose through the apical surface of the tubular epithelium is followed by water along the osmotic gradient via paracellular or transcellular routes to maintain balanced concentrations on either side of the epithelium. This absorptive activity lacks an appropriate outlet, creating pressure within the interstitium and leading to its detachment from neighboring tubules, which undergo deformation and expansion to fill the resultant interstitial space. This process continues as the cyst grows, and may be exacerbated by the gradual loss or detachment of associated peritubular capillaries (which reduces the absorptive sink), and by growth of interstitial mesenchymal stromal cells, which provide a scaffold and synthesize extracellular matrix to accommodate the expanding epithelium. Fig. 6: PKD cysts in vivo absorb glucose into the surrounding interstitium. a Hypothetical schematic of absorptive cyst formation in kidney tissue. Fluid (blue arrows) is absorbed through proximal tubules into the underlying interstitium, which partially detaches from the epithelium. The tubules then expand and deform to fill the interstitial space, reaching a low-energy conformation in which the withheld volume is ultimately transferred back into the luminal space of the nascent microcyst. A simplified model is shown and represents one possible explanation of the findings. b PAS stains of 2-month-old and 6-month-old Pkd1 RC/RC mice (C57BL/6J background). Scale bars 50 µm. Images are representative of four animals per condition (two male and two female). c Confocal images of stromal basement membrane (LAMA1) with cilia (AcT) or ( d ) endothelial cells (CD31) in Pkd1 RC/RC versus control ( Pkd1 +/+ ) 2-month-old mice. All mice were of C57BL/6J background. Yellow arrowheads indicate areas of detached or expanded interstitium surrounding the cyst. Images are representative of four animals per condition (two male and two female). e Schematic of glucose uptake assay, illustrated using Biorender software under license. f Representative images and ( g ) line scan analysis of PKD cysts after perfusion with fluorescent NBD glucose or unlabeled PBS control (mean ± s.e.m., n ≥ 17 cysts per condition pooled from a total of three female and two male Pkd1 RC/RC mice of C57BL/6J background). Dashed magenta arrows indicate how line scans were drawn.
To investigate the plausibility of such a mechanism in vivo, we analyzed microcysts in the Pkd1 RC/RC mouse strain, which has a hypomorphic Pkd1 gene mutation orthologous to the patient disease variant PKD1 p. R3277C, and manifests slowly progressive PKD during adulthood over a period of several months 47, 48. Histology sections and confocal images of 2-month-old mouse tissue revealed continuous basement membranes between tubules and microcysts, consistent with the possibility that microcysts form from tubular outpouchings that remain capable of absorption through the wall of the neighboring tubule (Fig. 6b, c). While most of these microcysts remained tightly associated with peritubular capillaries, suggesting that they continued to reabsorb, portions of the epithelium appeared to have detached from the endothelium, resulting in areas of fluid accumulation or interstitial expansion (Fig. 6b–d). To determine whether PKD cysts absorbed glucose in vivo, we devised a methodology to inject mice with NBD glucose and immediately retrieve their kidneys (Fig. 6e). Fluorescence microscopy analysis of kidney tissue sections revealed that cyst-lining epithelia and the surrounding interstitium readily took up NBD glucose (Fig. 6f–g). Thus, cysts remained absorptive in vivo and PKD kidneys as a whole readily accumulated glucose. Discussion Coupling the structural and functional characteristics of organoids with the controlled, microfluidic microenvironments of organ-on-a-chip devices is a promising approach to in vitro disease modeling 28. Our study combines CRISPR-Cas9 gene editing to reconstitute the disease phenotype with organoid-on-a-chip technology to understand the effect of flow, which is difficult to assess in vivo (where it is constant) and has hitherto been absent from kidney organoid models at physiological strength. The ‘human kidney organoid on a chip’ microphysiological system described here incorporates organoids with PKD mutations in a wide-channel format, which allows liquid to flow over the organoids, similar in geometry to other recently described organoid flow systems 29, 30. At the core of this system are human organoids that strikingly recapitulate the genotype-phenotype correlation in PKD. This is fundamentally different from other types of generic spheroids that form in vitro as a default configuration of the epithelium. While certain aspects of the organoid system differ from in vivo, we do not see a plausible explanation wherein the genotype-phenotype correlation is preserved, but the entire system is somehow irrelevant or opposite to the fundamental mechanism of PKD. Rather, the system is teaching us which aspects of PKD are most important for the phenotype. The system can be readily assembled from commercially available components, and produces a shear stress within the physiological range found in human kidney tubules 27, 36, 37, 38. This is ~6-fold greater than the maximum rate of 0.035 dyn/cm 2 used in a previous kidney organoid-on-a-chip device, a shear stress that was nevertheless sufficient to stimulate expansion of vasculature within the device when compared to static conditions 29, and to induce dilation of tubular structures derived from hPSC with mutations associated with autosomal recessive PKD (ARPKD) 49.
The physiological relevance of such low flow rates is not clear, and the cohort of ARPKD cell lines that was studied includes hPSC previously generated by our laboratory that we found to lack definitive ARPKD mutations 50 . It is nevertheless interesting and encouraging that flow over the organoids was capable of inducing swelling in both systems. Importantly, we have also developed a static module using the same basic chip that is capable of natural diffusion from a syringe reservoir. This enables us to distinguish the effects of flow from those of exposure to fluid volume and mass of reabsorbable solute, which is difficult to achieve in conventional systems with limited diffusion, such as a reservoir tightly connected to a Luer lock syringe. Our discovery that volume can partially substitute for flow is reminiscent of a recent study in which immersion in >100-fold volumes induced three-dimensional morphogenesis of intestinal epithelial cells similar to flow 51 . In contrast, increased volume was unable to substitute for flow in the aforementioned study of endothelial expansion in kidney organoid cultures. This may reflect a sensitivity of vascular cells to fluid shear stress, or alternatively the limited volumes possible in closed-loop systems 29 . In addition to volume, hydrostatic pressure is increased in our diffusive static condition, which may play a role in the PKD phenotype 52 . Of note, cysts in our diffusive static condition did not exhibit the dramatic oscillations in size observed under flow, indicating roles for flow-induced mechanoregulation that cannot be readily replicated by diffusion effects, for instance involving stretch-activated ion channels. Our findings indicate that flow, volume, and solute concentrations are positive regulators of cyst expansion, and that cystogenesis can be enhanced through mechanisms of tubular absorption and glucose transport. A limitation of these systems is that the perfusion passes over the organoids, rather than through them as it does through tubules in vivo. However, as peripheral epithelia in our organoids face outwards towards the media, the net result is for the apical surface to be in contact with the directional flow, similar to the epithelium of a tubule in vivo. This fortuitously enables us to assess reabsorptive function, the primary characteristic of the kidney tubular network, which fluxes ~180 L through its apical surface every day. In this regard, the arrangement in the organoid system may have greater functional relevance than spheroid systems in which cyst polarity faces inwards but the liquid is trapped inside with no possibility of perfusion (unlike the arrangement in the kidneys). The observation that PKD cysts can form inside-out, such that secretion (basolateral-to-apical transport) would occur in the opposite direction from cysts in vivo, argues against secretion as the critical driver of cystogenesis in this system 43 . Our experiments in animals also demonstrate that kidney cysts remain reabsorptive even in advanced PKD. In our studies in vivo, we also made the interesting discovery that the tubular epithelium detaches focally from the underlying interstitium during pre-cystic stages of disease, which may reflect the consequences of a possible absorptive phenotype. Studies of PKD in living animals, however, carry significant constraints for studying mechanism.
Kidneys are concealed within the body, preventing detailed time-lapse microscopy, and perturbing renal absorption is experimentally challenging and causes complex side effects. Demonstrating glucose absorption in cystic kidneys in vivo, and showing interstitial detachment, as we have done, required significant methods development and careful analysis. Further methods development and more detailed studies are required to causally link absorption, interstitial detachment, and cyst formation in vivo. Nevertheless, it is clear that renal cysts can continue to absorb glucose, even in vivo, and in organoids, glucose absorption is linked to the PKD phenotype, which is demonstrably specific to the genotype and thus mechanistically relevant. These findings are consistent with micropuncture studies showing that wall pressures inside PKD cysts in vivo resemble those of their originating nephron segments, and studies of excised cysts in vitro, which demonstrate that the epithelium is slowly expanding and absorptive under steady-state conditions 44 , 45 . In a more recent clinical analysis, patients with ADPKD demonstrated lower excretion of renally secreted solutes, rather than higher levels of secretion 53 . Drugs that activate CFTR, which is hypothesized to drive a secretory phenotype in PKD, have shown promise in treating PKD in mice, rather than exacerbating the disease, which is also inconsistent with a secretory hypothesis 54 . Indeed, a phenotype related to absorption is a much more natural fit for the specialized properties of kidney epithelia (which are predominantly absorptive) than secretion. This is not to say that secretion cannot be a causative mechanism in PKD cystogenesis, but rather that absorption can also play a critical role. In our model, absorption of fluid into the interstitium creates space for epithelia to expand and fill. During this process of expansion and space filling by the epithelium, which is triggered by changes within the microenvironment surrounding the tubules, it is conceivable that secretory processes play a role. Previously, we observed that transfer of PKD organoids from adherent cultures into suspension cultures was associated with dramatically increased rates of cystogenesis 16 . Our current findings add greatly to our understanding of this phenomenon. Upon release from the underlying substratum, the peripheral organoid epithelium grows out and envelops the rest of the organoid 16 . This forms an enclosed, outwards-facing structure in an ideal conformation to absorb fluid from the surrounding media and expand into a cyst. Although we did not detect differences in the levels of SGLT2, differences may exist in SGLT2 activity, or in the levels or activity of other transporters involved in absorption, resulting in increased absorptive flux in PKD epithelia compared to non-PKD epithelia. Alternatively, there might exist a difference in the pliability of PKD versus non-PKD epithelia undergoing equivalent levels of absorptive flux. We note that polycystin-2 is a non-selective cation channel expressed at the apical plasma membrane 9 , 10 , which could conceivably play a role in transporter function and reabsorption. The polycystin complex may also possess force- or pressure-sensitive mechanoreceptor properties, which could regulate the epithelial response to fluid influx 4 , 5 , 7 , 8 , 22 , 52 .
Although we favor a direct role for glucose absorption in driving cyst expansion, glucose transport could also function independently of water transport to impact cyst formation, for instance by altering mitochondrial metabolism or signaling changes to the actin cytoskeleton, which could promote cystogenesis regardless of which direction the cells face 55 , 56 , 57 , 58 . Of note, cysts form not only in the proximal tubules that are primarily responsible for glucose reabsorption, but also in the collecting ducts, where they can reach very large sizes. As cysts evidently originate from these very different epithelial cell types, the process of cystogenesis is not likely to be explained by a simple absorption/secretion ratio for any one solute. One goal for future development of our PKD organoid system is to incorporate collecting ducts, as this lineage is important to PKD cystogenesis but does not mature in human kidney organoid cultures 17 , 59 , 60 . A limitation of the current system is that the organoid phenotype is restricted to biallelic mutants, in which disease processes are greatly accelerated 61 , 62 , 63 . In contrast, germline mutations in PKD patients are monoallelic, and phenotypes take decades to develop, likely due to the necessity of developing 'second hit' somatic mutations in the second allele 64 , 65 . The current system involving biallelic mutants may more closely phenocopy early-onset autosomal recessive PKD than late-onset autosomal dominant PKD, which should be considered when extrapolating these findings into a clinical context 16 . Generation of well-controlled allelic series of PKD organoids, together with methodologies to model the acquisition of somatic mutations, may ultimately produce human organoid models with greater fidelity to autosomal dominant PKD. Canagliflozin (Invokana), an inhibitor of SGLT2, has recently been approved for the treatment of type 2 diabetes, and appears to have a protective effect in the kidneys 66 , 67 . SGLT inhibitors have not yet been tried in patients with PKD. Our findings suggest that blocking SGLT activity could reduce proximal tubule cysts by preventing glucose reabsorption. However, this would also expose the collecting ducts downstream to higher glucose concentrations. Indeed, it was previously suggested that inhibition of glucose transport reduces PKD in the Han:SPRD rat because its cysts originate from proximal tubules, whereas the same treatments in the PCK rat worsen PKD because its cysts originate in more distal nephron segments 39 , 40 . Caution must therefore be exercised when considering how to conduct human clinical trials for PKD with SGLT inhibitors. In summary, we have developed a microfluidic kidney organoid module that enables detailed studies of renal tubular absorption and PKD cyst growth. The cyst-lining epithelium in this system is exposed to flow in a mirror image of the nephron structure in vivo. Using this system, we have identified glucose levels and glucose transport into cyst structures as drivers of cystic expansion in proximal nephron-like structures. Therapeutics that modulate reabsorption may therefore be beneficial in reducing cyst growth in specific nephron segments, with relevance for future PKD clinical trials 4 , 66 . Methods Ethics Research complied with all relevant ethical regulations. Human PKD kidney tissue (nephrectomy) was obtained with informed consent under a human subjects protocol approved by the University of Washington Institutional Review Board.
No compensation was provided to study participants. Kidney organoid differentiation Work with hPSC was performed under the approval and auspices of the University of Washington Embryonic Stem Cell Research Oversight Committee. Specific cell lines used in this study are described below and are sourced from commercially available hPSC obtained with informed consent. hPSC stocks were maintained in mTeSR1 media with daily media changes and weekly passaging using Accutase or ReLeSR (STEMCELL Technologies, Vancouver). 5,000–20,000 hPSCs were plated per well of 24-well plates pre-coated with 300 µL of DMEM-F12 containing 0.2 mg/mL Matrigel and sandwiched the following day with 0.2 mg/mL Matrigel in mTeSR1 (STEMCELL Technologies, Vancouver) to produce scattered, isolated spheroid colonies. 48 h after sandwiching, hPSC spheroids were treated with 12 μM CHIR99021 (Tocris Bioscience) for 36 h, then changed to RB (Advanced RPMI + 1X Glutamax + 1X B27 Supplement, all from Thermo Fisher Scientific) after 48 h, and replaced with fresh RB every 3 days thereafter. Organoid perfusion in microfluidic chip Ibidi μ-Slide VI 0.4 slides were coated with 3.0% Reduced Growth Factor Geltrex (Life Technologies) and left at 37 °C overnight to solidify. Kidney organoids (21–40 days old) were picked from adherent culture plates, pipetted into the slide channels (2–3 per channel) with RB, and left for 24 h at 37 °C to attach. Organoids were distributed randomly within the channel. For the fluidic condition, 60 mL syringes filled with RB were attached to channels using clear tubing (Cole-Parmer, 0.02'' ID, 0.083'' OD). A clamp was used to close off the tubing, and the media in the syringe was changed to 25 mL RB + 36.5 μM 2-NBD-Glucose fluorescent glucose (Abcam ab146200). A Harvard Apparatus syringe infusion pump was used to direct media flow into the microfluidic chip at 160 μL/min (0.2 dyn/cm 2 ). Media was collected at the outlet and filtered for repeated use. For the static condition, a 25 mL syringe containing RB was attached to the channel using wide clear tubing (Cole-Parmer, 0.125'' ID, 0.188'' OD). The syringe was detached momentarily, the plunger removed, and the open syringe reattached and filled slowly with 25 mL RB + 36.5 μM 2-NBD-Glucose. From this point on, diffusion of the fluorescent glucose began from the open syringe into the channel via the tubing. Alternatively, NBD-glucose was substituted with food dye (invert sugar, 360 g/mol), or the organoids were perfused with media in the absence of any additives. Image/video collection Image collection was performed on a Nikon Ti Live-Cell Inverted Widefield microscope inside of an incubated live imaging chamber supplemented with 5% carbon dioxide. Experiments in microfluidic devices were recorded for 6 h. During this time, cysts changed in volume (grew and shrank) and in some cases were destroyed due to bubbles arising in the tubing. Cyst growth rate in microfluidic devices was therefore calculated on an individual basis, when each cyst reached its maximal volume, which varied for each sample from 1 h to 5 h after the start of the experiment. For longer-term experiments conducted in static 96-well cultures, organoids were imaged at regular intervals (typically 24 h) and analyzed at the endpoint indicated in the figure graphs. Phase contrast and GFP (200 ms exposure) images were taken every 5 min for a maximum of 12 h. Images of fixed samples were collected on a Nikon A1R point scanning confocal microscope.
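As a cross-check of the perfusion settings above, the nominal wall shear stress in a parallel-plate channel can be estimated from the standard relation τ = 6μQ/(wh 2 ). A worked estimate, assuming nominal channel dimensions for the μ-Slide VI 0.4 (width w ≈ 3.8 mm, height h ≈ 0.4 mm) and a culture medium viscosity of ~0.0072 dyn·s/cm 2 at 37 °C (both assumed values for illustration, not measurements from this study): $$\tau =\frac{6\mu Q}{w{h}^{2}}=\frac{6\left(0.0072\ \frac{dyn\cdot s}{c{m}^{2}}\right)\left(2.67\times {10}^{-3}\ \frac{c{m}^{3}}{s}\right)}{\left(0.38\ cm\right){\left(0.04\ cm\right)}^{2}}\approx 0.19\ \frac{dyn}{c{m}^{2}}$$ where Q = 160 μL/min = 2.67 × 10 −3 cm 3 /s, in agreement with the stated 0.2 dyn/cm 2 .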
Animal studies Kidney tissue from Pkd1 RC/RC mice maintained on a C57BL/6J background (gift of the Mayo Clinic Translational PKD Center) and C57BL/6J controls was utilized. To investigate the process of cystogenesis, younger Pkd1 RC/RC mice (6–7 weeks of age) and age-matched wild-type C57BL/6J mice were used. Kidneys were harvested after systemic perfusion with ice-cold PBS, followed by fixation with paraformaldehyde and immersion in 18–30% sucrose at 4 °C overnight. Tissues were embedded and frozen in optimal cutting temperature compound (OCT, Sakura Finetek, Torrance, CA). Cryostat-cut mouse kidney sections (5–10 μm) were stained for acetylated α-tubulin, laminin-1, and CD31 (see "Immunostaining" for primary antibodies and dilutions). For perfusion experiments, NBD glucose was freshly dissolved in PBS to a concentration of 1 mM. Freshly sacrificed Pkd1 RC/RC mice (>8 months old) were incised through the chest and nicked at the vena cava with a 27-gauge needle. Keeping pressure on the vena cava, mice were perfused systemically through the heart with a syringe containing 10 ml of PBS, followed by a second syringe containing 5 ml of either PBS alone (control) or PBS + 1 mM NBD-glucose. Kidneys were harvested immediately and embedded fresh, without fixation or sucrose equilibration, in OCT. Cryostat-cut mouse kidney sections (20 μm) were mounted and imaged on a confocal microscope with a 10X objective. All animal studies were conducted in accordance with all relevant ethical regulations under protocols approved by the Institutional Animal Care and Use Committee at the University of Washington in Seattle. Mice were maintained on a standard diet under standard pathogen-free housing conditions, with food and water freely available. Immunostaining Immunostaining followed by confocal microscopy was used to localize various proteins and transporters in the cysts and organoids. Prior to staining, an equal volume of 8% paraformaldehyde was added to the culture media (4% final concentration) for 15 min at room temperature. After fixing, samples were washed in PBS, blocked in 5% donkey serum (Millipore)/0.3% Triton X-100/PBS, incubated overnight in 1% bovine serum albumin/0.3% Triton X-100/10 μM CaCl 2 /PBS with primary antibodies, washed, incubated with Alexa Fluor secondary antibodies (Invitrogen), washed, and imaged. Primary antibodies or labels include acetylated α-tubulin (Sigma T7451, 1:5000), ZO-1 (Invitrogen 61-7300, 1:200), biotinylated LTL (Vector Labs B-1325, 1:500), E-cadherin (Abcam ab11512, 1:500), SGLT2 (Abcam ab37296, 1:100), laminin-1 (Sigma L9393, 1:50), alpha smooth muscle actin (Sigma A2547, 1:500), and CD31 (BD Biosciences 557355, 1:300). Fluorescence images were captured using a Nikon A1R inverted confocal microscope with objectives ranging from 10X to 60X. Statistical analysis Experiments were performed using a cohort of PKD hPSC, previously generated and characterized, including three PKD2 −/− hPSC lines and three isogenic control lines that were subjected to CRISPR mutagenesis (gRNA CGTGGAGCCGCGATAACCC) but were found to be unmodified at the targeted locus by Sanger sequencing of each allele and immunoblot 16 , 17 . Altogether these represented two distinct genetic backgrounds, genders, and cell types: (i) male WTC11 iPS cells (Coriell Institute Biobank, GM25256, two isogenic pairs) and (ii) female H9 ES cells (WiCell, Madison, Wisconsin, WA09, one isogenic pair).
Quantification was performed on data from experiments conducted on controls and treatment conditions side by side on at least three different occasions or cell lines (biological replicates). Error bars represent mean ± standard error of the mean (s.e.m.). Statistical analyses were performed using GraphPad Prism software. To test significance, p -values were calculated using a two-tailed, unpaired or paired t -test (as appropriate to the experiment) with Welch's correction (unequal variances). For multiple comparisons, standard ANOVA was used. Statistical significance was defined as p < 0.05. Exact or approximate p -values are provided in the figure legends for experiments that showed statistical significance. For traces of cysts over time, a least squares regression model was applied to fit the data to lines in GraphPad Prism. Line scans of equal length were averaged from multiple images and structures based on raw data intensity values in the GFP channel. Lines were drawn transecting representative regions of each structure (e.g., avoiding heterogeneities, brightness artifacts, or areas where cysts and organoids overlapped), placed such that the first half of each line represented the background in the image. The intensity of each point (pixel) along the line was then averaged for all of the lines, producing an averaged line scan with error measurements. Arrows are provided in representative images showing the direction and length of the line scans used to quantify the data. Unless otherwise noted, raw intensity values (bytes per pixel) were used without background subtraction. Hydrostatic pressure calculation Hydrostatic pressure was calculated as $$P=\rho gh=\left(997\ \frac{kg}{{m}^{3}}\right)\left(9.81\ \frac{m}{{s}^{2}}\right)\left(h\right)$$ where h is the height from the channel to the top of the media in the reservoir, measured to be ~12 cm for the static 1 mL condition and ~20 cm for the static 25 mL condition. Therefore, the calculation for each of these conditions was: $${P}_{1\,mL}=\left(997\ \frac{kg}{{m}^{3}}\right)\left(9.81\ \frac{m}{{s}^{2}}\right)\left(0.12\ m\right)=1173.7\ Pa\times \frac{1\ mmHg}{133.32\ Pa}=8.8\ mmHg$$ $${P}_{25\,mL}=\left(997\ \frac{kg}{{m}^{3}}\right)\left(9.81\ \frac{m}{{s}^{2}}\right)\left(0.20\ m\right)=1956.1\ Pa\times \frac{1\ mmHg}{133.32\ Pa}=14.7\ mmHg$$ This amounted to a total difference in pressure of 14.7 − 8.8 = 5.9 mmHg between the two conditions. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability The main data supporting the results in this study are available within the paper and its supplementary information. The raw and analysed datasets generated during the study are too large and complex to be publicly shared (numerous cell lines, replicates, images, blots, and experiments, maintained and analysed in specialized file formats and with unique identifiers). Datapoints are shown as dots in the plots provided in this paper and the Supplement. All datasets, including raw data and statistical analysis, are available upon reasonable request from the corresponding author. PKD mutant cell lines used in this study may be obtained from the corresponding author upon request and in accordance with material transfer agreements from the University of Washington and any third-party originating sources.
Source data are provided with this paper.
A study of kidney organoids in a novel lab environment might have downstream implications for the treatment of polycystic kidney disease (PKD), an incurable condition that affects more than 12 million people worldwide. One key discovery of the study: Sugar appears to play a role in the formation of fluid-filled cysts that are PKD's hallmark. In people, these cysts grow big enough to impair kidney function and ultimately cause the organs to fail, necessitating dialysis therapy or transplant. The findings were published in Nature Communications. The co-lead authors are Sienna Li and Ramila Gulieva, research scientists in the lab of Benjamin Freedman, a nephrology investigator at the University of Washington School of Medicine. "Sugar uptake is something that kidneys do all the time," said Freedman, a co-senior author. "We found that increasing the levels of sugar in the dish cultures caused cysts to swell. And when we employed drugs known to block sugar absorption in the kidneys, it blocked this swelling. But I think it relates less to blood sugar level and more to how kidney cells take in sugar—which in this process seemed to go rogue and give rise to cysts." For years Freedman has studied PKD in organoids grown from pluripotent stem cells. Organoids resemble miniature kidneys: They contain filtering cells connected to tubes and can respond to infection and therapeutics in ways that parallel the responses of kidneys in people.
[Image caption: Mini-kidney tube structures have sugar receptors (red, upper left) and form outward-facing PKD cysts (center), which swell by taking in sugar (green, lower right). Credit: Benjamin Freedman Lab / University of Washington School of Medicine]
Although his team can grow organoids that give rise to PKD cysts, the mechanisms of those cysts' formation are not yet understood. In this investigation, the researchers focused on how the flow of fluid within the kidney contributes to PKD. To do so, they invented a new tool that merged a kidney organoid with a microfluidic chip. This allowed a combination of water, sugar, amino acids and other nutrients to flow over organoids that had been gene-edited to mimic PKD. "We were expecting the PKD cysts in the organoids to get worse under flow because the disease is associated with the physiological flow rates that we were exploring," Freedman explained. "The surprising part was that the process of cyst-swelling involved absorption: the intake of fluid inward through cells from outside the cyst. That's the opposite of what is commonly thought, which is that cysts form by pushing fluid outward through cells. It's a whole new way of thinking about cyst formation." In the chips, the researchers observed that the cells lining the walls of the PKD cysts faced outward as they stretched and swelled, such that the tops of the cells were on the outside of the cysts. This inverted arrangement—these cells would be facing inward in living kidneys—suggests that cysts grow by pulling in sugar-rich fluid, not by secreting the liquid. The observation gives researchers more information about how cysts form in organoids, a finding that will have to be tested further in vivo. As well, the fact that sugar levels drive cyst development points to new potential therapeutic options. "The results of the experiment are significant because there is a whole class of molecules that block sugar uptake in the kidneys and are attractive therapeutics for a number of conditions," Freedman said.
"They haven't been tested yet, but we view this as a proof-of-concept that these drugs could potentially help PKD patients."
10.1038/s41467-022-35537-2
Earth
Groundwater information is no longer out of depth
Danielle K. Hare et al. Continental-scale analysis of shallow and deep groundwater contributions to streams, Nature Communications (2021). DOI: 10.1038/s41467-021-21651-0 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-021-21651-0
https://phys.org/news/2021-03-groundwater-longer-depth.html
Abstract Groundwater discharge generates streamflow and influences stream thermal regimes. However, the water quality and thermal buffering capacity of groundwater depends on the aquifer source-depth. Here, we pair multi-year air and stream temperature signals to categorize 1729 sites across the continental United States as having major dam influence, shallow or deep groundwater signatures, or lack of pronounced groundwater (atmospheric) signatures. Approximately 40% of non-dam stream sites have substantial groundwater contributions as indicated by characteristic paired air and stream temperature signal metrics. Streams with shallow groundwater signatures account for half of all groundwater signature sites and show reduced baseflow and a higher proportion of warming trends compared to sites with deep groundwater signatures. These findings align with theory that shallow groundwater is more vulnerable to temperature increase and depletion. Streams with atmospheric signatures tend to drain watersheds with low slope and greater human disturbance, indicating reduced stream-groundwater connectivity in populated valley settings. Introduction Groundwater discharge zones establish active stream–groundwater hydrologic connectivity through the advective exchange of water. As a critical contributor to streamflow generation, groundwater discharge influences water quantity and quality throughout stream networks, especially during seasonal low flows and dry conditions 1 . Many streams host ecologically important ‘groundwater-dependent ecosystems’ 2 , yet these habitats face growing threats from climate change and groundwater contamination 1 , 3 , 4 . Aquatic organisms are particularly susceptible to shifts in thermal regimes because they have life cycles that rely on annual thermal cues 5 and metabolic rates influenced by stream temperature 6 . The relatively stable thermal regimes of some groundwater discharge zones can buffer stream temperatures against long-term air temperature trends and short-term hot and cold extremes 2 ; therefore, groundwater discharges can provide important stream channel thermal refuges and refugia for sensitive aquatic organisms such as salmonid fishes 7 , 8 . However, in response to climate change and land development, streams and rivers have recently shown widespread warming 9 , 10 . Observed stream warming trends are spatially heterogeneous due in part to spatially variable groundwater contributions to streamflow 11 . Thus, effective watershed management will require a process-based characterization of groundwater contribution to streamflow 12 at ecologically relevant scales to predict future stream thermal regimes. The magnitude, spatial distribution, and source-flow path characteristics of groundwater discharge can control the physical characteristics of individual streams 8 , 13 , 14 and whole stream networks 15 . Characterizing the depth of contributing groundwater is particularly important for understanding broad-scale responses of stream ecosystems to land development and climate change 16 for three main reasons: first, groundwater depth is associated with annual thermal stability as natural surface temperature fluctuations are prominent within the shallow aquifer but quickly attenuate with depth 13 . 
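The rapid attenuation of annual temperature signals with depth can be illustrated with the classic one-dimensional conductive heat transport solution, in which the annual amplitude decays as exp(−z/d), where d = √(2D/ω) is the damping depth. As an illustrative estimate only (assuming a typical saturated-sediment thermal diffusivity of D ≈ 0.05 m 2 d −1 ; site-specific values vary, and groundwater advection modifies the profile): $$d=\sqrt{\frac{2D}{\omega }}=\sqrt{\frac{2\left(0.05\ {m}^{2}\,{d}^{-1}\right)}{2\pi /365\ {d}^{-1}}}\approx 2.4\ m$$ so that at ~6 m depth the annual amplitude is reduced to exp(−6/2.4) ≈ 8% of its land-surface value, consistent with the shallow/deep boundary adopted here.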
Deeper groundwater (defined here as greater than approximately 6 m from the land surface) shows little annual thermal variability relative to shallow groundwater 17 that flows through the near-surface portion of the 'critical zone' 18 . Therefore, groundwater discharge can either impart stability (deep groundwater) or variability (shallow groundwater) on atmospheric-driven stream thermal regimes. Hydrogeologic climate simulations support this definition, as water tables below 5 m have shown decoupling from surface energy balances 19 . Second, shallow groundwater is inherently more sensitive to land-use changes 20 and surface contamination 21 , 22 , 23 . Thus, effective watershed management may carry different urgency depending on the depth of contributing groundwater. Deep and shallow groundwater also naturally tend to have different chemical profiles 24 , 25 , 26 , which has important implications for surface water quality and stream ecosystem function, including delivery of legacy contaminants 15 . Third, shallow groundwater can be directly depleted via transpiration 27 and irrigation withdrawals 28 , and is more vulnerable to seasonal water table drawdown during dry periods, while discharge from deeper groundwater sources is more seasonally stable 29 . This depth dependence can affect stream water transit times and catchment water balance, emphasizing the importance of parsing shallow versus deep contributing groundwater flow paths 24 . Though understanding the implications of climate change and land development for stream ecosystems requires quantifying the magnitude and source-depth of groundwater discharge, we lack efficient and broadly applicable methods to characterize source groundwater depth. Most hydrologic techniques for evaluating the physical properties of groundwater discharge are labor-intensive and not spatially and temporally scalable 30 . More efficient methods, such as stream water temperature sensitivity linear regression analyses 31 or physically based hydrograph separation techniques 32 , do not directly differentiate groundwater source-depth. Inference of groundwater source-depth is possible using water chemistry end-member mixing 33 or water isotopic data 34 , but these analyses cannot inherently specify shallow groundwater flow paths without additional hydrologic characterization, and are time and resource-intensive. In the absence of groundwater discharge, annual stream water temperature signals are often well coupled to seasonal variation of local air temperature 35 . A departure from this coupling in terms of seasonal magnitude and timing is characteristic of influence from groundwater discharge of varying depth 8 or dam operation 36 . Discharge of shallow groundwater to streams has physical properties closely tied to seasonally dynamic air temperature and precipitation, quickly responding to short-term perturbations such as hot, dry summers 37 . Discharge from deep groundwater sources does not tend to respond to anomalous weather years but is sensitive to long-term climate trends at extended time scales ranging from decadal to centennial 16 , 38 , 39 . In this work, we used a newly refined methodology to classify 1729 stream sites across the continental United States as having shallow or deep groundwater signatures, lacking a pronounced groundwater signature, or having major dam influence, based on publicly available multi-year air and stream water temperature records and metadata.
Our analysis harnesses the relatively high annual variability in shallow groundwater temperatures and the stability of deep groundwater temperatures to identify characteristic paired air and stream water annual temperature signal relations. We used our classification to (1) compare our annual temperature signal-based categorization to baseflow indices, (2) explore continental spatial patterns and landscape drivers of groundwater discharge characteristics, and (3) evaluate how stream temperature is changing over time (14–30 years) among streams with varied source-depth of groundwater discharge. We present an unprecedented broad-scale inference of groundwater discharge contribution to streams that will inform more accurate predictions of stream responses to changing climate and land use conditions. Results and Discussion Continental classification We used paired air and stream water annual temperature signal relations to broadly classify stream and river sites with atmospheric (i.e., lacking a pronounced groundwater signature), deep groundwater, shallow groundwater, or major dam signatures across the continental U.S. Our sites represent a broad range of stream sizes encompassing 1st to 9th order (median: 3rd order) across 21 of the 25 U.S. physiographic provinces (categorized based on large-scale geomorphology; Supplementary Table 1 ). We used multi-year annual temperature signals as a diagnostic tool because they are less susceptible to variable flow and weather than other stream temperature-based groundwater discharge metrics that rely on short-term thermal variance 40 . Streams below major dams have complex, management-influenced annual thermal regimes 36 and are not explored in detail here. For streams with substantial groundwater discharge, the amplitude and phase of paired annual air and stream water temperature signals decouple in distinctive ways. At sites with a deep groundwater signature, the annual stream temperature signal is highly damped compared to air—quantified by the stream water/air amplitude ratio—but the signals are approximately in-phase. Groundwater discharge from shallow flow paths causes variable stream temperature signal damping, but uniquely shifts the timing of the annual stream water temperature signal later relative to the annual air temperature signal—quantified by the time-forward phase lag. This characteristic phase lag propagates into stream water from adjacent shallow aquifers, whereas deeper groundwater flow paths have a highly attenuated annual temperature signal and thus do not influence the stream water signal phase 8 . For our broad-scale analysis, we assigned categories of shallow and deep groundwater signatures according to paired air and stream water annual signal metrics of amplitude ratios and phase lags based on previous analyses 8 , 40 , 41 . Sites with phase lags greater than 40 days, which is not an expected outcome of even extreme shallow groundwater discharge mixing with stream water 8 , or located within 25 km downstream of major dams, were assigned major dam signatures. Of the 1729 sites we categorized, 305 met this dam criterion and were removed from the groundwater signature analysis. Sites classified as having pronounced groundwater signatures are common in this national dataset: of the 1424 sites analyzed, groundwater substantially influences the annual thermal regimes of 39% ( n = 556).
We classified 47% ( n = 264) of these sites as having deep groundwater signatures, and 53% ( n = 292) as having shallow groundwater signatures (Fig. 1 ). The average amplitude ratio is 0.54 ( σ = 0.10) for sites with deep groundwater signatures and 0.59 ( σ = 0.18) for sites with shallow groundwater signatures. The air to stream water annual signal phase lag averaged 16.6 days ( σ = 6.6 days) for sites with shallow groundwater signatures and 3.8 days ( σ = 3.4 days) for sites with deep groundwater signatures. In contrast, sites with atmospheric signatures are more closely coupled to annual air temperature, with an average amplitude ratio of 0.85 ( σ = 0.12) and a negligible average phase lag of 2.3 days ( σ = 2.7 days) that is not significantly different from zero. Fig. 1: Spatial distribution of stream sites by categorical groundwater signature. Categorical groundwater (GW) signatures derived from annual paired air–stream water temperature signals a across the continental United States and b within a single watershed, the North Fork of the Clearwater River–Lake Creek watershed, Idaho—Montana, USA (Hydrologic Unit Code HUC10 – 1706030701). Lake Creek stream is highlighted. Across the United States, counts of each category are atmospheric signature (pink) n = 868; shallow GW signature (yellow) n = 292; deep GW signature (blue) n = 264. Legend descriptions are maintained between a and b . Base map a was generated from R package 'maps' version 3.3.0 and the National Hydrography Dataset 70 ; b was created from 7.5-minute ground surface elevation data courtesy of the U.S. Geological Survey. Deep and shallow groundwater contributions to streamflow are not mutually exclusive; often a spectrum of flow path depths contributes to streamflow 42 , but our analysis identifies which signature is dominant. The distribution of annual signal metrics within our groundwater contribution categories indicates that the thresholds defining the groundwater signature categories occur near natural breaks (Supplementary Fig. 1 ), indicating alignment with potential groundwater-driven separations of underlying populations in the data. We compared our temperature-based approach for classifying groundwater contribution with multi-year baseflow regression analysis of streamflow data for the subset of sites that had concurrent streamflow records ( n = 554) (Fig. 2 ). Specifically, we calculated the baseflow index (BFI), an estimate of the ratio of baseflow to total streamflow based on the annual stream hydrograph, as it is one of the few current methods for quantifying relative groundwater contributions to streamflow efficiently at broad scales 32 . As may be expected, sites with atmospheric thermal signatures had significantly lower BFIs (median 0.69) than sites with either shallow groundwater (median BFI 0.79) or deep groundwater (median BFI 0.86) signatures (Fig. 2 ). This result aligns with theory that the primary driver of baseflow throughout river networks is groundwater discharge. Fig. 2: Categorical groundwater (GW) signatures compared to baseflow index (BFI). Letters indicate significance at p < 0.05, reported alongside median BFI. Counts of each category are atmospheric signature (pink) n = 401; shallow GW signature (yellow) n = 71; deep GW signature (blue) n = 82. Boxplot center lines are the median and box limits are the upper and lower quartiles.
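These signal metrics can be related to groundwater contribution through an idealized two-component mixing model (a simplification offered here for intuition; it is not the classification method itself). If a fraction f of streamflow is thermally stable deep groundwater and the remainder carries an air-tracking annual signal, linear mixing of temperatures gives $${A}_{r}=\frac{\left(1-f\right){A}_{air}}{{A}_{air}}=1-f$$ with no phase shift, so the mean amplitude ratio of 0.54 at deep-signature sites would correspond to a deep groundwater fraction on the order of f ≈ 0.46 under these idealized assumptions.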
BFI varies among groundwater contribution categories; streams with shallow groundwater signatures have significantly lower BFIs than those with deep groundwater signatures. This observation supports site-specific research that found shallow groundwater sources are less reliable for generating baseflow at seasonal timescales 29 , 37 . Shallow (less than 6 m depth) aquifer flow paths drain a relatively small groundwater reservoir that is highly sensitive to seasonally dynamic recharge rates and transpiration 27 , and are therefore less reliable generators of stream baseflow. In contrast, deep groundwater flow from larger reservoirs is generally sustained throughout the year 42 , 43 at a more constant rate 44 , increasing the average baseflow index in streams dominated by deeper groundwater discharge. This result highlights that effective water resource and aquatic habitat management in a changing world should consider both groundwater connectivity and the source-depth of groundwater discharge. Spatial patterns and physical drivers Our results demonstrate that the spatial distribution of groundwater contributions to streamflow is complex across the continental United States, but large-scale spatial patterns emerge (Fig. 1a ). Physiographic provinces with the highest percentage of deep groundwater signatures are often those expected to have productive aquifers, such as glaciated terrains (e.g., 31% of sites in New England have a deep groundwater signature) or sedimentary bedrock (e.g., 27% of sites in the Colorado Plateau have a deep groundwater signature) (Supplementary Table 1 ). Physiographic provinces that have a high proportion of streams draining steep mountainous terrain with thin soil coverage generally have a higher percentage of shallow groundwater signatures (e.g., Northern Rocky Mountains—74% of sites have shallow groundwater signatures) (Supplementary Table 1 ). Thus, landforms and geologic structures are likely, in part, controlling the spatial patterning of groundwater contribution to streams across the United States. Yet, within regions, there is substantial heterogeneity in groundwater signatures. For example, in the Cascades-Sierra Mountains, 38% of sites have shallow groundwater signatures, and 32% of sites have deep groundwater signatures. This is likely due in part to the geologic variation between the High Cascades (younger, highly fractured volcanic bedrock) and Western Cascades (shallow soils and an abundance of clay) 37 . Also, within the Coastal Plain province (eastern coastline of the United States from Massachusetts to Mexico), while 91% of sites have an atmospheric signature, sites with shallow and deep groundwater signatures do occur in isolated areas such as the Floridian Section, which is dominated by karst aquifers (Fig. 1 , Supplementary Table 1 ). Indeed, atmospheric, shallow, and deep groundwater signatures co-occur within all eight physiographic regions and within 18 out of 21 physiographic provinces considered in our study. Previous research has shown that expected stream water–groundwater connectivity characteristics can be mapped at broad scales from a combination of physiography and climate, a concept supported by relatively sparse BFI analysis 43 .
Because low-cost stream temperature measurements are currently being performed at thousands of publicly available sites nationally, paired air and stream water temperature signal-based analysis offers a highly scalable approach to provide additional specificity regarding groundwater discharge dynamics, refining broad-scale zonation of stream water–groundwater connectivity. Within physiographic regions, local watershed characteristics also likely play an important role in influencing groundwater discharge to streams 45 . Overall, sites with shallow groundwater signatures tend to have higher watershed slopes than sites with atmospheric or deep groundwater signatures (Fig. 3a ). We hypothesize that watersheds with higher slopes are more likely to have a shallow depth to bedrock, which is a known driver of near-surface hillslope groundwater flow to streams 46 . Yet, our results show that strong connectivity of streams and shallow groundwater occurs in environments beyond smaller, steep headwater streams, such as areas with shallow confining layers 47 . Sites with shallow groundwater signatures drain larger watersheds (median 153 km 2 ; Q1–Q3: [17 km 2 , 2131 km 2 ]), have higher streamflow (median 13 m 3 s −1 ), and have a greater range of streamflow (Q1–Q3: [2 m 3 s −1 , 98 m 3 s −1 ]) than sites with deep groundwater signatures (watershed size: 65 km 2 ; Q1–Q3: [18 km 2 , 616 km 2 ]; streamflow 2 m 3 s −1 ; [0.4 m 3 s −1 , 10 m 3 s −1 ]), suggesting shallow groundwater signatures occur across a wide spectrum of hydrogeologic settings that may not be predicted by current conceptual models of baseflow generation. Fig. 3: Watershed properties for groundwater (GW) signature categories. a Mean slope of the watershed draining to each site. b Percent impervious surface from the year 2011 of the local catchment draining to each site. The y -axis is truncated at 40% impervious surface, which removed 44 outliers from the atmospheric signature and 5 outliers from the shallow GW signature categories. c The Hydrologic Disturbance Index for each site based on the GAGES-II dataset 52 , 63 . Higher values indicate more disturbance. For a and c , site counts of each category are atmospheric signature (pink) n = 277; shallow GW signature (yellow) n = 40; deep GW signature (blue) n = 51. Boxplot center lines are the median and box limits are the upper and lower quartiles. For b , site counts of each category are atmospheric signature (pink) n = 831; shallow GW signature (yellow) n = 275; deep GW signature (blue) n = 246. Heterogeneity in groundwater signatures exists even at the sub-watershed scale. For example, in the North Fork of the Clearwater River–Lake Creek watershed in Idaho, USA (Fig. 1b ), sites within the steep headwaters are dominated by shallow groundwater signatures while sites along the mainstem river valley are largely characterized by deep groundwater signatures, with the outlet of the watershed shifting to an atmospheric signature. This watershed represents an important habitat for a range of cold-water salmonid species 48 . Interestingly, a major tributary (Lake Creek, highlighted in Fig. 1b ) was moved to the list of impaired waters in 2010 by the Idaho Department of Environmental Quality for elevated temperature criteria violations 48 . Without explicit consideration of groundwater dynamics, this impairment was attributed to a slight reduction in canopy shading (4%) relative to the local optimal shade target.
However, all four sites we investigated in the upper Lake Creek watershed (one main stem site and three tributaries) are classified as having shallow groundwater signatures, with phase lags greater than 15 days. These large phase lags suggest dominance of the annual thermal regime by shallow groundwater, and we speculate that the previously observed warm stream impairment is due in part to warming of shallow groundwater. Consideration of local to regional groundwater responses to climatic and watershed modifications is crucial yet often overlooked in stream temperature predictions; ignoring these responses can mislead future projections and produce less effective mitigation strategies. The multi-scale heterogeneity of groundwater contribution to streamflow within and among physiographic regions and individual watersheds provides the impetus for higher spatial resolution regional characterization for targeted cold-water species management. Human drivers of stream/groundwater disconnection Human alterations can also influence the spatial patterns of groundwater connectivity and discharge to streams 49 . Our results demonstrate that streams with atmospheric signatures tend to occur in local catchments (the area directly draining to a river segment, excluding any upstream contribution 50 ) with a higher percentage of impervious surface area (Fig. 3b ). Sites with atmospheric signatures also tend to have a higher "Hydrologic Disturbance Index" (HDI), a more holistic metric of human influence derived from seven anthropogenic watershed modifications, not including percent impervious cover 51 , 52 (Fig. 3c ). The median HDI score for atmospheric signature sites is 16, with a maximum of 31. Sites with pronounced deep groundwater signatures have a median watershed HDI of 9, and shallow groundwater signature sites have a median HDI score of 5.5 (Fig. 3c ). This discrepancy in HDI scores between groundwater categories may result in part from the fact that human disturbance more immediately influences shallow groundwater dynamics, and therefore fewer streams in such disturbed basins show shallow groundwater discharge signatures, compared to more resilient deeper groundwater. One of these seven HDI parameters is groundwater withdrawal, which has been shown to have immediate effects on streamflow generation, especially within areas reliant on irrigation, and is generally projected to increase in the future to offset droughts 53 . We hypothesize that, in addition to pumping, the relative lack of sites with groundwater signatures observed in this study in more disturbed landscapes is a result of the many human landscape modifications that reduce groundwater discharge to streams and rivers. These impacts occur either directly through groundwater withdrawal or indirectly through impervious cover and stormwater infrastructure that saps shallow groundwater and diverts precipitation to streams, reducing infiltration and aquifer recharge. Therefore, streams within watersheds with high human modification, predominantly in lowlands, are likely to have lower groundwater connectivity and be more susceptible to warming, though recent research suggests that extreme low flows may be buffered along urban corridors due to infrastructure-based recharge 54 . Understanding how human modifications alter groundwater discharge dynamics across the U.S. will therefore involve disentangling how urban development interacts with geology and landscape features.
Stream temperature temporal trends Quantifying the thermal stability of streams influenced by groundwater discharge is essential for predicting the effects of climate change on stream networks. The capacity of stream water temperature to be buffered against a warming world depends in part on the source depth of groundwater discharge 55 , and high groundwater connectivity is often invoked as a primary driver of persistent cold-water habitat 8 . Indeed, of the 184 sites that had long-term contiguous temperature records (spanning 14 to 30 years), we found that sites with deep groundwater signatures had a substantially smaller proportion of significant positive temperature trends than sites with shallow groundwater or atmospheric signatures (Fig. 4 ). More than half of the long-term sites with atmospheric signatures ( n = 132) have stream water temperatures that increased over the last 14 to 30 years ( n = 70), at rates ranging from 0.01 to 0.09 °C yr −1 (μ: 0.04 °C yr −1 ). Similarly, for long-term sites with shallow groundwater signatures ( n = 29), we found that 45% have increasing stream water temperatures, with rates of warming ranging from 0.01 to 0.1 °C yr −1 (μ: 0.04 °C yr −1 ). The rates of warming for sites with shallow groundwater signatures and atmospheric signatures are consistent with previously reported stream water warming trends 9 , 10 . Fig. 4: Stream water temperature trends based on average monthly values for 14–30 years of data post 1990. a Spatial distribution of stream water temperature annual trends across the United States by groundwater (GW) signature category. Base map was generated from R package 'maps' version 3.3.0. b The proportion of sites with increasing (warming, red; p < 0.05) or decreasing (cooling, blue; p < 0.05) monotonic long-term annual temperature trends, or stable conditions (gray; p > 0.05), by GW signature category. c Similarly, the long-term temperature trends based on summer temperatures (June–August) by GW signature category. Site counts of each category in a – c are atmospheric signature (triangle) n = 132; shallow GW signature (circle) n = 29; and deep GW signature (square) n = 23. In contrast to sites with shallow groundwater signatures, 52% of sites with deep groundwater signatures had stable stream water temperature regimes (Fig. 4a, b ). This finding underscores the strong thermal buffering capacity of deep groundwater discharge and the likely greater resistance to climate warming of groundwater-dependent and cold-water habitat sourced by deep compared to shallow groundwater. The six deep groundwater signature sites with significant warming trends had rates ranging from 0.01 to 0.05 °C yr −1 (μ: 0.01 °C yr −1 ). Sites with deep groundwater signatures also showed the greatest proportion (22% of sites) of significant cooling trends. Although stream cooling trends appear counterintuitive under climate change, they have also been identified in previous work 56 , and may be due to localized changes in winter precipitation patterns 57 . The difference in thermal buffering capacity of streams dominated by shallow versus deep groundwater discharge has been predicted by modeling efforts for individual watersheds 29 , 37 , 55 . Our empirical results confirm these predictions and expand the evidence to sites across the United States. We recognize that there are confounding factors that influence long-term stream temperature, notably discharge variability.
Therefore, streams fed by shallow groundwater could warm at a faster rate in part because of drought conditions or groundwater withdrawal (e.g., for irrigation) lowering groundwater levels, which disproportionately affects shallow groundwater 28 . The disparity between long-term stream temperature trends of sites with shallow versus deep groundwater signatures also occurs during the summer season, when cold-water fishes are most often thermally stressed. Over 70% of sites with shallow groundwater signatures show significant summer season warming trends, compared to 43% of sites with deep groundwater and 61% of sites with atmospheric signatures (Fig. 4c ). These seasonal warming trends follow from the fundamental nature of the classification method, which relies on the pronounced annual temperature signals of shallow groundwater being transferred to stream water via groundwater discharge zones. Sites with shallow groundwater signatures will be immediately sensitive to hotter summers, exacerbating thermal stress on sensitive aquatic organisms 41 . Thus, vulnerable biota within streams dominated by shallow groundwater may not only have to adapt to a warming baseline condition, but also be particularly vulnerable to the impacts of single-season heatwaves. Deep groundwater is more resistant to land surface temperature changes, but still sensitive to longer-term thermal shifts at timescales tied to source flow path depth 38 . This re-emphasizes the importance of distinguishing shallow versus deep groundwater source-depth, rather than assuming that strong baseflow components imply thermal stability. Groundwater discharge to streams and rivers occurs via a spectrum of source groundwater flow paths, which exert high-level controls on streamflow, channel thermal stability, and stream water quality characteristics that are tightly linked to the source aquifer. The relative flow path depth of contributing groundwater is particularly important for stream ecosystems; yet, until recently, we lacked efficient process-based methodology to parse the relative dominance of shallow or deeper groundwater discharge to streams at broad spatial scales. Our continental-scale characterization demonstrates a framework for harnessing burgeoning publicly available air and stream temperature datasets to categorize the relative flow path depth of groundwater contribution to streams and rivers, which can inform how both hydrologic models and stream ecosystem management approaches incorporate groundwater dynamics. Implications of groundwater discharge source-depth Groundwater-dependent ecosystems have become an important consideration for watershed management decisions 1 , and streams with substantial groundwater contributions are generally considered most resilient to change. Our work underscores the need for expanding the direct incorporation of groundwater discharge dynamics, especially source-flow path depth, into decision-making processes and predictive frameworks. Streams with shallow or deep groundwater signatures were ubiquitous nationally (nearly 40% of sites) and distributed across stream sizes, U.S. physiographic provinces, and regional subwatersheds. Yet, regional generalizations remain uncertain at scales relevant for managing stream habitat.
Although the more thermally stable streams with deep groundwater signatures tended to occur more frequently in regions with productive aquifers and in watersheds with lower slopes, they also occurred across nearly all physiographic provinces and a range of watershed slopes and drainage areas. Human land development may explain some of the heterogeneity in groundwater connection, as we found that sites with groundwater signatures were less likely to be located in catchments with high impervious cover or other types of human disturbance, including groundwater pumping and channelization. Our characterization of groundwater contribution to streamflow has important implications for understanding and predicting how streamflow and water quality respond to climate change, groundwater extraction, and watershed development. By definition, shallow aquifer flow paths with pronounced annual temperature signals are tightly coupled to seasonal temperature (and precipitation) dynamics, and our analysis shows that streams influenced by shallow groundwater are more likely to be warming over time than sites with deep groundwater signatures. Shallow groundwater discharge will then have reduced stream cooling potential in summer, particularly during anomalous seasons, when thermal refuges in marginal cold-water habitat are most needed. Our analysis also shows that streams influenced by shallow groundwater tend to have a reduced fraction of total streamflow composed of baseflow compared to streams influenced by deep groundwater. Thus, streams with substantial shallow groundwater contribution are more vulnerable to extreme low flows or drying from climate change-related increases in drought or evapotranspiration, or from increased groundwater extraction. The high responsiveness of shallow groundwater to land surface disturbances also suggests that streams with substantial shallow groundwater contributions are likely more susceptible to diffuse nutrient and other pollution additions, while deeper groundwater can deliver the legacies of past watershed land uses 3 and emerging contaminants such as per- and polyfluoroalkyl substances from outside the river corridor 4 . Still, shallow groundwater-dominated streams may be more responsive to short-term management actions that reduce groundwater extraction and limit land application of fertilizers and other chemicals. Thus, our analysis provides foundational knowledge of the importance of source groundwater discharge flow path depth for stream temperature, flow, and water quality. We consider this additional dimension of groundwater discharge essential to informing current stream process models and necessary for building robust predictions in a time of change. Methods We classified streams by their groundwater signature based on the observed decoupling of annual air and stream water temperature signals, both in terms of amplitude and timing (phase), which is driven by the magnitude and relative source-depth (shallow versus deep) of groundwater discharge to streams 8 . Shallow groundwater is defined here as groundwater within the near-surface critical zone where annual aquifer temperature is highly variable (within approximately 6 m of the land surface), and this variability is transferred to streams through groundwater discharge zones, causing annual temperature signal mixing with characteristic outcomes.
Thermally stable, deeper groundwater discharge serves to attenuate annual stream temperature signals but does not cause notable phase shifts, as deeper groundwater temperature signals are highly attenuated. We used this newly expanded signal processing-based methodology (explained below; see refs. 8 , 40 ) to infer the source-flow path depth of groundwater discharge to streams based on these first principles. We acquired publicly available data from ~4000 discrete stream water temperature stations, of which 1811 met our required data criteria of being located within 25 km of a National Oceanic and Atmospheric Administration (NOAA) air temperature station and having at least 2 consecutive years of temperature data collected in 2010 or after without gaps of 30 continuous days or more. This data-gap criterion is supported by parallel paired air and water temperature signal analysis research 58 . Stream temperature datasets were used from three repositories: the USGS National Water Information System database (NWIS) 59 , the NorWeST Stream Temperature dataset 60 , and the Spatial Hydro-Ecological Decision System (SHEDS); all repositories are assumed to have internal quality assurance and quality control (QA/QC) protocols. Of these, 1729 sites passed our data quality review, as discussed in the Temperature Signal Processing Approach section below. We acquired the paired daily air temperature record for each stream station from the Global Historical Climatology Network-Daily (GHCN-daily) Database 61 using the R package 'rnoaa' 62 . We extracted data from the two nearest NOAA stations. The nearest air station data were used first; however, if the data did not meet our criteria (75% of annual data available and 75% data overlap with paired stream temperature), then a second NOAA station, if available, was evaluated and used if the criteria were met ( n = 191). We linked the coordinates of each stream site to the nearest National Hydrography Dataset Plus flowline common identifier (COMID) (within 250 m) and paired them with the U.S. Environmental Protection Agency (EPA) Stream-Catchment (StreamCat) dataset 50 to obtain watershed land cover. We also paired NWIS sites 59 with the USGS Geospatial Attributes of Gages for Evaluating Streamflow, version II (GAGES II) dataset 63 by station identifier (ID) value to obtain distance from the nearest major dam, watershed slope, and the Hydrologic Disturbance Index. The Hydrologic Disturbance Index is derived from anthropogenic disturbances within the site's watershed, including the presence of major dams, change in reservoir storage from 1950 to 2009, percentage of canals, road density, distance to the nearest major pollutant discharge site, estimated freshwater withdrawal, and calculated fragmentation of undeveloped land 51 . To categorize sites into shallow groundwater, deep groundwater, atmospheric, or major dam signatures, we designed an automated signal processing software tool in Python that fits a static sine curve to the stream water and local air temperature data and derives the paired air and stream water signal metrics of amplitude ratio and phase lag. Although some datasets were collected at sub-daily frequency, average daily values were used for both air temperature and stream water temperature input data. Based on principles described in previous studies, we excluded average daily temperature readings below 1 °C from the analysis, because the paired air–stream temperature relationships decouple due to the freeze–thaw dynamics of water 35 .
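To make these screening criteria concrete, the sketch below checks one paired air–stream record against them. It is a minimal illustration, not the published tool: it assumes daily mean temperatures held in pandas Series indexed by date, the function and constant names are hypothetical, and the 25 km station-proximity test is a separate GIS step not shown here.

```python
import pandas as pd

MAX_GAP_DAYS = 30     # reject records with a gap of 30 continuous days or more
MIN_YEARS = 2         # at least 2 consecutive years, collected in 2010 or after
MIN_COVERAGE = 0.75   # air record: 75% of annual data available
MIN_OVERLAP = 0.75    # air record: 75% overlap with the stream record

def passes_screening(stream: pd.Series, air: pd.Series) -> bool:
    """Screen one paired record; both Series are daily means indexed by date,
    with missing days simply absent from the index."""
    stream = stream.loc[stream.index >= "2010-01-01"].dropna()
    if stream.empty:
        return False
    # Span of at least two consecutive years of data.
    span_days = (stream.index.max() - stream.index.min()).days
    if span_days < MIN_YEARS * 365:
        return False
    # No gap of MAX_GAP_DAYS continuous days or more.
    gaps = stream.index.to_series().diff().dt.days.dropna()
    if (gaps >= MAX_GAP_DAYS).any():
        return False
    # Air-station criteria: annual coverage and overlap with the stream record.
    air = air.dropna()
    if len(air) < MIN_COVERAGE * (span_days + 1):
        return False
    overlap = stream.index.intersection(air.index)
    return len(overlap) >= MIN_OVERLAP * len(stream)
```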
Also, stream temperature values greater than 60 °C were removed during the analysis. For each discrete temperature record, we fit the annual temperature cycle using a linearized static sinusoidal function (equation 1) by minimizing the root mean square error (RMSE) of the average daily temperature residuals (°C) with the Python SciPy optimize curve-fit module 64 . This function was chosen to most simply extract the 'average' fundamental (annual) signal from the time series and is consistent with the analysis conducted by previous studies 8 , 40 . The average daily RMSE for both air and stream water signals at each site is provided in the Fig. 1 Source files. $$T(t) = \alpha \sin (2\pi t) + \beta \cos (2\pi t) + C$$ (1) Using the calculated regression coefficients α and β , we calculated the amplitude ( A ; equation 2) and the phase ( ϕ , in radians; equation 3) of each signal. Time t is expressed as a fraction of the year, with January 1 defined as 1/365. $$A = \sqrt {\alpha ^2 + \beta ^2}$$ (2) $$\phi = \arctan \left( \frac{\beta }{\alpha } \right)$$ (3) We defined the groundwater signature categories by the paired air and stream water signal metrics, which are amplitude ratio ( A r ) and phase lag (Δ ϕ ). We calculated A r by dividing the annual stream water signal amplitude by the annual air temperature signal amplitude; Δ ϕ is calculated as the difference between the phase of the annual stream water temperature signal and that of the air temperature signal, converted from radians to days (d) using 365 divided by 2π. A positive phase lag indicates the number of days the fitted stream water signal is delayed with respect to the fitted air temperature signal. Negative phase lags imply that stream water temperature responds to atmospheric thermal input faster than air, which is not logical for natural stream systems (except those influenced by geothermal heating). As a result, we examined the negative phase lags within the dataset ( n = 454, mean of −4 days). Negative phase lags exceeding 10 days in magnitude ( n = 25) were dropped from the analysis, as these data were associated with heavily managed stream flows, indicated by visual inspection of the stream temperature patterns, or with highly variable winter air temperature data that were not well captured by the fitted sine curve. Negative phase lags between 0 and −10 days ( n = 429) were retained within the dataset but set to 0 for calculations. These data and multi-day atmospheric signature phase lags were attributed to inherent imprecision of signal fitting to natural data, as other studies that use this same method did not show any negative phase lags when using streamside air signals 40 , 41 . Because the classification analysis only utilized parameters α and β , and not C , we assumed altitude differences between air temperature and stream water temperature sampling locations did not have substantial influence on the amplitude ratio or phase lag. We categorized sites as having an atmospheric, shallow groundwater, or deep groundwater signature by identifying 'conservative' threshold values of A r (0.65) and Δϕ (10 days) that parsed only sites with pronounced groundwater signatures (Supplementary Fig. 1 ). These threshold values were chosen based on previously presented stream and groundwater annual signal-mixing theory, process-based modeling, and field data 8 , 40 .
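A minimal Python sketch of this fitting and classification step is shown below; the derivation of the thresholds follows. It assumes day-of-year arrays and daily mean temperatures that have already passed the exclusions above. Two details are our assumptions rather than documented choices of the published tool: np.arctan2 is used in place of arctan(β/α) to resolve the quadrant, and the phase-lag test is given precedence when both thresholds are met.

```python
import numpy as np
from scipy.optimize import curve_fit

def annual_signal(t, alpha, beta, c):
    """Equation (1): static annual sinusoid; t is day of year / 365."""
    return alpha * np.sin(2 * np.pi * t) + beta * np.cos(2 * np.pi * t) + c

def fit_metrics(day_of_year, temps):
    """Fit the annual cycle; return amplitude (eq. 2) and phase (eq. 3)."""
    t = np.asarray(day_of_year) / 365.0
    (alpha, beta, c), _ = curve_fit(annual_signal, t, np.asarray(temps))
    amplitude = np.hypot(alpha, beta)      # sqrt(alpha**2 + beta**2)
    phase = np.arctan2(beta, alpha)        # radians; arctan2 fixes the quadrant
    return amplitude, phase

def classify_site(air_doy, air_temp, stream_doy, stream_temp):
    """Amplitude ratio and phase lag, then the conservative thresholds."""
    a_air, phi_air = fit_metrics(air_doy, air_temp)
    a_stream, phi_stream = fit_metrics(stream_doy, stream_temp)
    ar = a_stream / a_air                                # amplitude ratio A_r
    lag = (phi_stream - phi_air) * 365.0 / (2 * np.pi)   # phase lag in days
    # Small negative lags are set to 0; lags beyond -10 days were dropped.
    lag = max(lag, 0.0)
    if ar > 1.1:
        return "removed: likely poor air-stream pairing"
    if lag >= 10.0:
        return "shallow groundwater"
    if ar < 0.65:
        # A_r < 0.4 additionally triggers a manual check for a major dam
        # within 30 km upstream (see the threshold discussion below).
        return "deep groundwater"
    return "atmospheric"
```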
Specifically, we developed A r and Δϕ thresholds using evidence from three well-studied systems: the Quashnet River, Cape Cod, Massachusetts 8 , Shenandoah National Park, Virginia 41 , and the Olympic Experimental State Forest, Washington 40 . The hydrogeology of the Quashnet River has been extensively characterized 65 , 66 , indicating that streamflow is dominated by deep groundwater discharge that at times makes up close to 100% of total streamflow. Using a dynamic sinusoidal regression technique, Briggs et al. 8 found that A r ranged from approximately 0.49 to 0.63 over a 3-year period with varied climatic conditions. Thus, we chose a threshold of 0.65 to indicate a deep groundwater signature for our study. It is likely that A r values up to approximately 0.75 also indicate substantial deep groundwater influence, but with less certainty. Other physical factors such as channel confinement, aspect, and shading could affect A r , but to date no published work that we are aware of indicates these factors could explain A r < 0.65 without the influence of groundwater. However, we hypothesize that these factors are likely to change the distance downstream that these annual signals can be detected. All A r values less than 0.4 were manually checked for a major dam within 30 km upstream of the site by visual inspection. Extensive field data collected at Shenandoah National Park, a region known to be dominated by shallow bedrock conditions, indicate an average Δϕ of 11 days, and conceptual mixing models of stream and groundwater annual temperature signals from Shenandoah headwater streams indicate a Δϕ of about 10 days or greater when shallow groundwater discharge contributes at least 25% of total streamflow 8 . Therefore, for our analysis we used the threshold phase lag of 10 days to identify sites with a shallow groundwater signature. A r and Δϕ thresholds may vary among watersheds and regions and thus can and should be modified based on additional information about individual watersheds for more precise, localized analyses. However, for the purposes of our analysis these thresholds represent conservative estimates applied across broad spatial scales. Sites with atmospheric signatures in our dataset had an A r between 0.65 and 1.1. Sites with deep groundwater signatures had an A r of 0.05 to 0.65. Sites with amplitude ratio values greater than 1.1 were removed, as these extremes likely reflected poor pairings between the air and stream water station data, or measurement error. Because there are different numbers of sites within each groundwater signature category, we used a modified comparison of means for unbalanced designs for all statistical comparisons 67 . For sites within the USGS NWIS dataset 59 , stream discharge data for 554 stream water sites were available for the same time record as the analyzed temperature dataset. We calculated the baseflow index (BFI) for these 554 stream discharge stations to provide a direct comparison between typically used hydrograph separation methods and our temperature-based methods. We used the 'bfi' function within the USGS-R 'DVstats' package version 0.3.4 to calculate percent baseflow for each site by averaging the percent daily baseflow (daily baseflow discharge divided by total daily flow) over the time period of the temperature record. We analyzed a subset of our stream water temperature records for monotonic 14-year to 30-year trends (January 1990–December 2019).
This record length was chosen to account for the El Niño–Southern Oscillation (ENSO) period of three to seven years; the minimum record length (14 years) therefore encapsulates at least two full cycles. We recognize that these time series are short when accounting for the Pacific Decadal Oscillation; however, our results indicate that there is not a distinction between sites located in the western United States and the rest of the sites. Of the 1424 stream sites without major dam signatures, 197 sites had stream water temperature records with more than 14 complete years of record (i.e., years with greater than 75% of daily average temperature data) within a 30-year time span (1990–2019). Of the remaining sites, we removed a total of 13 sites manually due to data inconsistencies, such as anomalous value sets and managed patterns determined by visual inspection; therefore, 184 sites were analyzed for long-term stream temperature trends. We determined non-parametric Theil–Sen regression slopes for both annual and summer (June–August) time periods using the TheilSen function from the R package 'openair' 68 , which allows the seasonality of average monthly data to be detrended and is robust against outliers. Previous studies have stated that the Theil–Sen approach is comparable to a simple linear regression method when analyzing long-term stream temperatures 9 . We used the monthly averages to reduce autocorrelation and the 'deseason' option of the function to account for potentially important seasonal temperature influences such as changes to snowmelt. Data availability The datasets generated during and/or analyzed during the current study are available in the USGS National Water Information System (NWIS) repository; the NorWeST Stream Temperature repository; the Spatial Hydro-Ecological Decision System (SHEDS) repository; and the NOAA Daily Global Historical Climatology Network (GHCN-Daily) repository. Watershed parameters are from two publicly available datasets: the USGS data release for GAGES-II and the EPA StreamCat dataset. Source data are provided with this paper. Code availability Mathematical algorithms used for the analysis are presented within the text and provide sufficient information for data replication. Signal processing automation code is available on GitHub 69 .
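For readers working in Python rather than R, the trend step can be approximated with scipy.stats.theilslopes applied to deseasonalized monthly means. This is a hedged analogue of the openair TheilSen call rather than the authors' code, and the helper below, including its name and the deseasonalization shortcut, is hypothetical.

```python
import pandas as pd
from scipy.stats import theilslopes

def seasonal_trend(daily: pd.Series, months=None):
    """Theil-Sen slope (deg C per year) of deseasonalized monthly means.

    `daily` is a daily mean stream temperature Series indexed by date;
    pass months=(6, 7, 8) to restrict the trend to the summer season.
    """
    if months is not None:
        daily = daily[daily.index.month.isin(months)]
    monthly = daily.resample("MS").mean()
    # Deseasonalize by subtracting each calendar month's long-term mean,
    # mimicking (loosely) the 'deseason' option of openair's TheilSen.
    anomalies = (monthly
                 - monthly.groupby(monthly.index.month).transform("mean"))
    anomalies = anomalies.dropna()
    decimal_year = anomalies.index.year + (anomalies.index.month - 1) / 12.0
    slope, intercept, lo, hi = theilslopes(anomalies.values, decimal_year)
    return slope, (lo, hi)  # slope plus its 95% confidence interval

# e.g. summer-only trend: slope, ci = seasonal_trend(stream_temps, (6, 7, 8))
```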
A UConn Ph.D. candidate and a faculty member have developed a novel way of gathering data about streams fed by groundwater that provides important insights about the possible effects of climate change. Water is constantly on the move: through the air, through waterways, and underground. Life depends on a consistent supply of water, and details about its journey are necessary for understanding and managing this dynamic resource. However, those details are often difficult to measure. UConn Ph.D. candidate Danielle Hare, in the lab of Ashley Helton, associate professor of Natural Resources and the Environment, has expanded on a novel method to easily access vital details about groundwater, and in doing so, they have discovered that many streams are more vulnerable to stressors like climate change than previously thought. The team has published their findings in the latest issue of Nature Communications. Precipitation enters streams and rivers by flowing over land surfaces, or it percolates through soil into the groundwater. Groundwater then flows back into waterways, but understanding the details, such as the depth of groundwater entering streams, is more challenging. "Normally, you'd have to go to a site and spend a lot of time and money just to figure out the source of groundwater discharging to the stream," says Hare. These details are important for watershed managers, who take into account numerous variables to keep water clean and safe, both for drinking water and for wildlife habitats. Details like depth are crucial because, for example, shallower groundwater reserves are more prone to disturbances than deeper sources. Hare says one of the threats to the streams supplied by shallower groundwater is climate change, as shallow groundwater is more susceptible to warming, which has grave impacts on water resources down the line. Helton explains some of the roles groundwater plays for streams and groundwater-dependent ecosystems. "You can think about the three services that groundwater provides to streams as it discharges back to the streams at the surface," she says. "First is flow; groundwater provides water and deeper groundwater provides more consistent flow. Second, groundwater provides a temperature buffer and what is called thermal refuge for organisms, and deeper groundwater provides more stable temperatures. Third, groundwater provides nutrients and carbon for ecosystems and deeper groundwater often has a different chemical profile." In the case of streams with significant groundwater inputs, Helton says management often defaults to treating all groundwater-dependent streams similarly. Hare, with a strong interest in stream temperatures and groundwater dynamics, sought to explore whether this was truly warranted as part of a class project. "This project was open-ended and it was a great opportunity to combine my interests. We were not sure if it would work, but even if it didn't, I knew I would learn along the way," says Hare. Hare used data that are frequently gathered and often publicly accessible: stream and air temperature measurements. These data are paired at over 1,700 streams nationwide, and the researchers were able to deduce which streams had substantial groundwater inputs and, of those, which were deep or shallow groundwater-fed. The findings were eye-opening. "Something that surprised me was just how prominent shallow groundwater sites are across the US. We saw about 40% of the sites had a substantial groundwater component, and of those, about 50% were shallow.
I would not have guessed that; I would have guessed that there were more deep groundwater sites," says Hare. The researchers were excited that what started as a course project for Hare has turned into such a powerful tool. "This method is straightforward and accessible to watershed managers and stakeholders. There is a lot of power to that. There is no need to spend a lot of money to define different geology; we can simply use a temperature logger or thermometer to monitor the temperatures. They are widely available and straightforward," says Hare. Hare and Helton are hopeful this information will be considered in making watershed management decisions going forward. "The sites that are dominated by groundwater are really widespread and about half were shallow," says Helton. However, this could be problematic when sites are managed as if they were all deep groundwater-fed. Hare cautions that managers could be missing out on important conservation opportunities in the face of challenges that can impact groundwater replenishment. "The streams that are shallow are not going to be buffered as well as we previously thought," says Hare. "Especially when considering groundwater-dependent ecosystems, when we're thinking about fishes, we really do need to consider this, or else we may have a missed opportunity as far as mitigating, supporting, and observing that important ecosystem resource." For those tasked with managing these important watersheds, this new method ensures vital information is no longer out of reach, says Hare. "Where the power is in this study and what makes it distinct is we separate the shallow versus deep components of groundwater. Not only are we able to find streams that are more groundwater-dominated, we can parse that information into whether the groundwater is shallow or deep. The shallow are going to be more susceptible to both climate warming and development changes."
10.1038/s41467-021-21651-0
Earth
Managing UK agriculture with rock dust could absorb up to 45% the atmospheric carbon dioxide needed for net-zero
David Beerling, Substantial carbon drawdown potential from enhanced rock weathering in the United Kingdom, Nature Geoscience (2022). DOI: 10.1038/s41561-022-00925-2. www.nature.com/articles/s41561-022-00925-2 Journal information: Nature Geoscience
https://dx.doi.org/10.1038/s41561-022-00925-2
https://phys.org/news/2022-04-uk-agriculture-absorb-atmospheric-carbon.html
Abstract Achieving national targets for net-zero carbon emissions will require atmospheric carbon dioxide removal strategies compatible with rising agricultural production. One possible method for delivering on these goals is enhanced rock weathering, which involves modifying soils with crushed silicate rocks, such as basalt. Here we use dynamic carbon budget modelling to assess the carbon dioxide removal potential and agricultural benefits of implementing enhanced rock weathering strategies across UK arable croplands. We find that enhanced rock weathering could deliver net carbon dioxide removal of 6–30 MtCO 2 yr −1 for the United Kingdom by 2050, representing up to 45% of the atmospheric carbon removal required nationally to meet net-zero emissions. This suggests that enhanced rock weathering could play a crucial role in national climate mitigation strategies if it were to gain acceptance across national political, local community and farm scales. We show that it is feasible to eliminate the energy-demanding requirement for milling rocks to fine particle sizes. Co-benefits of enhanced rock weathering include substantial mitigation of nitrous oxide, the third most important greenhouse gas, widespread reversal of soil acidification and considerable cost savings from reduced fertilizer usage. Our analyses provide a guide for other nations to pursue their carbon dioxide removal ambitions and decarbonize agriculture, a key source of greenhouse gases. Main Governments worldwide are increasingly translating the Paris Agreement under the United Nations Framework Convention on Climate Change into national strategies for achieving net-zero carbon emissions by 2050. More than 120 nations have set full decarbonization goals that account for 51% of global CO 2 emissions, with the United Kingdom among several of these nations legislating for net-zero emissions 1 . The United Kingdom, where the industrial revolution driven by burning fossil fuels originated, is responsible for ~5% of the cumulative CO 2 emissions over the period 1751–2018 that drive climate change 2 . Carbon emissions in the United Kingdom declined by 43% between 1990 and 2018, owing to the rise of renewables and the transition from coal to natural gas, while the economy grew by 75% (ref. 3 ). Continued phase-out of emissions is, however, required to meet the United Kingdom's net-zero commitment, together with the capture and storage of residual emissions using carbon dioxide removal (CDR) technologies and a strengthening of nature-based carbon sinks 4 . Enhanced rock weathering (ERW), a CDR strategy based on amending soils with crushed calcium- and magnesium-rich silicate rocks, aims to accelerate natural CO 2 sequestration processes 5 , 6 , 7 , 8 . The estimated net global potential for ERW deployed on croplands to draw down CO 2 is substantial, up to 2 GtCO 2 yr −1 (ref. 6 ), with co-benefits for production 9 , 10 , 11 , soil restoration and ocean acidification 7 , 8 , 12 . Agricultural co-benefits can create demand for ERW deployment that is unaffected by diminishing income from carbon-tax receipts generated by other CDR technologies as the transition to clean energy advances and emissions approach net zero 13 . Global action on CDR, and hence progress towards net zero, requires leadership from early-adopting countries through their development of flexible action plans to support policymakers of other nations.
Assessment of the contribution of ERW to the United Kingdom's net-zero commitment is therefore required, given that it is a CDR strategy for assisting with decarbonization while improving food production and rebuilding soils degraded by intensified land management 9 . Here we examine in detail the technical potential of ERW implementation on UK arable croplands in a national net-zero context and provide a blueprint by which other nations may proceed with this CDR technology as part of their legislated plans for decarbonization. Using coupled climate–carbon–nitrogen (climate–C–N) cycle modelling of ERW (Methods and Extended Data Fig. 1 ), we constructed dynamic UK net 2020–2070 C removal budgets and CDR costs after accounting for secondary CO 2 emissions from the ERW supply chain (Methods and Extended Data Fig. 2 ). Coupled C–N cycle ERW modelling provides the fundamental advance in assessing the effects of cropland N fertilizers on the soil alkalinity balance and mineral weathering kinetics (Methods and Extended Data Fig. 3 ; Supplementary Information ) and ERW-related mitigation of nitrous oxide (N 2 O) emissions from agricultural soils 14 . Nitrous oxide is a key long-lived greenhouse gas and important stratospheric-ozone-depleting substance 15 ; UK agriculture accounts for 75% of N 2 O emissions nationally with high external costs (~£1 billion yr −1 ) 16 . Our analysis, constrained by future energy policies 17 , utilizes basalt as an abundant natural silicate rock suitable for ERW with croplands 9 , 10 , 11 , with low- (S1), medium- (S2) and high- (S3) extraction scenarios between 2035 and 2050 (Methods and Extended Data Fig. 4 ; Supplementary Information ). Patterns of cropland CDR Across basalt supply scenarios S1 to S3, ERW implementation on arable lands was simulated to remove 6–30 MtCO 2 yr −1 by 2050 (Fig. 1a–c ); that is, up to 45% of the CO 2 emissions removal required for UK net-zero emissions (balanced net-zero pathway engineered carbon removal requirement ~58 MtCO 2 yr −1 ; range 45–112 MtCO 2 yr −1 ) 4 . Modelled maximum CDR rates were predominantly governed by the geographical extent of ERW application, which increased as resource provision allowed (Fig. 1a–c ). Year-on-year legacy effects are also important. CDR rates per unit area increased over time with successive annual applications of rock dust, even if the land area of deployment remained constant. These effects are evident in all scenarios when basalt extraction levelled off, and result from slower-weathering silicate minerals continuing to capture CO 2 in years post-application before they are fully dissolved 6 . By quantifying the geochemical dissolution rates governing ERW and legacy effects, our simulations indicated that the CDR potential of ERW rises over time to exceed that suggested by previous mass balance estimates 18 , 19 , 20 . Fig. 1: Net CDR by ERW deployed on UK arable croplands. a – c , Simulated net CDR (left y axis) and annual basalt extraction (right y axis) for S1 ( a ), S2 ( b ) and S3 ( c ) resource extraction scenarios. Results are shown for two particle size distributions (p80 = 10 µm diameter and p80 = 100 µm diameter). The shaded envelopes denote 95% confidence limits. d – f , Isolines of UK decadal running-average net CDR (MtCO 2 yr −1 ) for S1 ( d ), S2 ( e ) and S3 ( f ) over time (2020–2070). a – f show mean results for three UK-specific basalts.
g – i , Cumulative net CDR over time for S1 ( g ), S2 ( h ) and S3 ( i ) resource extraction scenarios by UK region, showing the mean of simulations with p80 = 10 µm and p80 = 100 µm and three UK-specific basalts. Insets in g – i show the cumulative CDR time series for 2020 to 2050. Net-zero pathways for greenhouse gas removal internationally 21 , and in the United Kingdom 4 , have tended to focus narrowly on bioenergy with carbon capture and storage (BECCS), and direct air carbon capture and storage (DACCS). However, our new results indicate that ERW could be an important overlooked component of national CDR technology net-zero portfolios, working synergistically with croplands, rather than competing with them, as large-scale deployment of BECCS might. In S1, for example, ERW reaches net CDR of 5 MtCO 2 yr −1 by 2050, equalling the DACCS estimate 5 , and closer to 10 MtCO 2 yr −1 by 2060 (Fig. 1a ). In the highest resource scenario, S3, ERW delivers approximately half of the net CDR forecast for UK BECCS facilities 5 by 2050 (Fig. 1c ). Milling rocks to fine particle sizes is the most energy-demanding step in the ERW supply chain 18 , 22 . We therefore assessed a range of options for milled rock particle sizes, as defined by p80 (that is, 80% of the particles have a diameter of less than or equal to the specified value), and the associated energy demands across scenarios S1 to S3 (Fig. 1d–f ). For all scenarios, we show that particle size typically has a small effect on net CDR for the first 10–20 years of implementation, as indicated by flat CDR isolines. In the model, ERW deployment locations were prioritized over time, starting from high weathering potential and progressing to low weathering potential. The prioritization of sites with high weathering potential in the first couple of decades means that basalt particles are weathered rapidly regardless of size, a result verified with soil column experiments 23 . In S2, for example, a drawdown of 3 MtCO 2 yr −1 in 2035 with a p80 of 500 µm was achieved only 5 years earlier by milling to a p80 of 10 µm. Our dynamic simulations of temporal ERW carbon budgets, together with recent experimental findings 23 , challenge the assumption that rocks must be ground finely to accelerate dissolution for effective CDR 7 , 8 , 18 , 22 . Coarser particles minimize health and safety risks when handling rock dust, in addition to reducing energy demand. However, as S2 and S3 encompass rock dust application on more agricultural land post-2040, with a greater proportion of sub-optimal weathering locations, the dissolution of small particles becomes relevant and the effect of p80 on net CDR increasingly apparent. Energy requirements for delivering ERW are generally low. Before 2035, the energy demand for rock grinding was minimal across all three scenarios at ~1 TWh yr −1 , which is less than 0.2% of the United Kingdom's power production (Extended Data Fig. 5 ). After 2040, the energy demand for grinding an increased rock mass to be distributed across an expanding area of arable land increased. However, limiting grinding to achieve rock dust with a p80 of 100 µm or more keeps energy demand to less than or equal to 4 TWh yr −1 , or 0.6% of UK production for all scenarios. These results mitigate previous concerns that undertaking extensive deployment of ERW in the United Kingdom may compromise energy security 13 .
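The paper's grinding-energy figures rest on an empirical energy–p80 relation (ref. 6) that is not reproduced here. As a rough stand-in, the classical Bond comminution law illustrates why energy demand rises steeply only at fine grinds; the work index and feed size below are illustrative assumptions, not values from the study.

```python
import numpy as np

def grinding_energy_kwh_per_t(p80_um, f80_um=100_000.0, work_index=20.0):
    """Bond's law: E = 10 * Wi * (1/sqrt(P80) - 1/sqrt(F80)), sizes in microns.

    work_index ~20 kWh per tonne is a rough figure for hard rock, and
    f80_um is a placeholder feed size; both are assumptions for
    illustration only.
    """
    return 10.0 * work_index * (1.0 / np.sqrt(p80_um) - 1.0 / np.sqrt(f80_um))

for p80 in (500.0, 100.0, 10.0):
    energy = grinding_energy_kwh_per_t(p80)
    print(f"p80 = {p80:5.0f} um -> {energy:5.1f} kWh per tonne")
```

Under these assumptions the energy per tonne roughly triples between a p80 of 100 µm and 10 µm, consistent with the text's point that keeping p80 at 100 µm or coarser holds national grinding demand to a small fraction of UK power production.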
Reducing cumulative CO 2 emissions on the pathway to net zero helps minimize the United Kingdom's contribution to the remaining future carbon budget consistent with keeping warming below a given level 24 . Assuming that ERW practices are maintained between 2020 and 2070, the resulting cumulative net CO 2 drawdown was simulated to be 200, 410, and 800 MtCO 2 by 2070 for S1, S2 and S3, respectively (Fig. 1g–i ). Longer-term compensatory ocean outgassing and sediment CaCO 3 uptake could reduce net CDR effectiveness by 10–15% by 2070 (Extended Data Fig. 6 ). Attained over 50 years with ERW, these cumulative CDR ranges compare with an estimated ~696 MtCO 2 sequestration over 100 years for afforestation in organic soils of the Scottish uplands 24 , while avoiding possible soil carbon loss from tree planting 25 and sustained long-term management requirements. More broadly, cumulative ERW-based CDR ranges are comparable to CO 2 removal estimates for UK woodland creation schemes aligned to a balanced net-zero framework (112 MtCO 2 by 2050 and ~300 MtCO 2 by 2070) 26 . A breakdown of cumulative CDR by region revealed marked shifts in regional contributions from S1 to S3, with increasing contributions over time from croplands in Scotland, northeastern and southwest England, and the Midlands. These regions have acidic soils, where early deployment offers increasing CDR over time from legacy weathering effects. The more aggressive CDR strategy of S3 requires using less optimal regions for ERW, including those with the lowest rainfall (southeast and eastern England). Mapped UK-wide CDR rates per unit area provide fine-scale estimates of modelled carbon removal potential across space and time, providing an important tool for precisely targeting ERW interventions (Fig. 2a–c ). Results highlight the limited cropland area required for CDR by ERW in the first couple of decades in S1 and S2, and the rise in CDR per unit area over time. Across all decades and scenarios, our geospatial net CDR estimates typically exceed those for the low-carbon farming practices forming part of net-zero pathways for agriculture 4 , including switching to less intensive tillage (typically ~1 tCO 2 ha −1 yr −1 ) 27 , conversion of arable land to ley pasture (~1–5 tCO 2 ha −1 yr −1 ) 28 and inclusion of cover crops in cropping systems (1.1 ± 0.3 tCO 2 ha −1 yr −1 ) 29 . Fig. 2: Mapped fine-scale decadal average UK net CDR. a – c , Mapped net CDR from ERW deployed on arable croplands for S1 ( a ), S2 ( b ) and S3 ( c ) resource extraction scenarios is shown for the decades indicated. The mean of simulations with p80 = 10 µm and p80 = 100 µm and three UK-specific basalts is shown. Underlying the geospatial maps of net CDR are strong cycles in alkalinity generation and soil pH, and intra-annual dissolution/precipitation of soil carbonates, driven by seasonal climate and crop production effects (Extended Data Fig. 7 ). These results show a decline in the periodic dissolution of soil (pedogenic) carbonates over decades as the cumulative effect of alkalinity systematically raises the seasonal minimum in soil pH and drives a steady increase in the net CDR per unit area each year. Rising alkalinity over time increases the soil buffer capacity, which reduces the risk of pH reversal, thereby improving security of CO 2 storage.
These results for the UK maritime climate are consistent with soil carbonate accumulation and persistence in arid systems 30 , and highlight the challenge of monitoring, reporting and verifying CDR via seasonal dynamics of soil carbonates, and soil fluid alkalinity discharge, over multiple field seasons. Costs of cropland CDR The costs of CDR must be known to evaluate commercial feasibility, permit comparison with other CDR technologies and allow governments to understand the carbon price required to pay for it. Between 2020 and 2070, CDR costs fall from £200–250 tCO 2 −1 yr −1 in 2020 to £80–110 tCO 2 −1 yr −1 by 2070 (Fig. 3a–c ). Modelled longer-term cost trends are driven by rising CDR with successive rock dust applications (Fig. 1a–c ) and declining renewable energy prices (Methods and Extended Data Fig. 8 ). Grinding rocks to smaller particle sizes carries a minor financial penalty. As the geographical deployment of ERW increases in S3, the price of CDR rises from 2030 to 2050 due to higher total energy costs associated with grinding more rock and the requirement for more extensive logistical operations, particularly spreading of the rock dust over farms. However, it subsequently falls as CDR rates increase with repeated rock dust applications (Fig. 1 ). The dominant cost elements are electricity for rock grinding and fuel for spreading the milled rock on farmland (Fig. 3d–f ). Mineral P and K nutrient fertilizers are expensive (£300–400 t −1 and £250–300 t −1 for P and K fertilizers, respectively) 31 . Given fertilizer application rates per unit of land area typical for arable crops (Extended Data Fig. 9 ), using basalt could provide savings sufficient to cover transport costs (Fig. 3d–f ). Fig. 3: Costs of CDR by ERW deployed on UK arable croplands. a – c , Costs of net CDR for S1 ( a ), S2 ( b ) and S3 ( c ) resource extraction scenarios over time (2020–2070). Results are shown for two particle size distributions (p80 = 10 µm and p80 = 100 µm). The shaded envelopes denote 95% confidence limits. d – f , Breakdown of ERW processes contributing to CDR costs, including savings resulting from basalt substituting for P and K fertilizers averaged for 2060–2070 under S1 ( d ), S2 ( e ) and S3 ( f ). Error bars indicate 95% confidence limits. All panels display average results for three UK-specific basalts. Modelled average CDR costs for ERW practices are towards the lower end of the range for BECCS, which varies widely across sectors 4 (£70–275 tCO 2 −1 ), and about half of that estimated for early-stage DACCS plants. DACCS CDR has an indicative price of £400 tCO 2 −1 during the 2020s and £180 tCO 2 −1 by 2050 as the technology develops and scales up globally 4 , 21 . ERW is thus competitive relative to industrial CDR technologies such as these that will also be required to help achieve net-zero emissions. Fine-scale spatial and temporal assessment of CDR costs (Fig. 4a–c ), combined with analysis of regional CO 2 drawdown (Fig. 2a–c ), informs geographical prioritization of near-term opportunities for rapid ERW deployment and public consultations on these activities. Costs in all scenarios decrease through time as CDR rises, with geographical variations in CDR costs approximately twofold by 2050–2060. These patterns reflect differences in CDR and, to a lesser extent, transport distances between source rocks and croplands.
By 2060–2070, the lowest costs (£75–100 tCO 2 −1 ) occur in the northeast of England, the Midlands and Scotland, where CDR rates are highest because of favourable soil weathering environments and regional climate effects on site water balance (precipitation minus evapotranspiration). Fig. 4: Mapped fine-scale decadal average UK net CDR costs. a – c , Mapped net CDR costs of ERW deployed on arable croplands for S1 ( a ), S2 ( b ) and S3 ( c ) resource extraction scenarios for the decades indicated. The mean of simulations with p80 = 10 µm and p80 = 100 µm and three UK-specific basalts is shown. Nations committing to net-zero targets require carefully designed economic and policy frameworks to incentivize uptake and cover the costs of CDR technologies 13 , 21 , as well as the modification of existing emissions trading schemes. Costs might be met in the near term through farming subsidies; agriculture is heavily supported in most countries worldwide 13 . Actions to enhance soil carbon storage are already subsidized in the United States, and European proposals to incentivize CDR by farmers are underway 32 . Redesigned agricultural policies in the United Kingdom post-Brexit aim to provide public funding to support farmers in delivering environmental public goods and contributing to net-zero emissions 33 by 2050. Identifying strategic options, such as ERW, with multiple co-benefits for agricultural productivity and the environment is key to enhancing uptake. Co-benefits of ERW for agriculture Arable soils are a critical resource supporting multiple ecosystem services, and the adoption of ERW into current agricultural practices could enhance soil functions. We quantified three major soil-based co-benefits with the potential to increase the demand for early deployment of the technology: reducing excess soil acidity, increasing the primary supply of fertilizer-based mineral nutrients (P and K) 5 , 9 , 10 and mitigating soil N 2 O fluxes 14 . Soil acidity (that is, pH below 6.5) 34 limits yields, and correction is essential for good soil management, crop growth, nutrient use efficiency and environmental protection 35 . Following initialization with topsoil (0–15 cm) pH values based on high-resolution field datasets (Methods), the implementation of ERW reduces the fraction of arable soils with pH less than 6.5 in England to 13% by 2035 (S1), and eliminates such soils by 2045 and 2055 in S2 and S3, respectively (Fig. 5a ). In Scotland, where agricultural soils are more acidic than in England, the co-benefit of ERW in raising soil pH could be considerable, with reductions to 10% by 2050 in S1 and elimination of acidic soils by 2045–2050 in S2 and S3 (Fig. 5b ). Reversing soil acidification across England and Scotland could increase nutrient uptake to boost yields on underperforming croplands 34 , 35 , lower the potential for metal toxicity 10 at low pH and enhance N fixation by legumes 36 . Calcium released by ERW can also stimulate root growth and water uptake 37 , and multi-element basalt can fortify staple crops such as cereals with important micronutrients, including iron and zinc 9 . Raising soil pH with widespread ERW practices in the United Kingdom, and elsewhere, to improve agricultural productivity 38 releases land for additional CDR opportunities, including afforestation and bioenergy cropping 4 , 21 . Fig. 5: Agricultural ecosystem co-benefits of ERW. a , b , Reduction in the fraction of acidic land in England ( a ) and Scotland ( b ) following deployment of ERW.
c , d , CO 2 emissions avoided ( c ) and cost savings ( d ) resulting from using basalt to substitute for P and K fertilizers. e , f , Soil N 2 O emissions reductions from croplands ( e ) and percentage change from 2010 ( f ) following ERW deployment. N 2 O results are shown as 10 yr annual running averages. The black line in e and f denotes results of the control 'no basalt' simulations. Results are shown for S1, S2 and S3 resource extraction scenarios in all panels, with the line style indicating the particle size distribution (p80 = 10 µm and p80 = 100 µm); the legend in b applies to all panels. The shaded envelopes denote 95% confidence limits. Calculated rates of inorganic P and K nutrient supply for crops via ERW of basalt are comparable to typical P and K fertilizer application rates for major tillage crops (Extended Data Fig. 9 ). ERW with basalt could therefore substantially reduce the reliance of agriculture on the expensive and finite rock-derived sources of P and K fertilizers required to support increased agricultural production over the next 50 years in the United Kingdom, and globally, to meet the demands of a growing human population 39 . Reductions in P and K fertilizer usage lower unintended environmental impacts, supply chain CO 2 emissions and costs. For the United Kingdom, assuming that annual fertilizer applications to replenish pools of P and K on ERW cropland areas in S1–S3 are reduced accordingly, the avoided carbon emissions are estimated to be 0.1–1 MtCO 2 yr −1 , with maximum cost savings of £100–700 million yr −1 by 2070 (Fig. 5c, d ). However, we note that not all crops require annual fertilization. These savings could contribute to offsetting the cost of undertaking ERW practices, but may be reduced by precision farming techniques, including applying variable levels of fertilizers within fields, and controlled-release fertilizers. Practices that optimize the efficient use of N on croplands to reduce N 2 O emissions from soils are important for ambitious net-zero agriculture pathways in the United Kingdom 4 . Our process-based model simulations, calibrated with field data 14 , indicate that ERW deployment on UK croplands could reduce soil N 2 O emissions by ~0.1 Mt of CO 2 equivalent (CO 2 e) per year, ~1 MtCO 2 e yr −1 and ~1.5 MtCO 2 e yr −1 by 2070 in S1, S2 and S3, respectively (Fig. 5e ); this equates to a reduction of up to 20% relative to croplands in 2010 (Fig. 5f ). This contrasts with large-scale land-based CDR strategies for increasing soil organic carbon stocks, which can increase soil N 2 O emissions 40 . ERW may therefore offer a new management option for mitigating soil N 2 O fluxes that is comparable in magnitude to other proposed abatement measures 41 , with the additional win of CDR. Societal and community acceptability Societal acceptance of ERW practices is needed on all scales, from the national-political scale to the local community and individual farm scales. 'Acceptance' in this context should be regarded not as an absolute mandate to proceed, but instead as recognition of the need to work with stakeholders and affected publics to identify the conditions under which this technology might proceed 42 . Additional mining operations with unintended environmental impacts raise particular sensitivities 42 , and two of our scenarios (S2 and S3) require new mines to be established between 2035 and 2050 to provide basalt; increases post-2035 account for delays due to complex licensing procedures (Extended Data Fig. 4 ).
Concentrating resource production at larger sites (~1 Mt basalt yr −1 ) requires annual increases in mine numbers of 6% (S2) and 13% (S3); smaller mines (~250 kt yr −1 ) necessitate larger annual increases ( Supplementary Information ). However, the scale-up rate is less than the historical 10-year maximum (1960–1970) and is limited to a 15-year period (2035–2050). Recycling the United Kingdom's annually produced calcium silicate construction and demolition waste (~80 Mt yr −1 ) 43 , which has the potential to substitute for basalt 6 , could substantially reduce mined resource demand, by 80% (S2) and 45% (S3). Traditional mining operations provide local employment opportunities but have encountered controversy nonetheless because of concerns about sustainability, community impacts and local health and environmental risks 44 . Mining operations to enhance national carbon sequestration may raise different ethical and risk–benefit narratives 45 . Procedural and distributional fairness in siting mines, alongside long-term proactive engagement with the communities likely to be affected by any new mining operations, will be critical for acceptance 44 , together with sustainable management plans for quarry restoration post-extraction 46 , 47 . The issue of mining new materials for CDR is part of a wider debate regarding the sustainability of increasing resource extraction for green technologies, such as electric vehicles or photovoltaic cells. Achieving this at scale requires the development of innovative solutions that combine improved resource efficiency and use of waste mining products, circular economy production systems and extraction efforts focused primarily in the regions or countries where materials are to be used 48 . Although nature-based techniques for CDR (for example, forestry and carbon sequestration in soils) are likely to be preferred by public groups over engineered technologies 42 , 49 , they are unlikely to be sufficient to deliver net-zero emissions nationally or globally. Above all, broad societal support is unlikely to be forthcoming unless ERW is developed alongside an ambitious portfolio of conventional climate mitigation policies 49 . Implications for ERW deployment Our analysis with dynamic ERW carbon budget modelling suggests that this technically straightforward-to-implement CDR technology could prove transformative for utilizing agriculture to mitigate climate change 6 , 9 , 10 and play a larger role in national CDR portfolio programmes than previously realized. Unlike industrial CDR processes, including BECCS or DACCS, ERW could be rolled out without major new industrial infrastructure, and incentivized through amended agricultural subsidy frameworks. We show that eliminating the energy-demanding requirement for milling rocks to fine particle sizes requires early and sustained implementation of ERW practices, subject to public acceptance. This has the additional important advantages of maximizing CDR and lowering costs to a highly competitive price of £80–110 tCO 2 −1 yr −1 by 2070. Our findings underscore the urgent need for long-term field trials across a range of agricultural systems to evaluate this technology with empirical evidence, alongside monitoring of potential unintended negative consequences 9 , 50 . High-resolution geospatial ERW assessments provide a detailed basis for mapping out routes to technological development and afford opportunities to minimize social and economic barriers by identifying priority regions for public engagement.
Scaling up ERW in the United Kingdom and other G20 nations will require funding, public support, regulation and governance to ensure sustainability, and a stable policy framework 4 , 13 to accelerate global CDR goals with agriculture 6 , 9 , 10 as the world transitions to net-zero emissions. Methods Resource extraction scenarios Under S1, per-capita production of aggregates continues to fall from 1.9 to 1.5 t yr −1 by 2032 and remains constant thereafter, with the spare capacity used and ramped up for ERW. Under S2, rock extraction is scaled up by 7% (half the historical maximum rate of increase) until the total additional capacity is equal to the maximum historical value in 1990 (100 Mt yr −1 ). Under S3, rock extraction is scaled up by 15% (that is, historical annual 10-yr rolling average) until the additional capacity is 160 Mt yr −1 ; that is, equivalent to the total increase in the UK crushed rock supply post-1945 ( Supplementary Information ). Extraction of resources scales at rates compatible with historical patterns (Extended Data Fig. 4 ) and those advanced for delivering CDR by BECCS (and its supply chains) and DACCS 4 . Soil profile ERW modelling Our analysis used a one-dimensional vertical reactive transport model for rock weathering with steady-state flow and transport through a series of soil layers. The transport equation included a source term that represents rock grain dissolution within the soil profile 4 with advancements to incorporate the effects of the biogeochemical transformations of N fertilizers ( Supplementary Information ). The core model accounted for changing dissolution rates with soil depth and time as grains dissolve, chemical inhibition of dissolution as pore fluids approach equilibrium with respect to the reacting basaltic mineral phases, and the formation and dissolution of pedogenic calcium carbonate mineral in equilibrium with pore fluids 4 . Simulations considered UK basalts with specified mineralogies from three commercial quarries ( Supplementary Information ). We modelled the ERW of a defined particle size distribution (psd) with theory developed previously 4 . As the existing psds at each soil layer are at different stages of weathering, the combined psd at each level, and for each mineral, was calculated and tracked over time 4 . We accounted for repeated basalt applications by combining the existing psd with the psd of the new application. Simulated mineral dissolution fluxes from the model output were used to calculate the release of P and K over time. Mass transfer of P within the relatively more rapidly dissolving 51 accessory mineral apatite was calculated on the basis of the P content of the rock and the volume of bulk minerals dissolved during each time step. The mathematical model combined a multi-species geochemical transport model with a mineral mass balance and rate equations for the chemical dissolution of basaltic mineral phases. The model included an alkalinity mass balance that incorporated the effect of fertilizer applications and soil N cycling and dynamic calculations of pH in soil pore waters. The main governing equations are detailed below. 
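Before those governing equations, the extraction ramp rules above lend themselves to a compact illustration. The sketch below compounds capacity growth until each scenario's cap is reached; the start year, end year and initial capacity are assumptions for illustration only (the study's actual scheduling follows Extended Data Fig. 4).

```python
def extraction_trajectory(growth, cap_mt, start=2035, end=2070, initial_mt=5.0):
    """Illustrative additional basalt extraction capacity (Mt per year).

    growth  annual scale-up rate (0.07 for S2, 0.15 for S3)
    cap_mt  ceiling on additional capacity (100 Mt/yr for S2, 160 for S3)
    The start year, end year and initial capacity are placeholders, not
    values from the study.
    """
    capacity, series = initial_mt, {}
    for year in range(start, end + 1):
        series[year] = capacity
        capacity = min(capacity * (1.0 + growth), cap_mt)
    return series

s2 = extraction_trajectory(growth=0.07, cap_mt=100.0)
s3 = extraction_trajectory(growth=0.15, cap_mt=160.0)
```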
Transport equation The calculated state variable in the transport equation is the dissolved molar equivalents of elements released by stoichiometric dissolution of mineral i , in units of mol l −1 ; ϕ is the volumetric water content, C i is the dissolved concentration (in mol l −1 ) of mineral i transferred to solution, t is time (months), q is vertical water flux (m y −1 ), z is the distance along the vertical flow path (m), R i is the weathering rate of basalt mineral i (mole per litre of bulk soil month −1 ) and C eqi is the solution concentration of weathering product at equilibrium with the mineral phase i (equation (1) ). Values for C eqi for each of the mineral phases in the basalt grains were obtained by calibrating the results of the performance model against those of a 1D reactive transport model, as described previously 4 . Rates of basalt grain weathering defined the source term for weathering products and were calculated as a function of soil pH, soil temperature, soil hydrology, soil respiration and crop net primary productivity. The vertical water flux was zero when pore water content was below a critical threshold for vertical flow. Weathering occurred under no-flow conditions and the accumulated solutes in pore water were then advected when water flow was initiated under sufficient wetting, tracked using a single bucket model. $$\phi \frac{{\partial C_i}}{{\partial t}} = - q\frac{{\partial C_i}}{{\partial z}} + R_i\left( {1 - \frac{{C_i}}{{C_{\mathrm{eqi}}}}} \right)$$ (1) Mineral mass balance The change in mass of basalt mineral i , B i , is defined by the rate of stoichiometric mass transfer of mineral i elements to solution. Equation (2) is required because we considered a finite mass of weathering rock, which over time could react to completion, either when solubility equilibrium between minerals and pore water composition was reached, or when applied basalt was fully depleted. $$\frac{{\partial B_i}}{{\partial t}} = - R_i\left( {1 - \frac{{C_i}}{{C_{\mathrm{eqi}}}}} \right)$$ (2) Removal of weathering products The total mass balance over time (equation (3) ) for basalt mineral weathering allows calculation of the products transported from the soil profile. The total mass of weathering basalt is defined as follows where m is the total number of weathering minerals in the rock, T is the duration of weathering and L is the total depth of the soil profile (in m). We define q , the vertical water flux, as the net monthly sum of water from precipitation and irrigation, minus evapotranspiration, as calculated by the Community Land Model v.5 (CLM5). $${{{\mathrm{Total}}}}\;{{{\mathrm{weathered}}}}\;{{{\mathrm{basalt}}}} = \mathop {\sum }\limits_{i = 1}^m \phi \mathop {\smallint }\limits_{z = 0}^L C_i\left( {t,z} \right){{{\mathrm{d}}}}z + q\mathop {\smallint }\limits_{t = 0}^{T} C_i\left( {t,L} \right){{{\mathrm{d}}}}t$$ (3) Coupled climate–C–N cycle ERW simulations Our model simulation framework (Extended Data Fig. 1 ) started with future UK climates (2020–2070) from the medium-mitigation future pathway climate (Shared Socioeconomic Pathway (SSP) 3-7.0) ensemble of Coupled Model Intercomparison Project Phase 6 (CMIP6) runs with the Community Earth System Model v.2. Future climates were used to drive CLM5 to simulate at high spatial resolution (23 km × 31 km) and high temporal resolution (30 min) terrestrial C and N cycling with prognostic crop growth and other ecosystem processes, including heterotrophic respiration 52 , 53 ( Supplementary Information ). 
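Returning to the transport and mass-balance equations (1) and (2) above, a minimal explicit upwind scheme illustrates how they can be stepped forward for a single mineral phase. This sketch omits the no-flow solute accumulation, pedogenic carbonate formation and multi-mineral particle-size bookkeeping of the full model, and all names are hypothetical.

```python
import numpy as np

def weathering_step(C, B, R, C_eq, q, phi, dz, dt):
    """One explicit step of equations (1)-(2) for a single mineral phase.

    C    dissolved concentration per soil layer (mol per litre)
    B    remaining mineral mass per layer (mol per litre of bulk soil)
    R    weathering rate per layer (mol per litre of bulk soil per month)
    C_eq equilibrium concentration for this mineral (mol per litre)
    q    vertical water flux (m per month); phi volumetric water content
    dz   layer thickness (m); dt time step (months); this explicit scheme
         needs dt <= phi * dz / q for stability
    """
    # Affinity-limited source term R_i * (1 - C_i / C_eqi), switched off
    # once the applied mineral in a layer is exhausted.
    source = np.where(B > 0.0, R * (1.0 - C / C_eq), 0.0)
    # Upwind advection of solutes downward; inflow at the surface is zero.
    C_above = np.concatenate(([0.0], C[:-1]))
    advection = -q * (C - C_above) / dz
    C_new = C + dt * (advection + source) / phi
    B_new = np.maximum(B - dt * source, 0.0)  # equation (2), mineral depletion
    return C_new, B_new
```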
CLM5 simulates monthly crop productivity, soil hydrology (precipitation minus evapotranspiration), soil respiration and N cycling. CLM5 includes representation of eight crop functional types, each with specific ecophysiological, phenological and biogeochemical parameters 52 , 53 . CLM5 includes CO 2 fertilization effects on agricultural systems benchmarked against experiments and observations 54 , 55 . An atmospheric CO 2 increase of ~200 ppm from 2015 to 2070 is defined by SSP3-7.0. In our CLM5 simulations with rising CO 2 and climate change, wheat net primary productivity increased by 8%, evapotranspiration decreased by 21% and water-use efficiency increased by 25% ( Supplementary Information ). Both increasing net primary productivity and decreasing evapotranspiration can facilitate weathering in our soil profile ERW model ( Supplementary Information ). We initialized CLM5 simulations for 2010 using fully spun-up conditions from global runs at ~100 km × 100 km resolution, adding an extra 60 yr spin-up in the regional set-up to stabilize the C and N pools to the higher-resolution setting. CLM5 includes an interactive N fertilization scheme that simulates fertilization by adding N directly to the soil mineral N pool to meet crop N demands using both synthetic fertilizer and manure application 52 , 53 . Synthetic fertilizer application was prescribed by crop type and varied spatially for each year based on the Land Use Model Intercomparison Project and land-cover change time series (Land-Use Harmonization 2 for historical rates and SSP3 for future rates) 55 , 56 . N fertilizer rates increased by 18% per decade from 2020 to 2050 in agreement with the United Kingdom’s Committee on Climate Change forecasts of future N fertilizer usage 57 , and then stabilized from 2050 to 2070. Average UK CLM5 fertilizer application rates (148 kg N ha −1 yr −1 ) are consistent with current practices 58 . Organic fertilizer was applied at a fixed rate (20 kg N ha −1 yr −1 ) throughout the simulations. CLM5 tracks N content in soil, plant and organic matter as an array of separate N pools and biogeochemical transformations, with exchange fluxes of N between these pools 52 , 53 . The model represents inorganic N transformations based on the DayCent model, which includes separate dissolved NH 4 + and NO 3 − pools, as well as environmentally controlled nitrification, denitrification and volatilization rates 59 . To model the effect of basalt addition on fluxes of N 2 O from soil, we included the updated denitrification DayCent module 14 , modified to capture the soil pH ranges in UK croplands. The possible effect of increased soil pH from basalt application increasing NH 3 volatilization and, indirectly, N 2 O emissions, was not explicitly modelled. However, the error term is likely to be small, given that it accounts for less than 5% of total agricultural N 2 O emissions 60 , 61 . Cropland CLM5 soil N emissions are within the range of estimates in UK croplands based on bottom-up inventories and other land surface models, with N 2 O fluxes showing broad similarities in terms of regional patterns and magnitude with the UK National Atmospheric Emission Inventory ( Supplementary Information ). Modelling soil N effects on ERW The inclusion of mechanistic simulation of N cycling processes coupled to ERW via 16 stoichiometric N transformations that influence the soil weathering environment represents a theoretical advance over previous modelling ( Supplementary Information ). 
The modelling accounts for 20 depths (20 soil layers) in the soil profile at each location with a monthly time step; the variables passed from CLM5 by time and depth to the 1D ERW model are given in the Supplementary Information . At each depth, we computed N transformation effects on soil water alkalinity with reaction stoichiometries that added or removed alkalinity. Together with soil CO 2 levels, this affected pore water pH and the aqueous speciation that determined mineral weathering rates. This modelling advance allowed us to mechanistically account for the impact of N fertilization of cropland (which is recognized to potentially lead to nitric acid-dominated weathering 62 , 63 at low pH with no C capture) on basalt weathering rates. Dynamic modelling at monthly time steps resolved seasonal cycles of CDR via alkalinity fluxes and soil carbonate formation/dissolution in response to future changes in atmospheric CO 2 , climate, land surface hydrology, and crop and soil processes. The effect of the N cycle on the soil acidity balance (Extended Data Fig. 3 ) was derived from N transformations associated with the production or consumption of hydrogen ions ( Supplementary Information ). We assigned a stoichiometric acidity flux ∆ H i ,N (mol H + mol −1 N) to each N flux F i ,N (g N m −3 soil s −1 ) calculated by the CLM5 code ( Supplementary Information ). The product ( F i ,N ∆ H i ,N ), with appropriate unit conversions, gives the acidity flux during the time step ∆ t (s month −1 ) for the i th reaction of the CLM5 N cycle. Their sum (equation (4) ) is, therefore, the total change in acidity ∆Acidity N due to the CLM5 N cycle: $$\Delta \mathrm{Acidity_N} = \sum_i \left( F_{i,\mathrm{N}}\, \Delta H_{i,\mathrm{N}} \right) \Delta t\, /\, 14.0067$$ (4) where 14.0067 g mol −1 is the atomic weight of N and the time step ∆ t is one month. Along with the Ca, Mg, K and Na ions released from weathering the applied minerals, ∆Acidity N contributes a negative term to the soil water alkalinity balance used to calculate the soil pH 4 . $$\mathrm{Alk}_t = \mathrm{Alk}_{t-1} + 2\left( \mathrm{Ca_{weath}} + \mathrm{Mg_{weath}} \right) + \mathrm{K_{weath}} + \mathrm{Na_{weath}} - \Delta \mathrm{Acidity_N}$$ (5) This pH value is one component that is accounted for in the rate laws for mineral dissolution and therefore influences the net alkalinity produced at each depth within the soil profile, which contributes to CDR 4 . The initial alkalinity profile in each grid cell was determined from the starting soil pH and the partial pressure of CO 2 ( \(p_{\mathrm{CO}_2}\) ) profile at steady state based on spin-up of the model with average long-term biomass production and soil organic matter decomposition that reflected the long-term land use history of a particular location. The alkalinity mass and flux balance for an adaptive time step accounted for alkalinity and acidity inputs from (1) mineral dissolution rates and secondary mineral precipitation (pedogenic carbonate), (2) biomass production and decomposition 64 and (3) biogeochemical N transformations. The soil pH profile was determined from an empirical soil pH buffering capacity 65 relating soil pH to the alkalinity at each depth.
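A literal transcription of equations (4) and (5) into code is straightforward. The sketch below is a paraphrase with hypothetical argument names; it assumes consistent molar (equivalents) units throughout and places the ∆t multiplication in the numerator of equation (4), as the stated units imply.

```python
N_MOLAR_MASS = 14.0067  # g N per mol N

def acidity_from_n_cycle(n_fluxes, delta_h, dt_seconds):
    """Equation (4): acidity change from the CLM5 N transformations.

    n_fluxes   iterable of fluxes F_i,N (g N per m^3 soil per second)
    delta_h    matching stoichiometric terms dH_i,N (mol H+ per mol N)
    dt_seconds seconds in the monthly time step
    Returns mol H+ per m^3 of soil accumulated over the step.
    """
    total = sum(f * h for f, h in zip(n_fluxes, delta_h))
    return total * dt_seconds / N_MOLAR_MASS

def update_alkalinity(alk_prev, ca, mg, k, na, d_acidity_n):
    """Equation (5): alkalinity balance; divalent cations (Ca, Mg) count
    twice, monovalent cations (K, Na) once, and the N-cycle acidity is
    subtracted."""
    return alk_prev + 2.0 * (ca + mg) + k + na - d_acidity_n
```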
The soil \(p_{\mathrm{CO}_2}\) depth profile of a grid cell was generated with the standard gas diffusion equation 66, scaled by monthly soil respiration from CLM5. At any particular location, the soil solution was in dynamic equilibrium with dissolved inorganic C species and the values of gas-phase soil and atmospheric \(p_{\mathrm{CO}_2}\). The relative change induced by weathering is the consumption of H+ and the production of HCO3−. Using this modelling framework (Extended Data Fig. 1), we analysed a baseline application rate of 40 t ha−1 yr−1 (equivalent to a <2-mm-thick layer of rock powder distributed on croplands) to UK croplands. A similar mass is moved by road in the reverse direction when grain is transported from field to market during the UK harvest 67, indicating that rural transport networks have the capacity to move basalt to the fields for ERW. Gross CDR calculations The gross CDR by ERW of crushed basalt applied to soils was calculated as the sum of two pathways: (1) the transfer of weathered base cations (Ca2+, Mg2+, Na+ and K+) from soil drainage waters to surface waters, charge balanced by the formation of HCO3− ions that are transported to the ocean (equation (6)), and (2) the formation of pedogenic carbonates (equation (7)). Pathway 1 for calcium ions:

$$\mathrm{CaSiO}_3 + 2\mathrm{CO}_2 + 3\mathrm{H}_2\mathrm{O} \rightarrow \mathrm{Ca}^{2+} + 2\mathrm{HCO}_3^- + \mathrm{H}_4\mathrm{SiO}_4$$ (6)

Pathway 2 for calcium carbonate formation:

$$\mathrm{Ca}^{2+} + 2\mathrm{HCO}_3^- \rightarrow \mathrm{CaCO}_3 + \mathrm{CO}_2 + \mathrm{H}_2\mathrm{O}$$ (7)

CDR via pathway 1 potentially sequesters two moles of CO2 from the atmosphere per mole of divalent cation. However, ocean carbonate chemistry reduces the efficiency of CO2 removal to an extent that depends on ocean temperature, salinity and the dissolved CO2 concentration of the surface ocean. We used annual ERW alkalinity flux time series (2020–2070) calculated with our 1D ERW model for S1 to S3 as inputs to GENIE (version 2.7.7) 68, 69. GENIE is an intermediate-complexity Earth system model with ocean biogeochemistry that allows computation of oceanic CDR via pathway 1. We used the same methodology as described previously 12 to simulate atmospheric CDR via the release of enhanced weathering alkalinity products into the ocean. The uncertainty for each scenario was determined by ensemble GENIE simulations with 84 different parameter sets that varied 28 parameters, each calibrated to simulate a reasonable pre-industrial and historical transient climate and carbon cycle 68, 69, 70. CDR via pathway 2 occurs if dissolved inorganic carbon derived from atmospheric CO2 precipitates as pedogenic carbonate, and sequesters 1 mol CO2 per mole of Ca2+. Costs and carbon emissions of logistical operations Mining A breakdown of mining costs (in £ t−1) of rock for the year 2010 and a representative granite mine with a daily output of 1,500 t and an annual output of 375,000 t were obtained from a comprehensive analysis of UK aggregate mining 71. Capital expenditure costs amounted to £24,395,636 over a 50-yr life cycle (£1.30 t−1 rock), whereas operating expenses (OPEX) amounted to £1,150,072 yr−1 (£3.07 t−1 rock), for a total of £4.37 t−1 rock for the year 2010.
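The quoted per-tonne mining costs follow directly from these capital and operating figures; a short arithmetic check, assuming the stated 50-yr life cycle and 375,000 t yr−1 output:

```python
# Reproducing the quoted mining costs per tonne of rock (year 2010)
capex_total   = 24_395_636.0   # £ over a 50-yr life cycle
life_years    = 50
annual_output = 375_000.0      # t rock yr-1 (1,500 t per day)
opex_annual   = 1_150_072.0    # £ yr-1

capex_per_t = capex_total / (life_years * annual_output)  # ~£1.30 t-1
opex_per_t  = opex_annual / annual_output                 # ~£3.07 t-1
print(f"CAPEX {capex_per_t:.2f} £/t + OPEX {opex_per_t:.2f} £/t "
      f"= {capex_per_t + opex_per_t:.2f} £/t")
```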
To obtain cost projections over 2020–2070, the contributions of wages, diesel fuel and electricity consumption in OPEX (35.9%, 2.5% and 20.0%, respectively) were normalized and projected for 2020–2070 using E3ME outputs of median wage, diesel prices and industrial electricity tariffs, respectively (Supplementary Information). Capital expenditure costs and the remaining OPEX (plant, buildings, equipment, tyres; in £ t−1 rock) remained constant over the period. Emissions of CO2e per tonne of rock extracted using diesel fuel and explosives were set at 4.29 kgCO2e t−1 rock (ref. 71). Emissions of CO2e per unit of electricity consumed were obtained by combining electricity requirements per tonne of rock (1.48 kWh t−1 rock) and projected life-cycle emissions (in kgCO2e kWh−1) from 2020 to 2070. Grinding Grinding cost breakdowns were obtained from ref. 18. Capital expenditure costs were set at £1.59 t−1 rock, while OPEX for plant, buildings and equipment was set at £0.97 t−1 rock. Diesel fuel and personnel costs (£0.08 t−1 rock and £0.85 t−1 rock for 2010) were projected to 2020–2070 using the methodology described above. We expressed electricity consumption per tonne of rock milled as a function of particle size, defined as p80 6. To obtain electricity costs, we multiplied electricity consumption (in kWh t−1 rock milled) by E3ME projections of the unit cost of electricity (£ kWh−1), and grinding emissions were obtained by multiplying electricity consumption by E3ME projections of electricity-production life-cycle emissions (gCO2e kWh−1). Spreading Spreading costs were set at £8.3 t−1 rock for the year 2020 by averaging costs in the United Kingdom and United States 6. Spreading costs were assigned equally to equipment, fuel/electricity and wages, with E3ME data used to provide cost projections to 2070 for the last two. A sigmoid function describing the transition to electric vehicles was obtained from E3ME, to which a 10-yr lag was added to represent delayed uptake by heavy agricultural vehicles (Supplementary Information). Spreading emissions were set at 0.003 kgCO2 t−1 rock (ref. 18). Our cost assessments assume that ERW practices are undertaken on farms as part of business-as-usual land-management practices. The pricing of external contracting of land management for rock dust application to soils is uncertain but could increase CDR prices per tCO2 by on the order of 10–15%. Fertilizers Projections of P fertilizer prices (2020–2070) for a global medium-resource scenario were obtained from ref. 72, showing an increase in global prices due to the depletion of phosphate reserves 72, 73, 74. Even though K resources are also being depleted, we kept K prices constant, as alternative technologies and the opening of new mines in the Global South might alleviate the problem 75. UK fertilizer prices for the year 2020 were used 31 as a baseline for our projections. Fertilizer savings were obtained as the product of the release (in kg) of P and K and their unit price (£ kg−1) over the time period 2020–2070. Life-cycle CO2 emissions for P and K fertilizers were calculated as average values for different time horizons using the methodologies included in the Ecoinvent database 76 (Supplementary Information). Global markets for these products were selected for this analysis to include all fertilizers coming to the United Kingdom from any region of the world.
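A minimal sketch of the fertilizer-savings term described above (mass of P and K released by weathering multiplied by unit price); the release masses and prices are illustrative placeholders, not the weathering-model output or the UK 2020 price series:

```python
# Sketch of the fertilizer-offset calculation: savings are the product of
# the P and K mass released by weathering and their unit prices. The
# release amounts and prices below are illustrative placeholders.

def fertilizer_savings(p_release_kg, k_release_kg,
                       p_price_per_kg=1.5, k_price_per_kg=0.6):
    """Return the avoided fertilizer cost (£) for one grid cell and year."""
    return p_release_kg * p_price_per_kg + k_release_kg * k_price_per_kg

# e.g. a cell releasing 12 kg P and 40 kg K in a given year
print(f"savings = £{fertilizer_savings(12.0, 40.0):.2f}")
```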
Energy requirements Electricity supply characteristics for the United Kingdom were obtained from E3ME simulations (see the Transportation section). The annual electricity supply increases from 320 TWh yr−1 in 2020 to 637 TWh yr−1 in 2070, with life-cycle emissions dropping from 177.4 gCO2 kWh−1 to −64.5 gCO2 kWh−1. The electricity mix profile shows an initial transition to onshore wind energy, followed by a marked uptake of solar and various CCS technologies. The annual cost of ERW CDR (in £ tCO2−1) was obtained from equation (8) by summing the logistical costs (in £) for all locations (loc) where rock was applied according to each scenario for the particular year (y) and dividing by their total net CDR (in tCO2). Mining (Min) and spreading (Spread) costs are functions of the year, as the application rate was the same for all locations. Grinding (Grind) costs are a function of the year and p80. Transport (Transp) costs are a function of the year and location, and consider the distance from the rock source. P and K release is a function of the year, p80 and location, as both particle size and location (climate) affect weathering rates and, subsequently, elemental release. All process costs are functions of the year due to time-varying wage, fuel, electricity and fertilizer costs.

$$\mathrm{Costs}(y,\mathrm{p80}) = \sum_{\mathrm{locations}} \frac{\mathrm{Min}(y) + \mathrm{Grind}(y,\mathrm{p80}) + \mathrm{Transp}(y,\mathrm{loc}) + \mathrm{Spread}(y) - \mathrm{P}(y,\mathrm{p80},\mathrm{loc}) - \mathrm{K}(y,\mathrm{p80},\mathrm{loc})}{\mathrm{CO_2\,Gross\,Seq}(y,\mathrm{p80},\mathrm{loc}) - \mathrm{CO_2\,Secondary\,Emissions}(y,\mathrm{p80},\mathrm{loc})}$$ (8)

Secondary emissions (in tCO2) for each location were obtained by summing the emissions of each process (tCO2 t−1 rock) in that year and multiplying by the rock application (Rock) (in t rock) (equation (9)):

$$\mathrm{CO_2\,Secondary\,Emissions}(y,\mathrm{p80},\mathrm{loc}) = \left[\mathrm{Min}(y) + \mathrm{Grind}(y,\mathrm{p80}) + \mathrm{Transp}(y,\mathrm{loc}) + \mathrm{Spread}(y) - \mathrm{P}(y,\mathrm{p80},\mathrm{loc}) - \mathrm{K}(y,\mathrm{p80},\mathrm{loc})\right] \times \mathrm{Rock}(y,\mathrm{loc})$$ (9)

An initial run determined the order of the grid cells on the basis of their weathering potential. Rock was then applied, prioritizing grid cells with the highest potential, while the addition of rock in new areas each year was constrained by the annual rock availability of each scenario.
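The allocation procedure just described can be sketched as a simple greedy loop; the data structures and numbers below are illustrative assumptions, not the model's actual implementation:

```python
# Greedy allocation sketch: grid cells are ranked once by weathering
# potential (from an initial model run), then each year rock is applied
# to the best cells until the scenario's annual supply is exhausted.
# All numbers are illustrative placeholders.

def allocate_rock(cells, annual_supply_t, rate_t=40.0):
    """cells: list of (cell_id, weathering_potential, area_ha).
    Returns {cell_id: tonnes applied} for one year."""
    applied, remaining = {}, annual_supply_t
    for cell_id, _potential, area_ha in sorted(
            cells, key=lambda c: c[1], reverse=True):
        if remaining <= 0:
            break
        demand = rate_t * area_ha          # 40 t ha-1 yr-1 baseline rate
        take = min(demand, remaining)
        applied[cell_id] = take
        remaining -= take
    return applied

cells = [("a", 0.9, 100.0), ("b", 0.7, 150.0), ("c", 0.4, 80.0)]
print(allocate_rock(cells, annual_supply_t=7000.0))
```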
Spatial analysis was undertaken with least-cost path algorithms from the ArcGIS software 79 . Wages and electricity/fuel prices and CO 2 emission factors were derived from E3ME’s 1.5 °C energy scenario 2 . We started using typical fuel/electricity consumption values for both freight road (2.82 km l −1 and 3.07 kWh km −1 ) 71 and rail (98 km l −1 ) 76 to estimate the projected transport efficiency expressed in cost/emissions of a tonne of rock dust per kilometre (t km −1 ) 80 , 81 , 82 . Transport cost distribution per tonne-kilometre was derived using generic road and rail cost models that include wages, fuel, maintenance and depreciation 83 , 84 . The UK rail freight diesel-to-electricity decarbonization transition is already underway 85 , 86 , and we used the continued projection for this transport mode. For road freight, the transport technology transition from the E3ME for electric vehicles was adopted, modified under the assumption that diesel ban policies and the availability of electric heavy goods vehicles for basalt transportation take place after 2030 87 . Energy and economic forecasts UK energy–economic modelling (2020–2070) 88 , 89 , 90 was based on an updated version of the scenario described in ref. 17 that includes carbon pricing and has responses for the power sector (output and efficiency) consistent with government policy 91 ( Supplementary Information ). Total renewable energy sources over time were similar but with solar instead of 40 GW of offshore wind power. The simulations considered the phase-out of conventional vehicles by 2030, in line with government policy, and a consistent shift in aviation and freight towards biofuels, and electrified rail, as well as increased efficiency in buildings and the use of heat pumps. These simulations provide outputs for the United Kingdom for 2020 to 2070 of CO 2 emissions per unit energy, total energy mix and output, labour costs, electricity costs, fuel costs, and road and rail transport costs, which were inputs for calculating the costs of ERW CDR and secondary emissions during the grinding of rocks (Extended Data Fig. 2 ). Data availability Soil pH data were obtained from and . The high-resolution monthly fields of soil temperature and precipitation data were obtained from . Additional environmental and climate drivers were acquired through simulations of CLM5 available at . The UK crop cover map was obtained from , annual time series of crop yields from and UK fertilizer usage data from . UK national border data were obtained from . The GLiM v1.0 dataset used to identify rock sources is available at . Datasets with 5 min resolution on global crop production and yield area to identify cropland are available at . Datasets on road and rail vector data used for transport network analysis are available at . Datasets on LCA impact factors used for K and P fertilizers are available within Ecoinvent 3.6 at . Source data are provided with this paper. Code availability The weathering model was developed in MATLAB v.R2019a, and data processing was conducted in both MATLAB v.R2019a and Python v.3.7. MATLAB and Python codes developed for this study belong to the Leverhulme Centre for Climate Change Mitigation. These codes, and the modified codes in CLM5 developed in this study, are available from the corresponding author upon reasonable request.
Adding rock dust to UK agricultural soils could deliver up to 45% of the atmospheric carbon dioxide removal needed to reach net zero, according to a major new study led by scientists at the University of Sheffield. The study, led by the Leverhulme Centre for Climate Change Mitigation at the University, provides the first detailed analysis of the potential and costs of greenhouse gas removal by enhanced weathering in the UK over the next 50 years. The authors show this technique could make a major, overlooked contribution to the UK's requirement for greenhouse gas removal in the coming decades, with a removal potential of 6–30 million tons of carbon dioxide annually by 2050. This represents up to 45% of the atmospheric carbon removal required nationally to meet net-zero greenhouse gas emissions alongside emissions reductions. Deployment could be straightforward because the approach uses existing infrastructure and has costs of carbon removal lower than other carbon dioxide removal (CDR) strategies, such as direct air capture with carbon capture and storage, and bioenergy crops with carbon capture and storage. A clear advantage of this approach to CDR is the potential to deliver major wins for agriculture in terms of lowering emissions of nitrous oxide, reversing soil acidification that limits yields and reducing demands for imported fertilizers. The advantages of reducing reliance on imported food and fertilizers have been highlighted by the war in Ukraine, which has caused the price of food and fertilizers to spike worldwide as exports of both are interrupted. The authors of the study highlight that societal acceptance is required, from national politics through to local community and farm scales. Mining operations to produce the basalt rock dust will generate additional employment and could contribute to the UK government's leveling up agenda; however, this will need to be done in ways that are both fair and respectful of local community concerns. This new study provides much-needed detail of what enhanced rock weathering as a carbon dioxide removal strategy could deliver for the UK's net-zero commitment by 2050. The Committee on Climate Change, which provides independent advice to the government on climate change and carbon budgets, overlooked enhanced weathering in its recent net-zero report because the technique required further research. The new study now indicates enhanced weathering is comparable to other options on the table and has considerable co-benefits for UK food production and soil health. Professor David Beerling, Director of the Leverhulme Centre for Climate Change Mitigation at the University of Sheffield and senior author of the study, says that their "analysis highlights the potential of UK agriculture to deliver substantial carbon drawdown by transitioning to managing arable farms with rock dust, with added benefits for soil health and food security." Dr. Euripides Kantzas of the Leverhulme Centre for Climate Change Mitigation at the University of Sheffield and lead author, says that "by quantifying the carbon removal potential and co-benefits of amending crops with crushed rock in the UK, we provide a blueprint for deploying enhanced rock weathering on a national level, adding to the toolbox of solutions for carbon-neutral economies." Professor Nick Pidgeon, a partner in the study and Director of the Understanding Risk Group at Cardiff University, says that "meeting our net zero targets will need widespread changes to the way UK agriculture and land is managed.
For this transformation to succeed we will need to fully engage rural communities and farmers in this important journey." The research was published in Nature Geoscience.
10.1038/s41561-022-00925-2
Biology
Predators and hidey-holes are good for reef fish populations
Enie Hensel et al, Effects of predator presence and habitat complexity on reef fish communities in The Bahamas, Marine Biology (2019). DOI: 10.1007/s00227-019-3568-3 Journal information: Marine Biology
http://dx.doi.org/10.1007/s00227-019-3568-3
https://phys.org/news/2019-10-predators-hidey-holes-good-reef-fish.html
Abstract Reef ecosystems are highly diverse habitats that harbor many ecologically and economically significant species. Yet, globally they are under threat from multiple stressors, including overexploitation of predatory fishes and habitat degradation. While these two human-driven activities often occur concomitantly, they are typically studied independently. Using a factorial design, we examined effects of predator presence, habitat complexity, and their interaction on patch reef fish communities in a nearshore ecosystem on Great Abaco Island, The Bahamas. We manipulated the presence of Nassau groupers (Epinephelus striatus), a reef predator that is critically endangered largely due to overharvest, and varied patch reef structure (cinder blocks with and without PVC) to reflect high or low complexity, for four treatments in total. To assess changes in fish community composition we measured fish abundances, species richness, and evenness. We found that predator presence and high reef complexity had an additive, positive effect on total fish abundance: fish abundance increased by ~250% and ~300% compared to the predator-absent and low-complexity reef treatments, respectively. Species richness increased with reef complexity. Variation in community structure was explained by the interaction between factors, largely driven by juvenile Tomtate grunt (Haemulon aurolineatum) abundances. Specifically, Tomtate grunt abundance was significantly higher on high complexity reefs with predators present, but on low complexity reefs predator presence had no effect on Tomtate grunt abundance. Our data suggest that both fisheries management of large-bodied piscivores and reef habitat restoration are critical to the management and conservation of reef ecosystem functions and services. Introduction Coral-dominated reefs are highly productive ecosystems that can harbor large and diverse fish communities, but are threatened worldwide by myriad stressors (Hughes and Connell 1999; Hoegh-Guldberg et al. 2007). Two primary stressors are overfishing at higher trophic levels and habitat degradation (Dulvy et al. 2004; Lee 2006). These stressors can fundamentally change processes that act from both the top-down (e.g., trophic cascades) (Baum and Worm 2009; Allgeier et al. 2016; Valdivia et al. 2017) and bottom-up (e.g., nutrient cycling regimes and provision of refugia) of interaction networks (Beukers and Jones 1998; Lee 2006; Smith et al. 2006; Graham and Nash 2013). Overexploitation of predators and the loss of habitat complexity are typically concomitant. Yet because these factors are most often studied independently, our understanding of how their simultaneous effects combine or interact to alter reef communities remains limited (but see Wilson et al. 2008). Predators and habitat structural complexity play a central role in determining post-settlement coral reef fish communities (Hixon and Carr 1997; Steele 1999). Direct effects of predators can alter fish communities, including species richness (Freestone et al. 2011) and overall community composition (Almany 2003). Predators also have indirect effects on their surrounding communities; however, the outcome is often context-dependent for both predator and prey identity, making the effect difficult to predict or generalize (Almany 2004a; Stallings 2008; Chamberlain et al. 2014).
Additionally, predator impacts on reef communities can be mediated by the structural complexity of reefs (Wilson et al. 2008). Specifically, habitat complexity can decrease predators' direct effects by reducing prey encounter rates (Swisher et al. 1998; Almany 2004b; Warfe and Barmuta 2004). In contrast, structural complexity can increase predators' indirect effects due to predators and their prey residing in close proximity to one another (Grabowski et al. 2008). To date, most studies examining predator effects in marine systems have tended to use simplified interaction webs within mesocosms, focused on small-bodied predator species, or a combination of both (Steele 1999; Johnson 2006; Grabowski et al. 2008). However, fishing typically targets large-bodied predators (e.g., sharks, tunas, and groupers) that are more difficult to study. Further complicating this scenario is the challenge of understanding how losses of predators may interact with concomitant changes in habitat complexity that result from major shifts in foundation species (e.g., from coral to sponge or macroalgae) that can render reefs flat, reducing available refugia for fauna. This potential interaction has important implications for management and conservation efforts seeking to mitigate human impacts on coral reefs. Here we ask: how do predator presence and reef complexity affect reef fish communities in terms of total abundance, species richness, evenness, and overall community composition? We conducted an in situ experiment designed to examine effects of large predator presence, habitat complexity, and their interaction on artificial patch reef communities in a nearshore ecosystem on Great Abaco Island, The Bahamas. Nearshore patch reef habitats are ideal for this experiment because they are isolated, complex, vertical habitats surrounded by hard or soft low-relief substrate and, being located in nearshore habitats, they are strongly influenced by local stressors (Stallings 2009). Artificial patch reefs are particularly ideal because they are relatively easy to manipulate and subsequently study for whole-community effects (Carr and Hixon 1997). In a 2 × 2 factorial design, we manipulated the presence of a locally abundant reef predator, Nassau grouper (Epinephelus striatus), on patch reefs of high and low complexity. We used Nassau grouper as our predator species because they are an important fishery species that has experienced drastic population declines throughout the Caribbean from overexploitation (Dahlgren et al. 2016; Sherman et al. 2018). We hypothesized that reefs with Nassau groupers present would have more fishes compared to those without Nassau groupers, due to strong indirect predator effects on smaller-bodied, mesopredator species. Additionally, we hypothesized that Nassau grouper presence would alter fish community composition, i.e., the relative abundance of the constituent species, due to a combination of their direct and indirect predator effects. For habitat complexity, we hypothesized that complex reefs would have more fishes as well as more species present than non-complex reefs, due to an increase in refugia hole availability and diversity in refugia morphology (shape). Methods Our study occurred in a back-reef system in the Sea of Abaco along the eastern shoreline of Marsh Harbour on Great Abaco Island, The Bahamas, from May to August 2014 (Fig. 1). Water visibility is typically ≥ 5 m and low tide depth ranges from 2 to 5 m.
The nearshore seascape is a mosaic of hard-bottom, sand, algal beds, seagrass meadows dominated by turtle grass (Thalassia testudinum), and scattered coral and artificial patch reefs. Coral patch reefs are characterized by low relief (≤ 2 m height) and are typically covered with some variation of encrusting or soft coral (e.g., Orbicella spp. or gorgonians), exposed limestone, sponges, and macroalgae (e.g., Halimeda spp. and Sargassum spp.). Artificial patch reefs are broadly defined as any human-introduced structure that is submerged on the benthos, usually introduced to mimic the function of patch reefs by providing structural complexity for biota to use as refugia and foraging grounds (Seaman 2000). Our experimental units were cinder block artificial reefs, which have been used widely over the past four decades to study reef fish assemblages (e.g., Hixon and Beets 1989; Carr and Hixon 1997). In April 2014, we constructed 16 artificial reefs (~1.4 m³), each using 35 cinder blocks (15 × 20 × 40 cm), on mixed sand/seagrass substrate. For all 16 artificial reefs (hereafter reefs), low tide depth was ~3.0 m, distance to shoreline ranged between 250 and 500 m, and ambient water temperatures and currents were similar amongst all reefs. Each reef was located > 200 m from any other artificial or natural patch reef; the location of experimental reefs followed Yeager et al. (2014), where reef isolation was set at > 200 m to minimize among-reef movements of more transient fish species (Carr and Hixon 1997; Allgeier et al. 2018). We measured seagrass density (within a 2 m radius of the reef) at the beginning of the experiment as a potential covariate because seagrass density adjacent to reefs has been shown to alter certain grunt species' reef densities (Haemulids; Yeager et al. 2011). Fig. 1 Sixteen artificial reefs constructed in the Sea of Abaco, The Bahamas, in April 2014. They were constructed on mixed sand and seagrass benthic habitat > 80 m away from hard bottom substrate and > 200 m from natural or artificial patch reefs. At the bottom are representative images of the four treatments, from left to right: predator present × high reef complexity (PH), predator absent × high reef complexity (AH), predator present × low reef complexity (PL), and predator absent × low reef complexity (AL). To test how predator presence and structural complexity affect reef fish communities, we randomly assigned each of the 16 reefs to one of four treatments: predator present × high complexity (PH), predator absent × high complexity (AH), predator present × low complexity (PL), or predator absent × low complexity (AL; Fig. 1). For predator treatments we used Nassau groupers, which have a complex life cycle undergoing a series of ontogenetic shifts in both habitat and diet (Eggleston et al. 1998; Dahlgren et al. 2006). When they are ~3 months old, individuals begin to migrate from nearshore macroalgae beds to hard bottom or patch reef habitat. At this stage, individuals show strong site fidelity to their home activity area, i.e., often returning to and reusing the same patch reef habitat, as well as a limited home range of ~50 m from their home reef (Eggleston et al. 1998 and Dahlgren unpublished data). Due to their high site fidelity, grouper additions are difficult, and thus removals are the optimal method to manipulate their presence and absence (Stallings personal communication).
Prior to the start of the experiment in May 2014, all 16 experimental reefs had at least two Nassau groupers present, ranging from 16 to 33 cm total length (TL). For predator absent treatments, we removed Nassau groupers with either trap or hand nets and relocated each individual to a reef habitat > 3 km from our study site to reduce the chance of their return (Stallings personal communication). We tagged each individual before release, and we did not observe the return of any individuals during the duration of our experiment. We also removed any non-native, invasive lionfish (Pterois volitans) throughout the study period. To establish high complexity reef treatments, we installed a PVC tree structure mimicking historically important reef-building corals within our study system, i.e., Acropora cervicornis, A. palmata, and A. prolifera. Low complexity reefs had no PVC structure, and the cinder block holes were filled with cement, leaving only three large gaps for potential refugia (Fig. 1). This low complexity reef architecture mimics a shift from branching, reef-building corals to boulder-like reef heads dominated by encrusting corals and macroalgae. To determine if fish community structure differed among treatments, we monitored all reefs weekly with Underwater Visual Census (UVC) surveys. We also deployed GoPro® cameras three times at each reef during the study to monitor and verify predator treatments. Video footage was not used to record fish species present or total fish abundance due to several logistical issues, such as limited visibility for cryptic species that reside within the reefs and an insufficient number of cameras to record all 16 reefs within 24–48 h. The UVC surveys entailed monitoring each reef using a mask and snorkel for 10 min and recording all fishes within 1 m of the reef at the species level. Once all the actively swimming fish were recorded, a flashlight was used to search every reef hole twice for less active or cryptic species. All 16 reefs were surveyed within each survey date. During the last 2 weeks of the experiment, visual estimates of the TL of each fish were recorded to the nearest centimeter. We used UVC surveys to quantify species richness and evenness, the latter measured by the inverse Simpson's index \(D = 1/\sum \left( n/N \right)^{2}\), where n is the abundance of each species per survey and N is the total number of individuals per survey (Simpson 1949). For all analyses, we used UVC surveys from 60 days after reef construction in order to capture the reef fish assemblage at the oldest reef age possible; we were not able to monitor predator presence after 60 days to validate predator treatments. We did not include Nassau groupers in fish community response variables. We used two-way analysis of covariance (ANCOVA) to test for independent and interactive treatment effects on total fish abundance, species richness, and the inverse Simpson's index of diversity (Simpson 1949) at the species level. We log-transformed fish abundance prior to analysis to meet homoscedasticity assumptions, and included seagrass density in each model as a potential covariate (but removed this variable when not significant, for reasons of parsimony). To analyze fish community structure across treatments, i.e., the relative abundance of the constituent species, we used non-metric multidimensional scaling (nMDS) and permutational multivariate analysis of variance (PERMANOVA) on square-root transformed fish abundances at the species level.
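As a minimal illustration of the evenness metric above (transcribed here in Python rather than the R used for the published analyses), the inverse Simpson's index can be computed from one survey's counts; the abundances are invented:

```python
import numpy as np

# Sketch of the diversity metrics used above, computed from one UVC
# survey's species abundance counts (illustrative numbers). The inverse
# Simpson index is 1 / sum((n/N)^2) with N the total individual count.

counts = np.array([55, 20, 8, 5, 3, 1])      # individuals per species
N = counts.sum()
p = counts / N                               # relative abundances
richness = (counts > 0).sum()
inverse_simpson = 1.0 / np.sum(p ** 2)
print(f"richness = {richness}, inverse Simpson D = {inverse_simpson:.2f}")
```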
We then conducted similarity percentage (SIMPER) analyses using the Bray–Curtis similarity coefficient, limited to the species contributing to the top 70% of dissimilarity between treatments' reef fish assemblages (Bray and Curtis 1957). Herein, we refer to the results of this nMDS analysis of reef fish assemblages as community structure. We conducted all analyses using program R (Team RC 2017). Results For UVC surveys at 60 days after reef construction, we documented a total of 2461 fish from 40 species and 19 families. Wrasses (Labridae), parrotfish (Scaridae), damselfish (Pomacentridae), and grunts (Haemulidae) were present on all 16 reefs (see supplemental material for a complete species list). Community abundance and biodiversity Predator presence and reef complexity had significant positive effects on total fish abundance, largely due to changes in Tomtate grunt (Haemulon aurolineatum) abundance (see supplemental material), but their interaction was not significant (Table 1). The highest fish abundance was found in predator present × high complexity (PH) treatments, with an average of six times more fish than predator absent × low complexity (AL) treatments (Figs. 2, 3). We also tested for the same effects on total fish biomass; trends followed those of total fish abundances (see supplemental material). Species richness was only significantly affected by reef complexity, with high complexity reefs averaging ~4 more fish species compared to low complexity reefs (Fig. 2; Table 1). The inverse Simpson's index of diversity at the species level was not affected by either factor or their interaction [ANOVA global model F(3, 12) = 1.27, P = 0.33; Table 1]. Table 1 Results of two-way ANOVA and PERMANOVA for reef fish communities based on fish species' abundances at 60 days after reef construction. Fig. 2 Effects of Nassau grouper presence (present, P, or absent, A) and reef structural complexity (high, H, or low, L) on end-of-experiment (60 days) reef fish total abundance and species richness (n = 16). Fig. 3 Photographs demonstrating an example of the difference in total fish abundance between the two extreme reef treatments at 60 days after reef construction: predators present × high complexity (left) and predators absent × low complexity (right). Community composition Fish community structure, i.e., the relative abundance of the constituent species, differed between high and low complexity reefs (PERMANOVA R = 0.28, P < 0.01; Table 1; Fig. 4). For high complexity treatments, reefs with and without predators differed from each other (PERMANOVA permutations = 999, R = 0.13, P = 0.03; Table 1; Fig. 4). Based on Bray–Curtis similarity indices, differences in fish communities among reef treatments were largely due to changes in Tomtate grunt abundances. Tomtate abundances differed by 26.9–72.9% between treatments (SIMPER). PH reefs versus AL reefs had the largest disparity in Tomtate grunt abundances, with predator present × high complexity reefs having on average 15× more Tomtate grunts than predator absent × low complexity reefs (Fig. 5). To help explain this pattern we also conducted a one-way ANOVA to test the effect of predators on the second most abundant species, White grunts (Haemulon plumieri), a potential competitor with Tomtate grunts (one-way ANOVA P = 0.18; Fig. 6). Fig. 4 Non-metric multidimensional scaling (nMDS) plot of fish community structure for the four reef treatments (n = 16).
Reef fish communities were analyzed using reef fish abundances at day 60 of the experimental treatments. "P" and "A" represent Nassau groupers present or absent, and "H" and "L" represent high and low reef complexity treatments. Each colored point represents one reef. Fig. 5 Reef treatment effects on end-of-experiment (60 days) Tomtate grunt abundance (n = 16, two-way ANOVA P = 0.03). Letters above each point indicate statistical differences between treatments; treatments that do not share the same letter are significantly different from one another. Reef treatments included predator present × high reef complexity (PH), predator absent × high reef complexity (AH), predator present × low reef complexity (PL), and predator absent × low reef complexity (AL). Fig. 6 Average White grunt abundances for reef treatments at end-of-experiment (60 days). White grunt abundance comprised individuals estimated to be ≤ 5 cm in total length during UVC surveys, to allow comparison with similarly sized Tomtate grunts. A one-way ANOVA was conducted to compare White grunt abundances on predator present × high complexity reefs (PH) versus predator absent × high complexity reefs (AH; n = 8, P = 0.18). Discussion Overharvest of fishes and habitat degradation are two main anthropogenic threats to coastal and nearshore ecosystems (Lotze et al. 2006). We manipulated Nassau grouper presence and artificial patch reef complexity to simulate how simultaneous overexploitation of large-bodied predator species and reef degradation alter coral reef fish communities. We found predator presence and high habitat complexity to have a positive, additive effect on total reef fish abundance (Figs. 2, 3). Comparing our two most extreme treatments, which simulated healthy versus degraded reefs, total reef fish abundance differed by 300% (Figs. 2, 3). High reef complexity was positively associated with species richness as well as with the abundance of almost all 40 fish species recorded, except three species for which complexity had no effect (Fig. 2). The effect of predator presence on fish abundance was dependent on fish species identity and reef type (Fig. 5). Overall, our study suggests that both top-down and bottom-up changes (i.e., predators and habitat architecture) to interaction networks can have far-reaching effects on entire fish communities and, by extension, coral ecosystems. Further, rather than isolating top-down or bottom-up impacts, it is valuable to consider these effects in tandem. Reefs on which predators, in this case Nassau groupers, were present tended to have a higher total abundance of fish (Fig. 2). This result may be counterintuitive because Nassau groupers at this life stage are known to consume fishes (Eggleston et al. 1998). Our result is also in conflict with studies focused on groupers in fringing or barrier reef habitat that have typically found groupers to decrease prey abundance (Hixon and Carr 1997; Almany 2004a). There are a few possible explanations for this observation. First, Nassau groupers can initiate trophic cascades in which their presence has an indirect, positive effect on smaller organisms. For example, Stallings (2008) showed that Nassau groupers present on reefs reduced the movement of two smaller-bodied grouper species, which indirectly increased reef fish recruitment.
Such a behaviorally mediated trophic cascade could be occurring in our study system because, even though we did not observe any small-bodied grouper species, we did observe other residential piscivores such as moray eels (Muraenidae) and transient predators such as jacks (Carangidae), albeit at low abundances. Another explanation might lie in the unique life history characteristics of Nassau groupers. For instance, Nassau groupers are known to make nocturnal hunting migrations to other habitats directly adjacent to their home reefs. Therefore, the diurnal fish assemblages found on patch reefs may not be especially susceptible to predation from Nassau groupers residing on the same reef (Eggleston 1996; Sadovy and Eklund 1999). A third possibility is that Nassau groupers may select particular prey species and, in doing so, release those species' competitors; we expand on this idea below. Our findings with respect to habitat complexity reinforce previous findings that reef structural complexity affects fish communities (Table 1 and Fig. 2; Hixon and Beets 1993; Almany 2004a; Graham and Nash 2013). Reef complexity and heterogeneity in the morphology of a reef's structure have been shown to be positively associated with fish abundance and species diversity (Hackradt et al. 2011; Kerry and Bellwood 2011), likely due to a combination of reduced competition for refugia and the provision of varied shapes and sizes of refugia to match fishes' morphologies. Throughout the study, we observed different fishes consistently using different parts of the reefs, which may to some degree reflect habitat niche partitioning. For example, grunt species were frequently observed aggregating within the high-complexity reefs' PVC structures that mimicked the architecture of Acropora spp. (Lirman 1999), while squirrelfish and soldierfish (Holocentridae) were often found within cinder block holes. Therefore, high complexity reef treatments likely reduced competition for refugia by providing space for many individuals and species to cohabitate. Our results also show that even with predators present, high structural complexity seems to be an important factor mediating overall fish community composition (Fig. 4). A plausible explanation could be that our high complexity treatments decreased the consumption rate of Nassau groupers or other piscivores through the provision of smaller-sized refugia that individual piscivores could not enter (Hixon and Menge 1991; Almany 2004b). We did not observe an interaction between predator presence and reef complexity for total fish abundance, but did observe a clear interaction when comparing fish community structure, i.e., the relative abundance of the constituent species (Table 1). Our nMDS plot shows a separation between fish communities on high complexity reefs with and without predators present (Fig. 4). Results from SIMPER analyses show that differences in Tomtate grunt abundances, the most abundant species on all experimental reefs, largely explain the treatment effect on reef fish community composition (Table 1 and supplemental material). For high complexity reefs, there were more Tomtate grunts when Nassau groupers were present, while on low complexity reefs, Nassau grouper presence had no effect on Tomtate grunt abundances (Fig. 5).
A plausible explanation for this trend is that on complex reefs the presence of Nassau groupers may have altered the outcome of interspecific competition between Tomtate grunts and other fish species that utilize similar resources but are inferior at finding refugia from groupers (Persson 1991). Examining high complexity reefs only, we compared average abundances of similarly sized (≤ 5 cm TL) Tomtate and White grunts (Haemulon plumieri), the latter being the second most abundant species on all experimental reefs. Although the difference was statistically insignificant, when Nassau groupers were present on reefs there were fewer White grunts than when groupers were absent (one-way ANOVA P = 0.18; Fig. 6). Thus, Tomtate grunts may have been superior to White grunts at accessing refugia when Nassau groupers were present. Lastly, we did not study the direct mechanism for predator effects on Tomtate grunt abundances, and another potential explanation could be density-independent factors. For example, pelagic fish larval settlement is known to be influenced by a reef's soundscape, and both Nassau grouper and grunt species are well known to be vocal animals; they could therefore have affected reef fish settlement patterns (Hazlett and Winn 1962; Freeman and Freeman 2016). The high abundance of Tomtate grunts on complex reefs with groupers present could have effects on other facilitative species interactions (Meyer and Schultz 1985). For example, Huntington et al. (2017) suggested that large fish aggregations on high complexity reefs provide sufficient consumer-mediated nutrients to facilitate coral growth and survivorship, which in turn can increase reef complexity over time and feed back to support larger fish aggregations. In general, grunt species form dense, diurnal aggregations on reefs, and make nocturnal migrations to nearby seagrass and mangrove habitats to hunt (Meyer et al. 1983). Because they often make up a large percentage of the biomass found on reefs and make nightly migrations, they are thought to be important transporters of critical nutrients for primary producers, including the symbiotic algae (zooxanthellae) that reside in reef-building corals (Allgeier et al. 2017). In light of our results showing that grunt abundances are sensitive to predator presence, we argue that current impact assessments could be underestimating the ecological impact of intense fishing pressure on large-bodied reef predators like Nassau grouper. Our study is one of few to examine how fisheries-targeted predator removal, declining reef habitat complexity, and their interaction affect fish community assemblage using in situ manipulations. Currently, most coral reef ecosystems throughout the world are faced with multiple stressors and, consequently, it is important to understand not only how species interactions may change, but also how these changes may scale up to alter community composition and ecosystem function. In the context of fisheries management and coral reef restoration and conservation, we have shown how the removal of large-bodied piscivores and the decrease in reef complexity can alter reef fish densities and species richness. We suggest that in order to maintain biodiverse coral reef communities and preserve ecosystem processes, management should focus on both the conservation of large-bodied piscivores and the restoration of reef habitat complexity, through either reintroducing live coral or introducing artificial structures mimicking lost coral morphology.
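To make the Bray–Curtis computations underlying the SIMPER analysis concrete, a minimal sketch (in Python rather than the R used for the published analyses) of the dissimilarity between two square-root transformed community vectors follows; the species counts are invented:

```python
import numpy as np

# Bray-Curtis dissimilarity between two reefs' fish communities, on
# square-root transformed abundances as in the PERMANOVA/SIMPER analyses.
# Species counts are invented for illustration.

reef_ph = np.sqrt(np.array([150, 30, 12, 6, 2]))   # e.g. a PH reef
reef_al = np.sqrt(np.array([10, 12, 5, 4, 1]))     # e.g. an AL reef

bray_curtis = (np.abs(reef_ph - reef_al).sum()
               / (reef_ph.sum() + reef_al.sum()))
print(f"Bray-Curtis dissimilarity = {bray_curtis:.2f}")
```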
New research highlights two factors that play a critical role in supporting reef fish populations and—ultimately—creating conditions that are more favorable for the growth of both coral reefs and seagrass. "Previous work has shown mixed results on whether the presence of large predator species benefits reef fish populations, but we found that the presence of Nassau grouper (Epinephelus striatus) had an overall positive effect on fish abundance," says Enie Hensel, a former Ph.D. student at North Carolina State University and lead author of a paper on the work. "We also found that habitat complexity benefits both fish abundance and species richness, likely because it gives fish a larger variety of places to shelter." This is consistent with previous work. "One of the surprises here was that the effect of predator presence on fish abundance was comparable to the effect of habitat complexity," Hensel says. To better understand the effect of these variables, researchers constructed 16 artificial "patch" reefs in shallow waters off the coast of Great Abaco Island in The Bahamas. Eight of the reefs consisted of cement-filled cinder blocks, mimicking degraded reefs with limited habitat complexity. The remaining eight reefs consisted of unfilled cinder blocks and branching pipe structures, mimicking the more complex physical environment of healthier reefs. Once in place, the researchers waited for groupers to move in and claim the new reef territory. The groupers were large juveniles, ranging in size from 16-33 centimeters. The researchers then removed the groupers from four of the degraded reef sites and from four of the complex reef sites. Groupers that were removed were relocated to distant reefs. Researchers monitored the sites for 60 days to ensure that the grouper-free reefs remained free of groupers. At the end of the 60 days, the researchers assessed the total number of fish at each reef site, as well as the total number of fish species at each site. The differences were significant. Simple artificial reef structures, like that on the left, did little to support fish populations. Complex structures, like the one on the right, helped to support larger communities of fish. Credit: Enie Hensel Fish abundance, or the total number of fish, was highest at sites that had both a resident grouper and complex habitat. Abundance at these sites ranged from 275 fish to more than 500—which is remarkable given that each reef was less than a meter long in any direction. By comparison, sites that had simple structures and no grouper had fewer than 50 fish on average. Simple structures with predators had around 75 fish, while complex sites without grouper had around 100. "We think the presence of the grouper drives away other predators, which benefits overall fish abundance," Hensel says. "And a complex habitat offers niches of various sizes and shapes, which can shelter more and different kinds of fish than a degraded, simple habitat." The presence of grouper had little or no effect on species richness, or the number of different species present at each site. However, habitat complexity made a significant difference. Complex sites had 11-13 species, while degraded sites had around seven. "We found that the sites with complex habitats and the presence of predators had fish populations that were actually larger than what we see at surrounding, similar-sized natural reefs," Hensel says. "That's because the natural reefs in the area are all degraded due to a variety of stressors. 
"We also found that the presence of grouper on complex reefs led to a significant jump in the population of Tomtate grunts (Haemulon aurolineatum)," Hensel says. "That's good news, because Tomtates are a species that provides a lot of ecosystem services, which would be good for creating conditions that are more amenable to both coral reef growth and seagrass growth. "Currently, my colleagues and I are building from these findings in two directions. We're measuring long-term community and ecosystem level responses to coral restoration or the reintroduction of structurally complex habitat; and we are also measuring long-term biological and physiological responses of fishes residing on restored reefs. For the latter, Haley Gambill, an undergraduate at NC State, is measuring changes in the age and growth of grunts. "It's also worth noting that this is an area that was hit hard by Hurricane Dorian. Because we've done so much reef population work in that area, I'm hoping to return to do some work that can help us understand how extreme weather events can affect these ecosystems."
10.1007/s00227-019-3568-3
Physics
The spin state story: Observation of the quantum spin liquid state in novel material
Masayoshi Fujihala et al. Gapless spin liquid in a square-kagome lattice antiferromagnet, Nature Communications (2020). DOI: 10.1038/s41467-020-17235-z Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-020-17235-z
https://phys.org/news/2020-07-state-story-quantum-liquid-material.html
Abstract Observation of a quantum spin liquid (QSL) state is one of the most important goals in condensed-matter physics, as well as the development of new spintronic devices that support next-generation industries. The QSL in two dimensional quantum spin systems is expected to be due to geometrical magnetic frustration, and thus a kagome-based lattice is the most probable playground for QSL. Here, we report the first experimental results of the QSL state on a square-kagome quantum antiferromagnet, KCu 6 AlBiO 4 (SO 4 ) 5 Cl. Comprehensive experimental studies via magnetic susceptibility, magnetisation, heat capacity, muon spin relaxation ( μ SR), and inelastic neutron scattering (INS) measurements reveal the formation of a gapless QSL at very low temperatures close to the ground state. The QSL behavior cannot be explained fully by a frustrated Heisenberg model with nearest-neighbor exchange interactions, providing a theoretical challenge to unveil the nature of the QSL state. Introduction Magnetic phases of low-dimensional magnets have been studied both theoretically and experimentally in the last half century. Intensive studies of one-dimensional (1D) spin systems have successfully captured the exotic quantum states, such as the Tomonaga–Luttinger spin-liquid state 1 and the Haldane state 2 . Recent progress in synthesising ideal 1D magnets has evolved this research field 3 . On the other hand, in 2D spin systems, the spin-1/2 kagome antiferromagnet is an excellent choice for searching for the QSL state induced by geometrical frustration 4 . A possible compound for QSL in the kagome antiferromagnets was herbertsmithite, which has the Cu 2+ layers with ideal kagome geometry sandwiched by nonmagnetic Zn 2+ layers 5 . To date, no long-range order has been found at any temperature, and a continuum of spin excitations was observed by INS experiments. However, the low-energy magnetic excitation is still unclear as seen in a controversy on gapless 6 or gapped 7 excitation. This is related to the fact that herbertsmithite is obtained by selectively replacing magnetic Cu 2+ ions with nonmagnetic Zn 2+ ions on the triangular-lattice planes of its parent compound clinoatacamite 8 , Cu 2 (OH) 3 Cl. This replacement inevitably causes site mixing 9 . Other materials with the kagome lattice exhibit long-range magnetic or valence-bond crystal (VBC) orders caused by lattice distortions, the DM interaction and further neighbour interactions 10 , 11 , 12 , 13 , 14 . The lack of a suitable model material exhibiting the QSL hinders observations of the QSL state in the 2D spin-1/2 systems. Another highly frustrated 2D quantum spin system expected to be a QSL state is a compound with the square-kagome lattice (SKL). The SKL was introduced by Siddharthan et al. 15 . It has the same coordination number as the kagome lattice ( z = 4), but with a composition of two inequivalent sublattices in contrast to the kagome lattice. Richter et al. reported that the ground state of the spin-1/2 SKL with three equivalent exchange interactions (the case of J 1 = J 2 = J 3 and J X = 0 in Fig. 1 c) is similar to that of the kagome lattice 16 . The ground state of the spin-1/2 J 1 – J 2 SKL antiferromagnet (the case of J 2 = J 3 and J X = 0 in Fig. 1 c) was calculated by Rousochatzakis et al. 17 . It has been predicted to be a crossed-dimer VBC state and a square pinwheels VBC state, depending on J 2 / J 1 . 
Moreover, there is a possibility that QSL ground states are realised in the SKL with three nonequivalent exchange interactions (the case of J_X = 0 in Fig. 1c), which lead to the melting of these VBC states 18. Very recently, it has also been predicted to host a topological nematic spin-liquid state 19. In a magnetic field, the existence of magnetisation plateaus at M/M_sat = 1/3 and 2/3 has been theoretically clarified 16, 17, 18, 20, where M_sat is the saturation magnetisation. These plateau phases exhibit VBC, up–up–down, and alternately trimerized states. In the high-magnetic-field and low-temperature regime, a magnetic-field-driven Berezinskii–Kosterlitz–Thouless phase transition exists 21. However, the lack of a model compound for the SKL system has obstructed a deeper understanding of its spin state. Motivated by the present status of studies of the SKL system, we searched for compounds with an SKL of Cu2+ spins, and successfully synthesised the first SKL antiferromagnet, KCu6AlBiO4(SO4)5Cl. Here, we use thermodynamic, muon spin relaxation and neutron-scattering experiments on powder samples of KCu6AlBiO4(SO4)5Cl to demonstrate the absence of magnetic ordering and the presence of a gapless continuum of spin excitations. Fig. 1: Spin-1/2 J1–J2–J3 square-kagome lattice in KCu6AlBiO4(SO4)5Cl. a Crystal structure of KCu6AlBiO4(SO4)5Cl featuring a large interlayer spacing. b Arrangement of the Cu2+ orbitals in the SKL. The \({d}_{{x}^{2}-{y}^{2}}\) orbitals carrying spin-1/2 are depicted on the Cu sites. c Square-kagome lattice of KCu6AlBiO4(SO4)5Cl consisting of Cu2+ ions with nearest-neighbour exchange couplings J1, J2, J3 and next-nearest-neighbour exchange coupling J_X. Results Crystal structure The synthesis of KCu6AlBiO4(SO4)5Cl was conceived following the identification of the naturally occurring mineral atlasovite, KCu6FeBiO4(SO4)5Cl 22. The space group and structural parameters of KCu6AlBiO4(SO4)5Cl are determined to be P4/ncc (the same space group as atlasovite), with a = 9.8248(9) Å and c = 20.5715(24) Å (see Supplementary Note 1). As shown in Fig. 1a, b, the SKL in the crystal structure of KCu6AlBiO4(SO4)5Cl comprises six-coordinated Cu2+ ions. In each SK unit, the square is enclosed by four scalene triangles. From this crystal structure, it is recognised that KCu6AlBiO4(SO4)5Cl has three types of first-neighbour interactions, J1, J2 and J3, as shown in Fig. 1c. The orbital arrangements can be reasonably deduced from the oxygen and chloride positions around the Cu2+ ions. Judging from the \({d}_{{x}^{2}-{y}^{2}}\) orbitals arranged on the SKL, the nearest-neighbour (NN) magnetic couplings J_i (i = 1–3) are superexchange interactions occurring through Cu–O–Cu bonds: J1 through the Cu1–O–Cu1 bond with a bond angle of 112.62°, and J2 and J3 through Cu1–O–Cu2 bonds with bond angles of 120.12° and 108.61°, respectively. Since the Cu–O–Cu angle significantly influences the value of the exchange interaction, the variation of the angles can give strongly bond-dependent exchange interactions 23. Therefore, J2, with the largest angle, is expected to be the largest antiferromagnetic interaction, while J3, with the smallest angle, is considered to be the smallest antiferromagnetic interaction among the three.
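As a toy illustration of the J1–J2–J3 Heisenberg physics discussed above, a single six-site SK unit can be diagonalised exactly; the bond list below is an illustrative guess at the unit's connectivity, not the refined exchange network of KCu6AlBiO4(SO4)5Cl, and the coupling ratios are assumptions chosen only so that J2 is the largest and J3 the smallest:

```python
import numpy as np
from functools import reduce

# Toy exact diagonalization of one six-site square-kagome unit with
# Heisenberg couplings J1 (square bonds) and J2, J3 (triangle bonds).
# The bond list is an illustrative guess at the SK-unit connectivity.

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def two_site(op, i, j, n=6):
    """Embed op_i * op_j in the full 2^n-dimensional Hilbert space."""
    mats = [np.eye(2)] * n
    mats[i], mats[j] = op, op
    return reduce(np.kron, mats)

J1, J2, J3 = 1.0, 1.5, 0.6       # illustrative ratios (J2 largest)
square = [(0, 1), (1, 2), (2, 3), (3, 0)]             # J1 bonds
triangles = [(0, 4), (1, 4), (2, 5), (3, 5)]          # J2/J3 bonds
J_tri = [J2, J3, J2, J3]

H = np.zeros((64, 64), dtype=complex)
for (i, j) in square:
    H += J1 * sum(two_site(op, i, j) for op in (sx, sy, sz))
for (i, j), J in zip(triangles, J_tri):
    H += J * sum(two_site(op, i, j) for op in (sx, sy, sz))

E = np.linalg.eigvalsh(H)
print("ground-state energy per site:", E[0] / 6)
```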
One prominent and important feature of the present structure is the occupancy of nonmagnetic atoms in the interlayer space of the unit cell (Fig. 1 b), which elongates the interlayer spacing. Furthermore, the Cu 2+ ions and the nonmagnetic ions have different valences in KCu 6 AlBiO 4 (SO 4 ) 5 Cl, avoiding site mixing, unlike the Cu 2+ and Zn 2+ site mixing observed in herbertsmithite (for more details, see Supplementary Notes 1 and 2 ). Therefore, the crystalline perfection and high two-dimensionality of KCu 6 AlBiO 4 (SO 4 ) 5 Cl are ideal for studying the intrinsic magnetism of frustrated 2D magnets. However, the obtained INS experimental results are inconsistent with the calculated results for the J 1 – J 2 – J 3 SKL model (discussed below). Magnetic and thermodynamic properties Figure 2 a presents the temperature dependence of the magnetic susceptibility χ ( T ) and the inverse magnetic susceptibility 1/ χ ( T ) of KCu 6 AlBiO 4 (SO 4 ) 5 Cl in the temperature range 1.8–300 K. With decreasing temperature, the magnetic susceptibility gradually increases, suggesting the absence of any long-range order down to 1.8 K. By fitting 1/ χ ( T ) between 200 K and 300 K with the Curie–Weiss law C /( T − θ CW ), we estimated the Curie constant and Weiss temperature to be C = 2.86(1) and θ CW = −237(2) K, respectively. This C corresponds to an effective moment of 1.96 μ B , consistent with the spin S = 1/2 of Cu 2+ . The large negative Weiss temperature and the absence of long-range order suggest an antiferromagnetic frustrated system. Fig. 2: Magnetic and thermodynamic properties of KCu 6 AlBiO 4 (SO 4 ) 5 Cl. a Temperature dependence of the magnetic susceptibility χ (open red circles) and the inverse susceptibility 1/ χ (open blue circles) of KCu 6 AlBiO 4 (SO 4 ) 5 Cl measured at 1 T. The χ is obtained by subtracting Pascal's diamagnetic contribution from the experimental data. The solid grey lines denote fits to the Curie–Weiss law. b High-field magnetisation measured up to 60 T at 1.8 K. The observed data M obs. (filled red circles) are broken down into two components: M bulk (black solid line) and M free (open green circles). Inset shows the magnetisation measured using the MPMS at 1.8 K. The grey line is the Brillouin function for g = 2 and 2.4% free S = 1/2 spins. c Temperature dependence of the total specific heat measured at zero field (filled red circles). The grey line is the assumed lattice contribution C lattice = 0.000555 T 3 . The green dashed line is the estimated magnetic entropy. Inset shows a log–log plot of the same data. Full size image The magnetisation curve measured at 1.8 K, shown in the inset of Fig. 2 b, has two components: an intrinsic component M bulk and a free-spin component M free . Following the analysis for herbertsmithite 24 , the saturated magnetisation of M free can be estimated by subtracting the linear M bulk from the measured total magnetisation M obs. . M free can be fitted by a Brillouin function for spin-1/2, suggesting that this component is attributable to paramagnetic impurities or unpaired spins on the surfaces of the powder particles. The saturated value of M free indicates that free spins amount to about 2.4% of the Cu 2+ ions in our sample. M bulk at high magnetic field is only ~0.15 μ B /Cu 2+ at 60 T, indicating that strong antiferromagnetic exchange interactions dominate in this system (see Fig. 2 b). A Schottky-like peak in the heat capacity is observed at around T * ≈ 2 K, as shown in Fig. 2 c. 
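Before turning to the entropy associated with this anomaly, the Curie–Weiss and Brillouin-function fits described above can be made concrete with a short script. The sketch below is illustrative only: the data arrays are synthetic stand-ins (the measured χ(T) and M(B) would be used instead), and the function names and starting values are our own assumptions rather than the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

MU_B_OVER_KB = 0.6717  # Bohr magneton / Boltzmann constant, in K/T

def inv_chi_cw(T, C, theta_cw):
    """Curie-Weiss law for the inverse susceptibility: 1/chi = (T - theta_CW)/C."""
    return (T - theta_cw) / C

def m_free(B, T, f, g=2.0):
    """Brillouin magnetisation (mu_B per Cu) of a fraction f of free S = 1/2
    spins: M = f*(g/2)*tanh(g*mu_B*B / (2*k_B*T))."""
    return f * (g / 2.0) * np.tanh(g * MU_B_OVER_KB * B / (2.0 * T))

# synthetic stand-in data over the fitting window used in the paper (200-300 K)
T = np.linspace(200.0, 300.0, 51)
inv_chi = inv_chi_cw(T, 2.86, -237.0)

(C, theta), _ = curve_fit(inv_chi_cw, T, inv_chi, p0=(3.0, -200.0))
mu_eff = np.sqrt(8.0 * C / 6.0)  # per Cu2+, with 6 Cu per formula unit
print(f"C = {C:.2f}, theta_CW = {theta:.0f} K, mu_eff = {mu_eff:.2f} mu_B")

# Brillouin fit of the free-spin component at 1.8 K
B = np.linspace(0.1, 7.0, 50)
(frac,), _ = curve_fit(lambda B, f: m_free(B, 1.8, f), B, m_free(B, 1.8, 0.024))
print(f"free-spin fraction = {frac:.1%}")
```

Applied to the real data, fits of this form should recover values close to the quoted C = 2.86(1), θ CW = −237(2) K and ≈2.4% free spins.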
The magnetic entropy released up to 15 K is only 16% of the expected total entropy, which is similar to the case of herbertsmithite. In herbertsmithite, this behaviour was attributed to weakly coupled spins residing on the interlayer sites 9 . In KCu 6 AlBiO 4 (SO 4 ) 5 Cl, however, it is difficult to assign the 16% entropy to site mixing, because the valences of the nonmagnetic ions differ from that of Cu 2+ . Rather, the observed peak is more naturally attributed to the development of short-range spin correlations. Similar characteristics are observed in the calculated specific heat and entropy of the spin-1/2 kagome antiferromagnet 25 : a small broad peak appears at around T ≈ J /100, and the entropy released around this temperature is about 20%. As discussed below, the magnitude of the average exchange interaction is J a v ≡ ( J 1 + J 2 + J 3 )/3 = 137 K for KCu 6 AlBiO 4 (SO 4 ) 5 Cl, namely J a v /100 ≈ T * . However, careful consideration of the origin of this peak is still necessary. We therefore conclude that no long-range magnetic or VBC-ordering behaviour is observed in the magnetic susceptibility, magnetisation or specific heat. Quantum spin fluctuations in KCu 6 AlBiO 4 (SO 4 ) 5 Cl To confirm the absence of spin ordering caused by quantum fluctuations, we performed μ SR measurements. Figure 3 a shows the weak longitudinal-field (LF = 50 G) μ SR spectra at various temperatures. The weak LF was applied to quench the depolarisation due to random local fields from nuclear magnetic moments. The spectra are well fitted by the exponential function $$a(t)={a}_{1}\exp (-\lambda t)+{a}_{{\rm{BG}}},$$ (1) where a 1 = 0.133 is the intrinsic initial asymmetry, a BG = 0.047 is a constant background (see Supplementary Note 2 ) and λ is the muon spin relaxation rate. A Hartree potential calculation predicted a local potential minimum in the lattice (see Fig. 3 d, e) 26 , 27 , 28 ; the muon site corresponding to this local potential minimum is the 16g site. Quantum fluctuations of the Cu 2+ spins down to 58 mK, without spin ordering or freezing, are evidenced by the long-time μ SR spectra. The weak-LF signal at the lowest temperature (58 mK) decreases continuously without oscillations up to 15 μs, as shown in Fig. 3 b. If this spectrum were due to static magnetism, the internal field (estimated as λ ZF / γ μ , where γ μ is the muon gyromagnetic ratio) would be less than 20 G (see Supplementary Note 3 ). However, relaxation is clearly observed in the LF spectrum even at 0.395 T, which is evidence for fluctuating Cu 2+ electron spins without spin ordering or freezing (see Fig. 3 c). As shown in Fig. 3 f, the increase of λ at around T * provides evidence for a slowing down of the spin fluctuations resulting from the development of short-range correlations. In addition, λ exhibits a plateau with weak temperature dependence at low temperatures, as has been found in other QSL candidates 29 . The LF spectra measured at 58 mK under several magnetic fields are also fitted by Eq. ( 1 ). Using a power law of the form 1/( a + b H α ) with α = 0.46, where a and b depend on the fluctuation rate and the fluctuating field, we obtain a good fit to the LF dependence of the muon spin relaxation rate λ , as shown in Fig. 3 g. Incidentally, 1/( a + b H 2 ) is the standard case in which λ obeys the Redfield equation: in ordinary disordered spin systems, the muon spin relaxation rate exhibits an inverse-square field dependence (a minimal fitting sketch follows below). 
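To make the two μSR fits concrete, here is a minimal sketch. The arrays are synthetic stand-ins for the measured asymmetry spectra, and the starting parameters are our own assumptions, not values from the authors' analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def asymmetry(t, a1, lam, a_bg):
    """Eq. (1): a(t) = a1*exp(-lambda*t) + a_BG."""
    return a1 * np.exp(-lam * t) + a_bg

def lam_vs_field(H, a, b, alpha):
    """Field dependence of the relaxation rate, lambda(H) = 1/(a + b*H^alpha).
    alpha = 2 recovers the standard Redfield (inverse-square) behaviour."""
    return 1.0 / (a + b * H**alpha)

rng = np.random.default_rng(0)

# one synthetic LF spectrum: time in microseconds vs asymmetry
t = np.linspace(0.1, 15.0, 150)
spec = asymmetry(t, 0.133, 0.4, 0.047) + 0.002 * rng.normal(size=t.size)
(a1, lam, a_bg), _ = curve_fit(asymmetry, t, spec, p0=(0.1, 1.0, 0.05))
print(f"a1 = {a1:.3f}, lambda = {lam:.2f} /us, a_BG = {a_bg:.3f}")

# relaxation rates at several longitudinal fields (in T), fitted for alpha
H = np.array([0.01, 0.02, 0.05, 0.1, 0.2, 0.395])
lam_H = lam_vs_field(H, 2.0, 8.0, 0.46)
(a, b, alpha), _ = curve_fit(lam_vs_field, H, lam_H, p0=(1.0, 5.0, 1.0))
print(f"alpha = {alpha:.2f}")
```

A fitted exponent well below 2, as reported here (α = 0.46), signals a departure from the single-timescale Redfield picture discussed next.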
A spectral-weight function of this inverse-square form is commonly used to describe classical fluctuations in the paramagnetic regime. The observed value, α = 0.46, is inconsistent with the existence of a single timescale and suggests a more exotic spectral density, such as the one at play in a QSL. All of these μ SR results strongly support the formation of a QSL at very low temperatures close to the ground state in KCu 6 AlBiO 4 (SO 4 ) 5 Cl 30 , 31 . Fig. 3: Muon spin relaxation data of KCu 6 AlBiO 4 (SO 4 ) 5 Cl. a LF- μ SR spectra (obtained in a dilution refrigerator) at representative temperatures (see Supplementary Note 3 for the spectra obtained using the 4 He cryostat). The thick lines behind the data points are the fitted curves (see text for details). b The LF- μ SR spectrum measured at 58 mK. The spectrum decreases continuously without oscillations up to 15 μs. c μ SR spectra measured at 58 mK under several longitudinal magnetic fields. d Projection along the c axis. e Projection along the a axis. The muon site was obtained by a Hartree potential calculation. f Temperature dependence of the muon spin relaxation rate λ . The grey solid lines are guides for the eye. g Magnetic-field dependence of the muon spin relaxation rate λ . The solid curves are fits to a power law of the form 1/( a + b H α ). The error bars in a , b and c represent 1 s.d., and those in f and g the maximum possible variation due to correlation of parameters. Full size image Gapless continuum of spin excitations The quantum statistics of quasiparticle excitations depend on the type of QSL, in particular on the nature of its elementary excitations. To grasp the whole picture of the spin excitations, we first performed an INS experiment over a wide energy range. As shown in Fig. 4 a, a streak-like excitation at Q = 0.8 Å −1 and flat signals at around E = 7 and 10 meV are observed at 5 K. The E -dependence of the INS intensity can be fitted well by two or three Gaussian functions and a linear baseline, from which the corresponding integrated intensities are obtained (for more details, see Supplementary Note 4 ). As shown in Fig. 4 b, the peak positions of the excitations are estimated to be 10.1(1) meV, 9.4(3) meV and 7.3(1) meV. A signal due to magnetic excitations is generally enhanced at low Q , whereas phonon excitations are dominant at high Q . As shown in Fig. 4 c, the baseline increases with increasing Q ; the baseline therefore most likely arises from the many phonon excitations of the multi-element material KCu 6 AlBiO 4 (SO 4 ) 5 Cl. The peak at 9.4 meV also increases with increasing Q , indicating that it comes from a phonon excitation. The flat signals, on the other hand, show the characteristic behaviour of magnetic excitations. In order to investigate whether the spin excitation is gapless or gapped, we performed INS experiments in the low-energy region. As shown in Fig. 4 d, both the streak-like excitation and the flat signals are also observed at 0.3 K. As shown in Fig. 4 e, g, the INS spectra exhibit the features of a gapless continuum of spin excitations. The streak-like excitation at Q = 0.8 Å −1 is clearly visible down to the elastic line, and its intensity increases continuously without any signature of an energy gap, at least within the instrumental resolution (FWHM = 0.05 meV for E i = 1.69 meV). The excitation persists up to at least T = 30 K (see Supplementary Fig. 5 ), which is consistent with the exchange constants estimated later. 
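The Gaussians-plus-baseline decomposition used for the constant-Q cuts (Fig. 4b) can be sketched as follows. The cut below is synthetic, and the peak positions, widths and initial guesses are our own illustrative choices, not the authors' fitting script.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(E, A, E0, w):
    """Single Gaussian peak of amplitude A centred at E0."""
    return A * np.exp(-0.5 * ((E - E0) / w) ** 2)

def model(E, A1, E1, w1, A2, E2, w2, A3, E3, w3, c0, c1):
    """Three Gaussians plus a linear baseline, as used for the constant-Q cuts."""
    return (gauss(E, A1, E1, w1) + gauss(E, A2, E2, w2)
            + gauss(E, A3, E3, w3) + c0 + c1 * E)

# synthetic constant-Q cut mimicking peaks near 7.3, 9.4 and 10.1 meV
E = np.linspace(4.0, 13.0, 180)
rng = np.random.default_rng(1)
data = model(E, 1.0, 7.3, 0.4, 0.6, 9.4, 0.5, 0.9, 10.1, 0.35, 0.2, 0.01)
data += 0.02 * rng.normal(size=E.size)

p0 = (1, 7, 0.5, 0.5, 9.5, 0.5, 1, 10, 0.5, 0.1, 0.0)
popt, _ = curve_fit(model, E, data, p0=p0)
for i, name in enumerate(["E1", "E2", "E3"]):
    print(f"{name} = {popt[3 * i + 1]:.2f} meV")
```

Tracking how each fitted amplitude varies with Q then separates phonon-like components (growing with Q) from magnetic ones (flat or decreasing), as in Fig. 4c.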
The Q -dependence of the INS intensity after integration over a finite energy interval is shown in Fig. 4 h. There are three peaks, at Q = 0.8, 1.25 and 1.58 Å −1 , at 0.3 K, and these peaks are observed even at temperatures close to the ground state (48 mK). As discussed below, this result is inconsistent with the dynamical spin structure factor S ( q , ω ) calculated for the J 1 – J 2 – J 3 SKL antiferromagnet with the parameters that reproduce the magnetic susceptibility and magnetisation process. These INS data are consistent with a gapless continuum of spinon excitations. From the above, the flat signals at approximately 10 and 7 meV probably indicate van Hove singularities at the edges of the spinon continuum. Fig. 4: Inelastic neutron-scattering data of KCu 6 AlBiO 4 (SO 4 ) 5 Cl. a INS spectra at 5 K observed using HRC with an incident neutron energy of 45.95 meV. b Energy dependence of the scattering integrated over Q in the ranges 1.9 Å −1 < Q < 2.1 Å −1 and 3.9 Å −1 < Q < 4.1 Å −1 measured at 5 K (HRC). The solid lines are the fitted curves (see text for details); the thin lines are their components. c Q -dependence of the integrated intensity for the different Gaussian components ( E = 10.1(1) meV, 9.4(3) meV and 7.3(1) meV). The solid thick lines are guides for the eye. d INS spectra at 0.3 K observed using AMATERAS with an incident neutron energy of 15.16 meV. e INS spectra at 0.3 K observed using AMATERAS with an incident neutron energy of 1.69 meV. f Energy dependence of the scattering integrated over Q in the range 0.6 Å −1 < Q < 1.0 Å −1 measured at 0.3 K. The grey solid line is a guide for the eye. g INS spectra at 0.3 K observed using AMATERAS with an incident neutron energy of 3.14 meV. h Q -dependence of the scattering integrated over energy transfers 0.5 meV < E < 1.5 meV measured at 0.3 K (AMATERAS) and 50 mK (PELICAN). The error bars represent one standard deviation. Full size image Comparison with theory To determine the magnetic parameters and to clarify the magnetic properties of KCu 6 AlBiO 4 (SO 4 ) 5 Cl, we calculated the magnetic susceptibility, the magnetisation curve at zero temperature and the magnetic excitations at zero temperature by means of the exact diagonalization (ED), finite-temperature Lanczos (FTL) 32 and density-matrix renormalization group (DMRG) methods. We succeeded in reproducing the magnetic susceptibility and magnetisation curve of KCu 6 AlBiO 4 (SO 4 ) 5 Cl with the J 1 – J 2 – J 3 SKL model with J 1 = 135 K, J 2 = 162 K, J 3 = 115 K and g = 2.11, as shown in Fig. 5 a, b, where g is the g -factor. In the calculated magnetisation process, magnetisation plateaus at M / M sat = 1/3 and 2/3 were confirmed at around 150 T and 270 T, respectively, indicating that the plateaus could be observed experimentally if the magnetisation process were measured in still stronger magnetic fields. However, the inelastic neutron-scattering result, which is the most important evidence for the QSL, cannot be reproduced by the J 1 – J 2 – J 3 SKL model with these parameters. In the experiment, the strongest low-energy intensity appears at around Q = 0.8 Å −1 , as shown in Fig. 4 f, g, whereas in the dynamical DMRG calculation it appears at around Q = 1.3 Å −1 , as shown in Fig. 5 c. To eliminate this discrepancy, we also calculated the SKL model with a next-nearest-neighbour (NNN) interaction J X along the diagonal direction of the Cu 2+ square. 
We calculated this SKL model for various values of the parameters, but could not reproduce the experimental results. Therefore, in order to understand the experiment correctly, models with further interactions need to be calculated. Fig. 5: Experimental results of KCu 6 AlBiO 4 (SO 4 ) 5 Cl compared to theory. a Temperature dependence of the magnetic susceptibility χ (open red circles) of KCu 6 AlBiO 4 (SO 4 ) 5 Cl and the fitted calculation data obtained by the FTL method for a 36-site cluster (blue line) and the ED method for an 18-site cluster (green dashed line). Note that the statistical error of the FTL is within the grey area (for more details, see Supplementary Note 5). b High-field magnetisation measured up to 60 T at 1.8 K (open red circles) and the fitted calculation data at T = 0 K obtained by the Lanczos-type ED method for a 36-site cluster (blue dashed line) and the DMRG method for a 60-site cluster (black solid line). c Q -dependence of the powder-averaged dynamical spin structure factor S ( Q , E ) integrated over 0.5 meV < E < 1.5 meV at T = 0 K, obtained by dynamical DMRG for a 48-site PBC cluster of the SKL. Full size image Discussion We have synthesised a SKL spin-1/2 antiferromagnet, KCu 6 AlBiO 4 (SO 4 ) 5 Cl, without site disorder, thus providing the first candidate material for investigating SKL magnetism. The μ SR measurements show no long-range ordering down to 58 mK, roughly three orders of magnitude below the NN interactions. The INS spectrum exhibits a streak-like gapless excitation and flat dispersionless excitations, consistent with powder-averaged spinon excitations. Our experimental results strongly suggest the formation of a gapless QSL in KCu 6 AlBiO 4 (SO 4 ) 5 Cl at very low temperatures close to the ground state; however, they are inconsistent with the theoretical studies based on the J 1 – J 2 – J 3 SKL Heisenberg model. In the J 1 – J 2 – J 3 SKL Heisenberg model, VBC and Néel-ordered states are expected with high probability. In fact, the VBC state is the ground state of the J 1 – J 2 SKL antiferromagnet regardless of the magnitude relation between J 1 and J 2 17 . Thus, to realise the QSL state in the SKL, an additional ingredient such as longer-range exchange interactions must be imposed. Further theoretical study should reveal the conditions inducing the QSL state in SKL antiferromagnets. Methods Sample synthesis Single-phase polycrystalline KCu 6 AlBiO 4 (SO 4 ) 5 Cl was synthesised by a solid-state reaction in which high-purity KAl(SO 4 ) 2 , CuCl 2 , CuSO 4 , CuO and Bi 2 O 3 powders were mixed in a molar ratio of 2:1:6:5:1, followed by heating at 600 °C for 3 days and slow cooling in air. X-ray diffraction Synchrotron powder XRD data were collected using an imaging plate diffractometer installed at BL-8B of the Photon Factory. An incident synchrotron X-ray energy of 18.0 keV (0.68892 Å) was selected. Magnetic susceptibility and low-field magnetisation Magnetic susceptibility and low-field magnetisation measurements were performed using a commercial superconducting quantum interference device magnetometer (MPMS-XL7AC; Quantum Design). High-field magnetisation High-field magnetisation measurements up to 60 T were conducted using an induction method in a pulsed magnetic field at the International MegaGauss Science Laboratory, The University of Tokyo. Heat capacity The specific heat was measured between 0.2 and 20 K using a PPMS (physical property measurement system; Quantum Design). 
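To illustrate how the magnetic entropy in Fig. 2c is extracted from specific-heat data of this kind, here is a minimal sketch. The lattice coefficient is the one quoted in Fig. 2c, but the specific-heat curve itself is a synthetic two-level Schottky anomaly that we assume purely for illustration.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

R = 8.314   # gas constant, J mol^-1 K^-1
N_CU = 6    # Cu2+ ions per formula unit

def entropy_fraction(T, C_total, a_lat=0.000555):
    """Subtract C_lattice = a_lat*T^3, then integrate S(T) = int C_mag/T dT
    and express it as a fraction of the full spin entropy 6*R*ln(2)."""
    C_mag = C_total - a_lat * T**3
    S = cumulative_trapezoid(C_mag / T, T, initial=0.0)
    return S / (N_CU * R * np.log(2))

# stand-in data: a two-level Schottky anomaly peaking near T* ~ 2 K
T = np.linspace(0.3, 15.0, 500)
delta = 4.8  # level splitting in K, chosen so the peak sits near 2 K
schottky = R * (delta / T)**2 * np.exp(delta / T) / (1 + np.exp(delta / T))**2
C_total = 0.000555 * T**3 + schottky

print(f"entropy released by 15 K: {entropy_fraction(T, C_total)[-1]:.0%}")
```

With one mole of two-level systems per formula unit this prints about 16% (R ln 2 out of 6R ln 2); the agreement with the reported value is by construction of the toy data, not a derivation.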
Muon spin relaxation ( μ SR) The μ SR experiments were performed using the spin-polarised pulsed surface-muon ( μ + ) beam at the D1 beamline of the Materials and Life Science Experimental Facility (MLF) of the Japan Proton Accelerator Research Complex (J-PARC). The spectra were collected in the temperature range from 58 mK to 300 K using a dilution refrigerator and a 4 He cryostat. Inelastic neutron scattering (INS) The high-energy INS experiment was performed on HRC 33 , installed at the BL12 beamline at the MLF of J-PARC. At HRC, white neutrons are monochromatised by a Fermi chopper synchronised with the production timing of the pulsed neutrons. The energy transfer was determined from the time-of-flight of the scattered neutrons detected by position-sensitive detectors. A 200-Hz Fermi chopper was used to obtain a high neutron flux, and a GM-type closed-cycle cryostat was used to reach 5 K. The incident neutron energy was E i = 45.95 meV. The data collected on HRC were analysed using the software suite HANA 34 . The low-energy INS experiments were performed using the cold-neutron time-of-flight spectrometer PELICAN at the OPAL reactor at ANSTO 35 . The instrument was aligned for an incident energy E i = 2.1 meV. The sample was held in an oxygen-free copper can and cooled using a dilution insert installed in a top-loading cryostat, and data were collected at 25 K, 15 K and 48 mK. The data were corrected for background scattering from an empty can and normalised to the scattering from a vanadium standard. The PELICAN data corrections were performed using the freely available LAMP software. The INS spectra in a wide momentum–energy range were measured using the cold-neutron disk chopper spectrometer AMATERAS installed in the MLF at J-PARC 36 . The sample was cooled to 0.3 K using a 3 He refrigerator. The scattering data were collected with a set of incident neutron energies, E i = 1.69, 3.14 and 15.16 meV. The data collected on AMATERAS were analysed using the software suite UTSUSEMI 37 . Calculations The magnetic susceptibility of the SKL is calculated by the full ED method for an 18-site cluster and the FTL method for a 36-site cluster under periodic boundary conditions (PBC). The result of the FTL method is obtained as a statistical average over 40 samplings. The magnetisation curve at T = 0 K is calculated by Lanczos-type ED for a 36-site PBC cluster and the DMRG method for a 60-site PBC cluster. The truncation number in the DMRG calculation is 6000, and the resulting truncation errors are less than 2 × 10 −5 . The dynamical spin structure factor S ( q , ω ) is calculated using the dynamical DMRG 38 method for a 48-site PBC cluster. The truncation number is m = 6000 and the truncation error is less than 5 × 10 −3 . S ( q , ω ) is defined as follows: $$S({\bf{q}},\omega )=-\frac{1}{\pi N}{\rm{Im}}\left\langle 0\right|{S}_{-{\bf{q}}}^{z}\frac{1}{\omega -{\mathscr{H}}+{E}_{0}+i\eta }{S}_{{\bf{q}}}^{z}\left|0\right\rangle ,$$ (2) where q is the momentum, \(\left|0\right\rangle\) is the ground state with energy E 0 , η is a broadening factor and \({S}_{{\bf{q}}}^{z}={N}^{-1/2}{\sum }_{i}{e}^{i{{\bf{q}}r}_{i}}{S}_{i}^{z}\) with r i being the position of spin i and \({S}_{i}^{z}\) being the z component of S i . The value of η is taken to be 1.16 meV. Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request.
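As a numerical illustration of Eq. (2), the sketch below evaluates S(q, ω) by full exact diagonalization for a small toy system: a spin-1/2 Heisenberg ring standing in for the 48-site SKL cluster, which would require dynamical DMRG. The Lorentzian line shape is exactly the −(1/π) Im 1/(ω − H + E0 + iη) resolvent of Eq. (2); all names and sizes here are our own choices.

```python
import numpy as np

# spin-1/2 operators
SZ = np.diag([0.5, -0.5])
SP = np.array([[0.0, 1.0], [0.0, 0.0]])  # raising operator; SP.T lowers
I2 = np.eye(2)

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    out = np.array([[1.0]])
    for j in range(n):
        out = np.kron(out, op if j == i else I2)
    return out

def heisenberg_ring(n, J=1.0):
    """H = J * sum_i S_i . S_{i+1} with periodic boundary conditions."""
    H = np.zeros((2**n, 2**n))
    for i in range(n):
        j = (i + 1) % n
        H += J * (site_op(SZ, i, n) @ site_op(SZ, j, n)
                  + 0.5 * (site_op(SP, i, n) @ site_op(SP.T, j, n)
                           + site_op(SP.T, i, n) @ site_op(SP, j, n)))
    return H

def s_qw(n, q, omega, eta=0.05):
    """Eq. (2) via the spectral representation:
    S(q,w) = sum_m |<m|S^z_q|0>|^2 * (eta/pi) / ((w - (E_m - E_0))^2 + eta^2)."""
    E, V = np.linalg.eigh(heisenberg_ring(n))
    Sq = sum(np.exp(1j * q * r) * site_op(SZ, r, n) for r in range(n)) / np.sqrt(n)
    amp = V.conj().T @ (Sq @ V[:, 0])                    # <m|S^z_q|0>
    dw = omega[:, None] - (E - E[0])[None, :]
    return ((np.abs(amp)**2) * (eta / np.pi) / (dw**2 + eta**2)).sum(axis=1)

omega = np.linspace(0.0, 3.0, 300)
print(omega[s_qw(8, np.pi, omega).argmax()])  # strongest q = pi response (in J)
```

Powder averaging such a quantity over momentum directions, then integrating over a 0.5–1.5 meV window, gives the kind of S(Q, E) curve compared with experiment in Fig. 5c.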
Aside from the deep understanding of the natural world that quantum physics theory offers, scientists worldwide are striving to bring forth a technological revolution by leveraging this newfound knowledge in engineering applications. Spintronics is an emerging field that aims to surpass the limits of traditional electronics by using the spin of electrons, which can be roughly seen as their angular rotation, as a means to transmit information. But the design of devices that can operate using spin is extremely challenging and requires the use of new materials in exotic states—even some that scientists do not fully understand and have not experimentally observed yet. In a recent study published in Nature Communications, scientists from the Department of Applied Physics at Tokyo University of Science, Japan, describe a newly synthesized compound with the formula KCu6AlBiO4(SO4)5Cl that may be key in understanding the elusive "quantum spin liquid (QSL)" state. Lead scientist Dr. Masayoshi Fujihala explains his motivation: "Observation of a QSL state is one of the most important goals in condensed-matter physics as well as the development of new spintronic devices. However, the QSL state in two-dimensional (2-D) systems has not been clearly observed in real materials owing to the presence of disorder or deviations from ideal models." What is the quantum spin liquid state? In antiferromagnetic materials below specific temperatures, the spins of electrons naturally align into large-scale patterns. In materials in a QSL state, however, the spins are disordered in a way similar to how molecules in liquid water are disordered in comparison to crystalline ice. This disorder arises from a structural phenomenon called frustration, in which there is no possible configuration of spins that is symmetrical and energetically favorable for all electrons. KCu6AlBiO4(SO4)5Cl is a newly synthesized compound whose copper atoms are arranged in a particular 2-D pattern known as the "square kagome lattice (SKL)," an arrangement that is expected to produce a QSL state through frustration. Professor Setsuo Mitsuda, co-author of the study, states: "The lack of a model compound for the SKL system has obstructed a deeper understanding of its spin state. Motivated by this, we synthesized KCu6AlBiO4(SO4)5Cl, the first SKL antiferromagnet, and demonstrated the absence of magnetic ordering at extremely low temperatures—a QSL state." However, the experimental results obtained could not be replicated through theoretical calculations using a standard "J1-J2-J3 SKL Heisenberg" model. This approach considers the interactions between each copper ion in the crystal network and its nearest neighbors. Co-author Dr. Katsuhiro Morita explains: "To try to eliminate the discrepancy, we calculated an SKL model considering next-nearest-neighbor interactions using various sets of parameters. Still, we could not reproduce the experimental results. Therefore, to understand the experiment correctly, we need to calculate the model with further interactions." This disagreement between experiment and calculations highlights the need for refining existing theoretical approaches, as co-author Prof Takami Tohyama concludes: "While the SKL antiferromagnet we synthesized is a first candidate to investigate SKL magnetism, we may have to consider longer-range interactions to obtain a quantum spin liquid in our models. This represents a theoretical challenge to unveil the nature of the QSL state." 
Let us hope physicists manage to tackle this challenge to bring us yet another step closer to the wonderful promise of spintronics.
10.1038/s41467-020-17235-z
Medicine
Determination of glycine transporter opens new avenues in development of psychiatric drugs
Azadeh Shahsavar et al, Structural insights into the inhibition of glycine reuptake, Nature (2021). DOI: 10.1038/s41586-021-03274-z Journal information: Nature
http://dx.doi.org/10.1038/s41586-021-03274-z
https://medicalxpress.com/news/2021-03-glycine-avenues-psychiatric-drugs.html
Abstract The human glycine transporter 1 (GlyT1) regulates glycine-mediated neuronal excitation and inhibition through the sodium- and chloride-dependent reuptake of glycine 1 , 2 , 3 . Inhibition of GlyT1 prolongs neurotransmitter signalling, and has long been a key strategy in the development of therapies for a broad range of disorders of the central nervous system, including schizophrenia and cognitive impairments 4 . Here, using a synthetic single-domain antibody (sybody) and serial synchrotron crystallography, we have determined the structure of GlyT1 in complex with a benzoylpiperazine chemotype inhibitor at 3.4 Å resolution. We find that the inhibitor locks GlyT1 in an inward-open conformation and binds at the intracellular gate of the release pathway, overlapping with the glycine-release site. The inhibitor is likely to reach GlyT1 from the cytoplasmic leaflet of the plasma membrane. Our results define the mechanism of inhibition and enable the rational design of new, clinically efficacious GlyT1 inhibitors. Main Glycine is a conditionally essential amino acid with a dual role in the central nervous system (CNS). It acts as a classical neurotransmitter at inhibitory glycinergic synapses, where it induces hyperpolarizing chloride influx at postsynaptic terminals through ionotropic glycine receptors 1 , 2 . Yet, as the obligatory co-agonist of the N -methyl- d -aspartate (NMDA) receptor, glycine also positively modulates calcium-dependent neuronal excitation and plasticity at glutamatergic synapses 1 , 3 . Glycine homeostasis is tightly regulated by reuptake transporters—including the glycine-specific GlyT1 and GlyT2—that belong to the secondary active neurotransmitter/sodium symporters (NSSs) of the solute carrier 6 (SLC6) transport family 5 . GlyT1 (encoded by the SLC6A9 gene), GlyT2 (encoded by SLC6A5 ) and the other members of the NSS family, such as the serotonin transporter (SERT), dopamine transporter (DAT) and γ-aminobutyric acid (GABA) transporter (GAT), share a sequence identity of approximately 50%. GlyT1 is located on presynaptic neurons and astrocytes surrounding both inhibitory glycinergic and excitatory glutamatergic synapses, and is considered the main regulator of extracellular levels of glycine in the brain 1 , 6 . At glutamatergic synapses, GlyT1 has a key role in maintaining subsaturating concentrations of regulatory glycine for the NMDA receptor 7 , 8 . Hypofunction of the NMDA receptor has been implicated in the pathophysiology of schizophrenia 9 , but pharmacological interventions to directly enhance neurotransmission via this receptor in patients with the condition have been unsuccessful 10 , 11 . Selective inhibition of glycine reuptake by GlyT1 is an alternative approach to increase endogenous extracellular levels of glycine and potentiate NMDA transmission 1 , 4 . Several chemotypes of potent and selective GlyT1 inhibitors, such as bitopertin, have been developed to achieve antipsychotic and procognitive activity for the treatment of schizophrenia 4 , 12 . Bitopertin has shown clear signs of enhancing neuroplasticity 13 , 14 via the glycine-binding site of the NMDA receptor; however, it failed to show efficacy in phase III clinical trials (at a reduced dose), and a drug candidate that targets GlyT1 has yet to emerge. Studies of NSSs and their homologues have revealed an alternating-access mechanism 15 , which involves binding and occlusion of the extracellular substrate, dependent on a Na + (and, in eukaryotic NSSs, Cl − ) gradient. 
Binding is followed by a rearrangement to an inward-facing state and subsequent intracellular opening and release of bound ions and substrate. Conformational rearrangements of transmembrane helices during the transport cycle expose the substrate-binding site to either side of the membrane 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 . Bitopertin behaves functionally as a non-competitive inhibitor of glycine reuptake 24 ; nevertheless, detailed structural information on the inhibitor’s binding site, selectivity and underlying molecular mechanism of glycine reuptake inhibition has yet to be obtained. Here we present the structure of human GlyT1 in complex with a highly selective bitopertin analogue 25 , 26 , Cmpd1, and an inhibition-state-selective synthetic nanobody (sybody). Cmpd1 has been patented as a more potent inhibitor targeting GlyT1; it contains a benzoylisoindoline scaffold originating from the bitopertin chemical series 26 . The structure of GlyT1 reveals the molecular determinants and mechanism of action underlying the inhibition of glycine reuptake. Stabilization and crystal structure of GlyT1 Wild-type human GlyT1 (encoded by SLC6A9 ) is unstable when extracted from the membrane, and contains unstructured termini and a large, flexible extracellular loop 2 (EL2). To enable structure determination, we screened for point mutations that increase thermal stability while preserving ligand-binding activity. For the final crystallization construct, we combined the point mutations L153A, S297A, I368A and C633A with a shortened EL2 (Δ240–256) and truncated amino and carboxyl termini (Δ1–90 and Δ685–706) (see Methods ), and were able to measure persistent transport activity, albeit 42-fold decreased compared with that of wild-type GlyT1 (Extended Data Fig. 1 ). Adding the selective GlyT1 inhibitor Cmpd1 further increases the thermal stability of the transporter, by 30.5 °C (Fig. 1a ). Indicative of high-affinity binding with a stabilizing effect, we measured a half-maximal inhibitory concentration (IC 50 ) for Cmpd1 of 12.9 ± 0.9 nM and 7.2 ± 0.4 nM on human and mouse GlyT1, respectively (Fig. 1b ), in a membrane-based competition assay with the [ 3 H]Org24598 compound 27 (a non-competitive GlyT1 inhibitor). We therefore purified GlyT1 in the presence of Cmpd1 and generated sybodies to further stabilize the transporter in the inhibition-state conformation, identifying the sybody Sb_GlyT1#7, which binds GlyT1 with an affinity of 9 nM (ref. 28 ). We then obtained microcrystals of GlyT1 in complex with Sb_GlyT1#7 and Cmpd1 in lipidic cubic phase. Merging the oscillation patterns collected from 409 mounted loops containing microcrystals by a serial synchrotron crystallography approach yielded a complete dataset at 3.4 Å resolution. The structure was determined by molecular replacement using structures of the inward-occluded bacterial multiple hydrophobic amino acid transporter (MhsT; Protein Data Bank identification code (PDB ID) 4US3) and the inward-open human SERT (PDB ID 6DZZ) 17 , 19 . The high quality of the resulting electron density maps enabled us to unambiguously model human GlyT1 in complex with the sybody and bound ligand (Fig. 1c and Extended Data Fig. 2 ). Fig. 1: Stabilization, binding and recognition of inhibitor Cmpd1 by human GlyT1. a , Increasing concentrations of Cmpd1 show a strong dose-dependent stabilization of GlyT1, raising the melting point from 48.8 ± 0.4 °C to 79.3 ± 0.3 °C (mean ± s.e.m.). 
Data for GlyT1 minimal (containing deletions of the N and C termini) with and without addition of the inhibitor are depicted in green and black, respectively. Individual data points from n = 4 technical replicates are shown. AU, arbitrary units. b , Cmpd1 inhibits mouse and human GlyT1 with an IC 50 of 7.2 ± 0.4 nM and 12.9 ± 0.9 nM (mean ± s.e.m.), respectively, in membrane-based competition assays with [ 3 H]Org24598. Curves were calculated from n = 4 technical replicates (individual data points are shown; whiskers extend from minimum to maximum). c , Overall structure of human GlyT1 bound to the selective inhibitor Cmpd1 and an inhibition-state-selective sybody. A magnified view of the inhibitor-binding pocket in a 2 F o – F c electron density map (blue mesh) countered at 1.0 r.m.s.d. is depicted. TM8 is not shown for clarity. d , Topology diagram of the GlyT1 crystallization construct. EL2 carries a strictly conserved disulfide bridge (C220–C229) and four N -linked glycosylation sites, N237, N240, N250 and N256. Three glycosylation sites were removed by the EL2 truncation (240–256), but N237 was essential for membrane-based ligand binding, probably enabling correct trafficking of the transporter to the plasma membrane 40 . The one remaining glycosylation site at N237 is shown as a sphere on EL2. The locations of the single point mutations L153A, S297A, I368A and C633A on transmembrane helices are shown. Full size image Architecture and conformation of GlyT1 GlyT1 adopts the general architecture of SLC6 transporters, with 12 α-helical transmembrane segments (TMs 1–12) and an inverted pseudo-twofold symmetry that relates two transmembrane domains, TMs 1–5 and 6–10, denoted as the LeuT fold 17 , 18 , 21 , 22 , 29 (Fig. 1c, d ). The transporter structure exhibits an inward-open conformation, and superposition of this structure to the inward-open structures of SERT and leucine transporter (LeuT) and inward-oriented occluded MhsT yields Cα root mean square deviations of 1.8 Å, 2.3 Å and 3.2 Å, respectively (see Methods ). TM1 and TM6 possess nonhelical segments in the middle of the lipid bilayer; these segments coordinate Na + and Cl − ions 18 , 20 , 21 , accommodate substrates and inhibitors of various sizes 18 , 19 , 22 , and stabilize the ligand-free return state 17 . The intracellular part of TM5 is unwound at the conserved helix-breaking Gly(X 9 )Pro motif 17 (G313(X 9 )P323 in GlyT1), and the N-terminal segment of TM1 (TM1a) is bent away from the core of GlyT1, opening the intracellular pathway to the centre of the transporter (Fig. 2a ). The splayed motion of TM1 disrupts the interaction between the conserved residues W103 of TM1a and Y385 at the cytoplasmic part of TM6 that is otherwise present in outward-open and occluded conformations 17 , 18 , 20 , 22 (Extended Data Fig. 3 and Supplementary Fig. 1 ). Fig. 2: Inhibition of glycine uptake and binding mode of Cmpd1 at inward-open GlyT1. a , Surface representation of the inward-open structure of GlyT1, viewed parallel to the membrane. The closed extracellular vestibule around W124 (yellow) and the open intracellular pathway are displayed. Residues R125 (TM1), P437 (EL4), L524 and D528 (TM10) are shown as sticks. b , c , Comparison of the binding modes of Cmpd1 (green) in GlyT1 with the inhibitor-binding sites in other NSS transporters. Paroxetine (orange) and ibogaine (yellow) bound to SERT (PDB IDs 5I6X and 6DZY, respectively) and cocaine (purple) bound to Drosophila melanogaster DAT (dDAT, PDB ID 4XP4) are shown as examples. 
The differences in the locations of the bound ligands in the transporters are marked with dotted lines in b . Compared with paroxetine, ibogaine and cocaine, Cmpd1 is located 5.6 ± 0.1 Å further away from the centre of the transporter (shown in c ). This distance is measured between the centre of the phenyl ring of Cmpd1 and the centre of mass of the other NSS inhibitors shown. d , Cmpd1 inhibits the uptake of glycine by human GlyT1 with an IC 50 of 26.4 ± 0.6 nM (mean ± s.e.m.). The curve was calculated from n = 4 technical replicates (individual data points are shown; whiskers extend from minimum to maximum). Full size image Comparison of GlyT1 with inward-open SERT shows structural differences mainly at the intracellular halves of the helices (Extended Data Fig. 4a–e ), and in particular at the intracellular gate of GlyT1 defined by TM1a and TM5. The intracellular half of TM5 has splayed away from the transporter core by 17°, whereas TM1a is 29° closer, compared with the corresponding segments of SERT. As a result, the intracellular gate—measured as the Cα–Cα distance between the conserved W103 on TM1a and V315 on TM5—is 4 Å more closed than that of the inward-open structure of ibogaine-bound SERT (Extended Data Fig. 4b, e ). On the extracellular side, a Cα–Cα distance of 8.9 Å between R125 of TM1a and D528 of TM10, and a close packing of the extracellular vestibule around W124 in the NSS-conserved NVWRFPY motif of TM1, indicate a closed extracellular gate (Fig. 2a , Extended Data Fig. 3 and Supplementary Fig. 1 ). The conformation-specific sybody binds through several interactions to the extracellular segment of GlyT1, involving EL2, EL4, TM5 and TM7 (Fig. 1c and Extended Data Fig. 5a ). Sb_GlyT1#7 is selective for the inward-open conformation of GlyT1 and has a conformation-stabilizing effect, as shown by an increase of 10 °C in thermal stability and an apparent affinity increase for [ 3 H]Org24598 of almost twofold in a scintillation proximity assay 28 . In addition to stabilizing the inhibition state, the sybody takes a central role in forming the lattice contacts, packing against the neighbouring sybody in the crystal (Extended Data Fig. 5d ). Unique binding mode among NSS transporters An unambiguous electron density for the inhibitor Cmpd1 was observed in proximity to the central binding pocket of GlyT1, between transmembrane helices 1, 3, 6 and 8 (Fig. 1c and Extended Data Fig. 5b ). Comparison of the inhibitor-binding site in GlyT1 with the equivalent site of other NSS structures shows that Cmpd1 is within 6.0 ± 0.5 Å of the core, with its centre of mass located 14 Å from the cytosolic face of the transporter, while inhibitors of SERT and DAT bind at the central binding site within 21–22 Å of the cytosolic face (Fig. 2b, c ). Furthermore, the inhibitor binds GlyT1 in a unique binding mode, lodged in proximity to the centre of the transporter and extending into the intracellular release pathway for substrate and ions between TM6b and TM1a, accessible to solvent. This mode of inhibition is not observed in other NSS–inhibitor complexes (Fig. 2b, c ). Cmpd1 is from the benzoylisoindoline class of selective GlyT1 inhibitors 25 , and inhibits the uptake of glycine in mammalian cells (Flp-in-CHO cells) expressing mouse 26 or human GlyT1 with an IC 50 of 7.0 ± 0.4 nM and 26.4 ± 0.6 nM, respectively (Fig. 2d ). The isoindoline scaffold of Cmpd1 forms a π-stacking interaction with Y116 of TM1. 
The phenyl ring is engaged in an edge-to-face stacking interaction with the aromatic ring of W376 located on the unwound region of TM6. The inhibitor is further stabilized by hydrogen-bond and van der Waals interactions with residues from TM1, TM3, TM6 and TM8 (Fig. 3a and Extended Data Figs. 5 c, 6a, b ). Fig. 3: Binding pocket. a , Close-up view of the Cmpd1-binding pocket in GlyT1. The two ends of the inhibitor are stabilized by hydrogen-bond interactions with residues from TM1 and TM6; the backbone amine groups of G121 and L120 form hydrogen bonds with sulfonyl oxygen atoms, and N386 from TM6 forms a hydrogen bond with the oxygen atom of the tetrahydropyran moiety of the inhibitor. From TM8, the hydroxyl group of T472 participates in a hydrogen-bonding interaction with the carbonyl oxygen of the scaffold. The aromatic ring of Y116, localized 4.2 Å from the isoindoline scaffold of the compound (a π-stacking interaction), is shown. The hydroxyl group of Y196 from TM3 probably forms a weaker hydrogen-bond interaction with the methyl sulfone moiety of the inhibitor. Inhibitor binding is also supported by an edge-to-face stacking interaction between the phenyl ring of the ligand and the aromatic sidechain of W376. The residues that form the binding pocket, G373, L379 and M382 (TM6) and I399 (TM7), are also depicted. b , Docking of bitopertin (orange) into the inhibitor-binding pocket of GlyT1, comparing the binding modes of bitopertin and Cmpd1 (green). c , Comparison of Cmpd1 (benzoylisoindoline series, top) and bitopertin (benzoylpiperazine series, bottom). The scaffolds of the compounds are marked with grey dashed lines, and the three R groups are marked with orange dashed lines. Full size image We generated a stable construct with a single point mutation, I192A, that was not able to bind the inhibitor. Notably, I192 is within van der Waals distance of the W376 side chain, which is stabilized in a rotamer perpendicular to the phenyl ring of the inhibitor (Extended Data Fig. 6c–e ). W376 is the bulky hydrophobic residue of a conserved (G/A/C)ΦG motif in the unwound segment of TM6 that determines the substrate selectivity of SLC6 transporters 30 , 31 , 32 , and the AWG sequence observed in GlyT1 is indeed fitting for a small glycine substrate. I192, although not in direct interaction with the inhibitor, plays an important part in the binding of Cmpd1 by reducing the rotational freedom of the W376 side chain, which may also further restrict the binding pocket for glycine. Adding a lichenase fusion protein construct 33 (PDB ID 2CIT) to the N terminus of the GlyT1 construct, we generated and crystallized a GlyT1–Lic fusion protein in complex with Sb_GlyT1#7, and obtained a dataset at 3.9 Å resolution collected from 1,222 mounted loops containing microcrystals (Extended Data Fig. 5d ). The electrogenic reuptake of glycine via GlyT1 is coupled to the transport of two Na + and one Cl − ions. Both the GlyT1 and the GlyT1–Lic constructs were purified and crystallized in the presence of 150 mM NaCl and adopt the same inward-open, inhibitor-bound conformation. However, we observe electron density for Na + and Cl − ions only in the lower-resolution map of the GlyT1–Lic crystal structure, which may have captured a preceding state in transitions associated with ion release to the intracellular environment (Extended Data Fig. 7a, b ). Plasticity of the binding pocket Similar to reported benzoylisoindolines 25 , Cmpd1 is more than 1,000-fold selective for GlyT1 against GlyT2 (Extended Data Fig. 6f ). 
Comparing the binding-pocket residues of GlyT1 with the corresponding residues in GlyT2 provides direct clues to this selectivity. G373 in GlyT1 corresponds to S497 in GlyT2. Notably, N -methyl glycine (sarcosine) and N -ethyl glycine are substrates of GlyT1 and the S497G mutant of GlyT2, but not of wild-type GlyT2 31 , 34 , 35 , which can be explained readily by a steric clash with S497. Furthermore, GlyT1 residues M382 and I399 correspond to leucine and valine, respectively, in GlyT2; the latter two diminish the van der Waals interactions between the inhibitor and the transporter. Molecular docking places bitopertin in the binding pocket of GlyT1, with its benzoylpiperazine scaffold matching the benzoylisoindoline scaffold of Cmpd1 (Fig. 3b ). The binding mode and scaffold substituent interactions (R 1 –R 3 ) are supported by the previously reported structure–activity relationships of the benzoylpiperazine and benzoylisoindoline series 12 , 25 . The R 1 pocket (hosting a methyl sulfone moiety) is spatially constrained and prefers small, polar substituents with a hydrogen-bond-acceptor group. The pocket harbouring R 2 substituents (O–C 3 F 5 ) is mainly hydrophobic and accommodates linear and cyclic substituents up to a ring size of six. The R 3 (tetrahydropyran) pocket is fairly large and exposed to solvent, and can accommodate diverse groups with different functionalities (Fig. 3c ). We observed a higher flexibility for the tetrahydropyran moiety, as the corresponding portion of the electron density was not well resolved. Considering the size and solvent exposure of this pocket, the R 3 position is a favourable handle with which to fine-tune the physicochemical properties of the inhibitor. Superposition of glycine-bound LeuT and tryptophan-bound MhsT on inhibitor-bound GlyT1 shows that the sulfonyl moiety of the inhibitor probably mimics the carboxylate group of the glycine substrate, interacting with TM1 and TM3 (Extended Data Fig. 7c, d ). We observe that at a glycine concentration of more than 0.1 mM, selective inhibitors of GlyT1 are outcompeted, further supporting the existence of overlapping binding sites (Extended Data Fig. 7e ). Mechanism of inhibition Although GlyT1’s binding site for bitopertin and Cmpd1 appears to overlap with its glycine-binding site, these are not competitive glycine-reuptake inhibitors 4 , 24 (Extended Data Figs. 7 c–e, 8 ). It is likely that, owing to their hydrophobic nature 12 , 25 , Cmpd1, bitopertin and related chemotypes diffuse across the cell membrane and bind from the cytoplasmic side to an inward-open structure, involving unwinding of the TM5 segment and a hinge-like motion of TM1a to fit the bulky inhibitor (Fig. 4 ). Glycine, on the other hand, binds to the outward-open conformation, which is exposed to high concentrations of the driving Na + and Cl − ions in the synaptic environment. Following binding of glycine and ions, the transporter transforms to an inward-open conformation with low affinity for glycine, and this is where direct binding competition can occur, with the inhibitor having a high affinity for the site. Fig. 4: Mechanism of inhibition of GlyT1. Left, glycine (purple) binds with high affinity to the outward-open conformation of GlyT1 (homology model based on dDAT, PDB ID 4M48), which is exposed to high concentrations of the driving Na + and Cl − ions (orange and green spheres, respectively) in the synaptic environment. Right, the inhibitor Cmpd1 (green) can diffuse across the synaptic cell membrane and reach the intracellular side of GlyT1. 
Cmpd1 locks the transporter in an inward-open conformation, with the characteristic hinge-like motion of TM1a and unwinding of TM5. Cmpd1 inhibits GlyT1 by shifting the conformational equilibrium to the inward-open state. Full size image Release of ions and glycine from the inward-open state enables bitopertin, Cmpd1 and similar transport inhibitors to bind and shift the conformational equilibrium towards an inward-open conformation. As with the inhibition of inward-open SERT by ibogaine 36 , the binding sites of glycine and non-competitive inhibitors of GlyT1 explore two distinct conformational states, outward and inward oriented (Fig. 4 ). Considering the high membrane permeability measured for Cmpd1 and bitopertin 12 , 25 , it is likely that the inhibitor dissipates into locations other than the synapse. In fact, GlyT1 is also expressed in peripheral tissues, including erythrocytes where glycine plays a key part in the biosynthesis of haem. Inhibition of GlyT1 by bitopertin in these cells results in a tolerable decrease in the level of haemoglobin. However, the possible risks associated with such an effect were a prohibitory factor in phase III clinical trials of bitopertin, which was therefore administered at a lower dose than in the proof-of-concept phase II clinical studies. It also remains unclear whether administration of bitopertin reached optimal GlyT1 occupancy in trial subjects, or whether a higher placebo response in clinical trials resulted in an indistinguishable efficacy of bitopertin 10 , 37 , 38 . The sybody Sb_GlyT1#7 is also highly selective for the inhibited, inward-open conformation of GlyT1. Recent efforts to engineer antibodies that achieve effective targeting and efficient crossing of the blood–brain barrier 39 to deliver an inhibition-state-specific sybody represent an alternative approach to small-molecule inhibitors of GlyT1. The structure of human GlyT1 presented here provides a platform for the rational design of new small-molecule inhibitors and antibodies that target the glycine-reuptake transporter. Methods No statistical methods were used to predetermine sample size. The experiments were not randomized, and investigators were not blinded to allocation during experiments and outcome assessment. GlyT1 constructs The human GlyT1 complementary DNA sequence was codon optimized and synthesized by Genewiz for expression in mammalian cells, and the GlyT1–Lic sequence for insect cell expression. Both constructs contain N- and C-terminal deletions of residues 1–90 and 685–706, respectively (minimal construct GlyT1 minimal ) as well as a deletion in the extracellular loop 2 (EL2) between residues 240 and 256. To improve the thermal stability of the constructs, we introduced single point mutations to the transmembrane helices of GlyT1 minimal , and screened the constructs on the basis of their expression level, thermal stability and ability to bind inhibitor. In total, we introduced 329 single mutations into the minimal construct, of which we combined the point mutations L153A, S297A, I368A and C633A in the final construct for crystallization. In addition, we omitted the N-terminal residue 91 from the GlyT1–Lic sequence, and residues 9–281 of lichenase (PDB ID 2CIT) have been fused at the N terminus in order to increase the hydrophilic surface area of the transporter and to facilitate crystallization. 
The sequences of GlyT1 and GlyT1–Lic followed by a C-terminal enhanced green fluorescent protein (eGFP) and a decahistidine tag were cloned into a pCDNA3.1 vector for transient transfection in human embryonic kidney (HEK293) cells (Invitrogen; not authenticated and not tested for mycoplasma contamination), and a pFastBac vector for baculovirus expression in Spodoptera frugiperda ( Sf9 ) insect cells (American Type Culture Collection (ATCC), catalogue number CRL-1711; authenticated and free of mycoplasma contamination), respectively. Transporter expression and purification GlyT1 was expressed in FreeStyle 293 expression medium (Thermo Fisher Scientific) in 1-litre scale in 600 ml TubeSpin bioreactors, incubating in an orbital shaker at 37 °C, 8% CO 2 and 220 rpm in a humidified atmosphere. The cells were transfected at a density of 1 × 10 6 cells per ml and a viability of above 95%. A 25 kDa linear polyethylenimine (LPEI) was used as the transfection reagent, at a GlyT1 DNA:LPEI ratio of 1:2. The cells were typically collected 60 h post-transfection at a viability of around 70%, and stored at −80 °C until purification. GlyT1–Lic was expressed in 20–25-litre scale in 50-litre single-use WAVE bioreactors (CultiBag RM, Sartorius Stedim Biotech) at 27 °C with 18–25 rocks per minute in a 40% oxygenated Sf900-III medium (Gibco by Life Technologies). The cells were typically infected with a 0.25% volume of infection of the virus at a density of 2–3 × 10 6 cells per ml and viability of above 95%. The cells were collected 72 h post-infection at a viability of around 80%, and stored at −80 °C until purification. Purification of GlyT1 constructs has been described previously 28 . In brief, the biomass was solubilized in 50 mM Tris-HCl pH 7.5, 150 mM NaCl, 100 μM Cmpd1 ([5-methanesulfonyl-2-(2,2,3,3,3-pentafluoro-propoxy)-phenyl]-[5-tetrahydro-pyran-4-yloxy)-1,3-dihydro-isoindol-2-yl]-methanone) and 15–25 μM brain polar lipids extract (Avanti), containing either 1% (w/v) lauryl maltose neopentyl glycol (LMNG) or 1% (w/v) decyl maltose neopentyl glycol (DMNG) and 0.1% cholesteryl hemisuccinate (CHS). The protein was purified by batch purification using TALON affinity resin (GE Healthcare), then treated with HRV-3C protease (Novagen) to cleave the eGFP–His tag and Roche PNGase F (from Flavobacterium meningosepticum ) to trim glycosylation (Supplementary Fig. 2 ). The transporter was concentrated typically to 15–30 mg ml −1 in the final buffer, containing 50 mM Tris-HCl pH 7.5, 150 mM NaCl, 50 μM inhibitor and 15–25 μM brain polar lipids extract, and 0.01% LMNG (w/v) plus 0.001% CHS for GlyT1, and either 0.05% (w/v) LMNG plus 0.005% CHS or 0.01% DMNG plus 0.001% CHS for the GlyT1–Lic construct. Lipidic cubic phase crystallization Before crystallization, the concentrated GlyT1 was incubated with Sb_GlyT1#7 in a 1:1.2 molar ratio (GlyT1:sybody) and 1 mM inhibitor. The protein solution was reconstituted into mesophase using molten monoolein (Molecular Dimensions) spiked with 5% (w/w) cholesterol (Sigma) at a 2:3 ratio of protein solution:lipid, using two coupled Hamilton syringes. Crystallization trials were carried out in 96-well glass sandwich plates (VWR) by a Gryphon LCP crystallization robot or a Mosquito LCP dispensing robot in a humidified chamber, using 50–100 nl of mesophase overlaid with 800 nl of crystallization solution. The plates were incubated at 19.6 °C and inspected manually. 
Crystals appeared within 3–10 days in 0.1 M ADA pH 7, 13–25% PEG600 and 4–14% v/v (±)-1,3-butanediol, with the longest crystal dimension being 2–5 μm. For crystallization of GlyT1 with Sb_GlyT1#7, we also used 3% v/v dimethyl sulfoxide, 3% v/v glycerol, 0.2 M NDSB-201, 0.2 M NDSB-211, 0.2 M NDSB-221, 0.05% w/v 1,2,3-heptanetriol or 4% v/v 1,3-propanediol (Hampton Research) as additives. The micrometre-sized crystals were collected from the LCP matrix using MiTeGen MicroMounts, and flash frozen in liquid nitrogen. Data collection and structure determination Crystallographic data were collected on the P14 beamline operated by EMBL Hamburg at the PETRA III storage ring (DESY, Hamburg), using the 5 × 10 μm 2 (vertical × horizontal) microfocus beam, with a total photon flux of 1.3 × 10 13 photons per second at the sample position. Diffraction data were recorded on an EIGER 16M detector. In our data-collection strategy, we typically defined a region of interest of 60 × 14 μm 2 to 290 × 340 μm 2 on the loop, containing crystals oriented perpendicularly to the incoming beam. Diffraction data were collected using serial helical line scans 41 , with a sample displacement of 1 μm along the rotation axis during the acquisition of one frame, an oscillation of 0.2°, and an exposure time of 0.1 s, with 100% transmission. Dozor 42 , 43 was used for the first step of data processing to identify diffraction patterns within the large set of frames. Each diffraction image was analysed by Dozor, which determined a list of coordinates for diffraction spots and their partial intensities, and generated a diffraction heat map. Diffraction data were indexed and integrated using XDS 44 , 45 , and the resulting partial mini datasets, containing 3–20 consecutive images, were scaled with XSCALE 45 . In some cases, mini datasets with adjacent frame numbers were merged into longer datasets (more than 20 frames) manually. One rotation dataset of 20 frames with an oscillation of 1.0° is included in the GlyT1–Lic dataset. Our choice of partial mini datasets to be merged into a high-quality complete dataset was guided by an in-house script, Ctrl-d, which measured the correlation of each mini dataset to the rest of the mini datasets. An important criterion was that enough datasets be collected to provide a scaling model for robust estimation of outliers. We carried out a total of 514 two-dimensional (2D) helical scans on 409 mounted loops containing microcrystals of GlyT1, resulting in the collection of 1,365,232 diffraction patterns, of which 30,837 frames contained more than 15 diffraction spots. We indexed and integrated 229 mini datasets, of which 207, containing 3,400 frames, with a correlation of above 0.7 were scaled and merged (Extended Data Figs. 9a, c ). For GlyT1–Lic, a total of 1,733 2D helical scans were performed on 1,222 mounted loops containing microcrystals, resulting in the collection of 3,190,397 diffraction images, of which 225,037 contained 15 spots or more. We indexed and integrated 249 mini datasets, of which 213, containing 3,906 diffraction patterns, with a correlation of above 0.5 were scaled and merged (Extended Data Fig. 9b, d ). The structure of the GlyT1–sybody complex was solved by molecular replacement using modified models of MhsT (PDB ID 4US3) and SERT (PDB ID 6DZZ) (with the loops, TM12 and C-terminal tail removed from the original models), as well as an ASC-binding nanobody (PDB ID 5H8D), as separate search models in Phaser. 
To solve the structure of GlyT1–Lic, we used the lichenase fusion protein structure (PDB ID 2CIT) as the third search model. The models were refined with Buster, followed by visual examination and manual rebuilding in Coot and ISOLDE 46 , 47 , 48 . The final model of GlyT1 lacks the first 8 residues of the N terminus, residues 235–237 in EL2, residues 309–314 in TM5 and the last 20 residues of the C terminus. Of the two lichenase fusion proteins in the asymmetric unit of the GlyT1–Lic structure, only one chain (with higher B factors compared with the other protein chains) has been modelled, owing to the high flexibility of the chains and poor density of the region. The final model of GlyT1–Lic further lacks the first 13 residues of the N terminus, residues 235–239 in EL2, residues 309–315 in TM5 and the last 34 residues of the C terminus in chain A; and the first 15 residues of the N terminus, residues 235–239 in EL2, residues 309–315 in TM5 and the last 20 residues of the C terminus in chain B. Of the GlyT1 and GlyT1–Lic residues, 95.4% and 95.01%, respectively, are within the Ramachandran favoured region, with 0.15% (one residue) and 0.26% (four residues) being outliers. The final data and refinement statistics are presented in Extended Data Table 1 . Statistics on data collection were calculated using phenix.table_one 49 . Scintillation proximity assays Scintillation proximity assays (SPAs) were carried out in 96-well plates (Optiplate, Perkin Elmer) using copper His-tag YSi SPA beads (Perkin Elmer) and [ 3 H]Org24598 (80 Ci mmol −1 ). Reactions took place in assay buffer containing 50 mM Tris-HCl pH 7.5, 150 mM NaCl and 0.001% LMNG supplemented with solubilized GlyT1 cell membrane/SPA mix (0.3 mg per well) and, for competition experiments, a tenfold serial dilution series of nonlabelled inhibitor Cmpd1 (final concentration 0.001 nM to 10 μM), bitopertin (0.001 nM to 10 μM), or glycine (0.1 nM to 1 mM). Assays were incubated for 1 h at 4 °C before values were read out using a top count scintillation counter at room temperature. In thermal shift (TS) scintillation proximity assays (SPA–TS), solubilized protein was incubated for 10 min with a temperature gradient of 23–53 °C across the wells in a Techne Prime Elite thermocycler before mixing with SPA beads. FSEC–TS A fluorescence-detection size-exclusion chromatography thermostability (FSEC–TS) assay was used to evaluate the thermostability of constructs 50 . We dispensed 180-μl aliquots of solubilized GlyT1-containing cell membrane in a 4 °C cooled 96-well polymerase chain reaction (PCR) plate (Eppendorf) in triplicate. A gradient of 30–54 °C for 10 min was applied to the plate in a BioRad Dyad thermal cycler. The plate was cooled on ice and 40 μl of the samples were injected into a 300-mm Sepax column in 50 mM Tris-HCl pH 7.5, 150 mM NaCl and 0.001% LMNG; the SEC profile was monitored using the fluorescence signal from the eGFP tag. Thermofluor stability assay For Thermofluor stability assays, we used a GlyT1 minimal construct (containing N- and C-terminal deletions of residues 1–90 and 685–706, respectively), expressed in Sf9 insect cells and purified as above. Purified GlyT1 minimal was diluted in 50 mM Tris-HCl pH 7.5, 150 mM NaCl and 0.001% LMNG to a final concentration of 0.73 μM and distributed into the wells of a 96-well PCR plate on ice. The inhibitor was added to the wells at a final concentration of 10 μM, and a corresponding amount of dimethylsulfoxide (DMSO) was added to the control wells. 
FSEC–TS A fluorescence-detection size-exclusion chromatography thermostability (FSEC–TS) assay was used to evaluate the thermostability of constructs 50. We dispensed 180-μl aliquots of solubilized GlyT1-containing cell membrane, in triplicate, into a 96-well polymerase chain reaction (PCR) plate (Eppendorf) cooled to 4 °C. A gradient of 30–54 °C was applied to the plate for 10 min in a BioRad Dyad thermal cycler. The plate was cooled on ice and 40 μl of each sample was injected onto a 300-mm Sepax column in 50 mM Tris-HCl pH 7.5, 150 mM NaCl and 0.001% LMNG; the SEC profile was monitored using the fluorescence signal from the eGFP tag. Thermofluor stability assay For the Thermofluor stability tests, we adapted a published assay 51 based on the CPM (N-[4-(7-diethylamino-4-methyl-3-coumarinyl)phenyl]maleimide) dye. We used a GlyT1 minimal construct (containing N- and C-terminal deletions of residues 1–90 and 685–706, respectively), expressed in Sf9 insect cells and purified as above. Purified GlyT1 minimal was diluted in 50 mM Tris-HCl pH 7.5, 150 mM NaCl and 0.001% LMNG to a final concentration of 0.73 μM and distributed into the wells of a 96-well PCR plate on ice. The inhibitor was added to the wells at a final concentration of 10 μM, and a corresponding amount of dimethylsulfoxide (DMSO) was added to the control wells. The plate was sealed and incubated for 30 min on ice. A 1:40 (v/v) working solution of the CPM dye stock (4 mg ml−1 in DMSO) was prepared; 10 μl of this solution was added to 75 μl of protein sample in each well and mixed thoroughly. The melting profiles were recorded using a real-time PCR machine (Rotor-Gene Q, Qiagen) with the temperature ramping from 15 °C to 95 °C at a heating rate of 0.2 °C s−1. The melting temperatures (Tm) were calculated from the point of inflection, based on a fit to the Boltzmann equation.
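For reference, one common parameterization of the Boltzmann sigmoid used for such melting curves (the exact form used here is not specified, so this is an assumption) is

$$F(T) = F_{\mathrm{min}} + \frac{F_{\mathrm{max}} - F_{\mathrm{min}}}{1 + e^{(T_{m} - T)/k}}$$

where F is the CPM fluorescence signal, F_min and F_max are the pre- and post-transition baselines, and k is a slope factor; the derivative dF/dT is maximal at T = T_m, so the inflection point of the fitted curve gives the reported melting temperature.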
Molecular modelling We used the 3D conformer generator Omega (OpenEye) to generate a conformational ensemble for bitopertin. Each conformer was superimposed via ROCS (OpenEye) 52 onto the transporter-bound conformation of Cmpd1, and the overlay was optimized with respect to the similarity of the 3D shapes. The highest-scoring conformer was retained and energy-minimized within the binding pocket using MOE 53. Docking was performed using the software GOLD 54 from the Cambridge Crystallographic Data Centre (CCDC) with default settings and the standard scoring function ChemPLP. An additional energy minimization within the binding pocket was performed using the five best docking poses. Rapido was used for structure superpositions 55. Totals of 513, 414 and 393 residues were used to align the structures of SERT (PDB ID 6DZZ), LeuT (PDB ID 3TT3) and MhsT (PDB ID 4US3), respectively, on that of GlyT1. Residue ranges used for alignment were 104–224, 226–232, 259–306, 316–353, 357–388, 390–433, 438–489, 491–632 and 636–652 of GlyT1 and 83–152, 154–204, 206–212, 222–239, 242–271, 281–318, 322–353, 355–398, 404–597 and 600–616 of SERT in the SERT–GlyT1 superposition; 115, 117–211, 215–219, 262–270, 272–278, 281, 288–307, 317–352, 354–374, 376–387, 390–421, 429–489, 496–519, 522–530, 532–559 and 568–592 of GlyT1 and 21–68, 71–73, 76–80, 82–87, 90–123, 126–130, 141–156, 160, 166–185, 196–217, 222–240, 242–257, 259–270, 273–291, 293–305, 307–312, 318–372, 374–406, 408–435 and 444–468 of LeuT in the GlyT1–LeuT superposition; and 119–173, 176–210, 264–271, 318–352, 358–422, 432–487, 532–554, 568–595, 493–517 and 287–306 of GlyT1 and the corresponding residues 28–82, 88–122, 134–141, 178–212, 218–282, 284–339, 389–411, 421–448, 343–367 and 148–167 of MhsT in the GlyT1–MhsT superposition. [3H]Glycine-uptake assay We carried out glycine-uptake assays for the wild-type and crystallization constructs of GlyT1 and for untransfected cells in n = 5, n = 4 and n = 3 independent experiments, respectively, each performed with 6–11 replicate measurements of total and nonspecific uptake. Mammalian HEK293-MSR cells (Invitrogen; not authenticated and not tested for mycoplasma contamination) were plated at a density of 40% in 96-well plates and, along with untransfected cells, were transfected 48 h before the uptake assays with 0.1 μg of DNA (in pXOON plasmids) per well in complex with Ecotransfect transfection reagent (OZ Bioscience). The medium was aspirated after 48 h and the cells were washed with uptake buffer containing 10 mM HEPES-Tris pH 7.4, 150 mM NaCl, 1 mM MgSO4, 5 mM KCl and 10 mM (+)-d-glucose. The cells were incubated for 30 min at 22 °C with the uptake buffer containing no inhibitor (total uptake) or 10 μM Cmpd1 (nonspecific uptake). Glycine uptake was initiated by adding either [3H]glycine (15 Ci mmol−1) to a final concentration of 1 μM for total uptake, or [3H]glycine (15 Ci mmol−1) and Cmpd1 to final concentrations of 1 μM and 10 μM, respectively, for nonspecific uptake. The plates were incubated for 10 min or for variable time points, and radiotracer-uptake reactions were stopped by aspiration of the substrate followed by washing with 200 μl of the uptake buffer in an automated plate washer. The cells were then lysed with MicroScint 20 (PerkinElmer) and shaken for 1 h; radioactivity was measured with a TopCount NXT (Packard). Specific uptake was determined by subtracting nonspecific uptake from total uptake. Statistical significance was determined using one-sample t-tests with alpha = 0.05. [3H]Glycine-uptake-inhibition assay Glycine-uptake-inhibition assays were performed in quadruplicate according to a previously described method 24. In brief, mammalian Flp-In-CHO cells (Invitrogen; authenticated and free of mycoplasma contamination) were transfected with human and mouse GlyT1 and human GlyT2 cDNA and were plated at a density of 40,000 cells per well in complete F-12 medium 24 h before the uptake assays. The medium was aspirated the next day and the cells were washed twice with uptake buffer containing 10 mM HEPES-Tris pH 7.4, 150 mM NaCl, 1 mM CaCl2, 2.5 mM KCl, 2.5 mM MgSO4 and 10 mM (+)-d-glucose. The cells were incubated for 20 min at 22 °C with no inhibitor, 10 mM nonradioactive glycine, or a concentration range of the inhibitor to calculate the IC50 value. A solution containing 25 μM nonradioactive glycine and 60 nM [3H]glycine (11–16 Ci mmol−1) (hGlyT1 and mGlyT1) or 200 nM [3H]glycine (hGlyT2) was then added. Nonspecific uptake was determined with 10 μM Org24598 (an hGlyT1 and mGlyT1 inhibitor) 27 or 5 μM Org25543 (an hGlyT2 inhibitor) 56. The plates were incubated for 15 min (hGlyT1) or 30 min (mGlyT1 and hGlyT2) with gentle shaking, and reactions were stopped by aspiration of the mixture and washing three times with ice-cold uptake buffer. The cells were lysed and shaken for 3 h; radioactivity was measured with a scintillation counter. To evaluate the mode of inhibition of Cmpd1, we carried out glycine-uptake assays for wild-type GlyT1 as described in the section ‘[3H]Glycine-uptake assay’ above. These assays were performed in four independent experiments, each with two replicate measurements for total uptake and one replicate measurement for nonspecific uptake. Experiments to generate all four Km−Vmax curves for the inhibitor were performed simultaneously on the same 96-well plate. Glycine uptake was initiated by adding the specified concentrations of [3H]glycine (15 Ci mmol−1) mixed with unlabelled glycine in a 1:1,000 ratio (10 μM, 25 μM, 50 μM, 100 μM, 200 μM, 350 μM, 500 μM and 700 μM), together with 0 nM, 60 nM, 240 nM or 960 nM Cmpd1. The plate was incubated for 10 min, and radiotracer-uptake reactions were stopped by aspiration of the substrate followed by washing with 200 μl of the uptake buffer in an automated plate washer. The cells were then lysed with MicroScint 20 (PerkinElmer) and shaken for 1 h; radioactivity was measured with a TopCount NXT (Packard). Specific uptake was determined by subtracting nonspecific uptake from total uptake. Statistical significance was determined using one-sample t-tests with alpha = 0.05. Data were fitted to Michaelis–Menten kinetics using nonlinear regression and transformed to Eadie–Hofstee plots with subsequent linear regression analysis using GraphPad Prism 9.
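This kinetic analysis follows standard enzyme kinetics: for a competitive inhibitor at concentration [I], the Michaelis–Menten equation becomes

$$v = \frac{V_{\mathrm{max}}[S]}{K_{m}^{\mathrm{app}} + [S]}, \qquad K_{m}^{\mathrm{app}} = K_{m}\left(1 + \frac{[I]}{K_{i}}\right)$$

and the Eadie–Hofstee transformation rewrites this as the straight line

$$v = V_{\mathrm{max}} - K_{m}^{\mathrm{app}}\,\frac{v}{[S]}$$

so that, in plots of v against v/[S] for the four Cmpd1 concentrations, a common v-axis intercept (unchanged V_max) together with slopes that steepen as [I] increases is the classic signature of competitive inhibition.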
[3H]Org24598-binding assay [3H]Org24598-binding experiments were performed in quadruplicate as described 24. Membranes from Chinese hamster ovary (CHO) cells expressing hGlyT1 and membranes extracted from mouse forebrains (expressing mGlyT1) were used for the binding assays. Saturation isotherms were determined by adding [3H]Org24598 to mouse forebrain membranes (40 μg per well) and cell membranes (10 μg per well) in a total volume of 500 μl for 3 h at room temperature. For competition experiments, membranes were incubated with 3 nM [3H]Org24598 and ten concentrations of Cmpd1 for 1 h at room temperature. Reactions were terminated by filtering the mixture onto a UniFilter with bonded GF/C filters (PerkinElmer) that had been presoaked for 1 h in binding buffer containing 50 mM sodium citrate pH 6.1, followed by washing three times with 1 ml of the same cold binding buffer. Filtered radioactivity was counted on a scintillation counter. Nonspecific binding was measured in the presence of 10 μM Org24598. Figure preparation Figures showing protein structures were prepared using PyMOL 2.3.3 (Incentive Product, Schrödinger, LLC). Sequences were aligned using Clustal Omega 57 and the relevant figure was prepared using BOXSHADE 3.2. Binding and uptake data were analysed, and the corresponding figures prepared, using GraphPad Prism 9. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability Coordinates and structure factors for the structures of GlyT1 at 3.4 Å and 3.9 Å resolution have been deposited in the Protein Data Bank under accession codes 6ZBV and 6ZPL, respectively. The executable Ctrl-d is available via webapps.embl-hamburg.de.
Glycine can stimulate or inhibit neurons in the brain, thereby controlling complex functions. By unraveling the three-dimensional structure of the glycine transporter, researchers have now come a big step closer to understanding the regulation of glycine in the brain. These results, published in Nature, open up opportunities to find effective drugs that inhibit GlyT1 function, with major implications for the treatment of schizophrenia and other mental disorders. Glycine is the smallest amino acid and a building block of proteins, but it is also a critical neurotransmitter that can both stimulate and inhibit neurons in the brain and thereby control complex brain functions. Termination of a glycine signal is mediated by glycine transporters that take up and clear glycine from the synapses between neurons. The glycine transporter GlyT1 is the main regulator of neurotransmitter glycine levels in the brain, and it is also important for blood cells, for example, where glycine is required for the synthesis of heme. The N-methyl-D-aspartate (NMDA) receptor is activated by glycine, and its poor performance is implicated in schizophrenia. Over the past twenty years, many pharmaceutical companies and academic research laboratories have therefore focused on modulating glycinergic signaling and delaying glycine reuptake as a way of activating the NMDA receptor, in search of a cure for schizophrenia and other psychiatric disorders. Indeed, several potent and selective GlyT1 inhibitors achieve antipsychotic and pro-cognitive effects that alleviate many symptoms of schizophrenia, and these have advanced into clinical trials. However, a successful drug candidate has yet to emerge, and GlyT1 inhibition in blood cells remains a concern for side effects. Structural insight into the binding of inhibitors to GlyT1 would therefore help in devising new strategies for drug design. To gain better knowledge of the three-dimensional structure and inhibition mechanisms of the GlyT1 transporter, researchers from the companies Roche and Linkster, and from the European Molecular Biology Laboratory (EMBL) Hamburg, the University of Zurich and Aarhus University, collaborated on investigating one of the most advanced GlyT1 inhibitors. Using a synthetic single-domain antibody (Linkster Therapeutics' sybody) against GlyT1, the research team managed to grow microcrystals of the inhibited GlyT1 complex. By employing a serial synchrotron crystallography (SSX) approach, the team led by Assistant Professor Azadeh Shahsavar and Professor Poul Nissen from the Department of Molecular Biology and Genetics/DANDRITE, Aarhus University, determined the structure of human GlyT1 using X-ray diffraction data from hundreds of microcrystals. The SSX method is particularly well suited to new, powerful X-ray sources and opens up new approaches to, among other things, the development of drugs for various purposes. The structure is reported in the leading scientific journal Nature and also unveils a new mechanism of inhibition in neurotransmitter transporters in general. Inhibition mechanisms have previously been uncovered for, for example, the serotonin transporter (which has many similarities to GlyT1) and its antidepressant drugs, but the inhibition mechanism now found for GlyT1 is quite different.
The structure provides background knowledge for the further development of small molecules and antibodies as selective inhibitors targeting GlyT1, and possibly also new ideas for the development of inhibitors of other neurotransmitter transporters that could be used to treat other mental disorders. Azadeh Shahsavar's team continues to study GlyT1 and will investigate further aspects of its function and inhibition, as well as the effects of GlyT1 inhibitors in the body.
10.1038/s41586-021-03274-z
Nano
New study reveals design clues for silver-based superatomic molecules
Sayuri Miyajima et al, Key factors for connecting silver-based icosahedral superatoms by vertex sharing, Communications Chemistry (2023). DOI: 10.1038/s42004-023-00854-0
https://dx.doi.org/10.1038/s42004-023-00854-0
https://phys.org/news/2023-04-reveals-clues-silver-based-superatomic-molecules.html
Abstract Metal nanoclusters composed of noble elements such as gold (Au) or silver (Ag) are regarded as superatoms. In recent years, the understanding of the materials composed of superatoms, which are often called superatomic molecules, has gradually progressed for Au-based materials. However, there is still little information on Ag-based superatomic molecules. In the present study, we synthesise two di-superatomic molecules with Ag as the main constituent element and reveal the three essential conditions for the formation and isolation of a superatomic molecule comprising two Ag13−xMx structures (M = Ag or other metal; x = number of M) connected by vertex sharing. The effects of the central atom and the type of bridging halogen on the electronic structure of the resulting superatomic molecule are also clarified in detail. These findings are expected to provide clear design guidelines for the creation of superatomic molecules with various properties and functions. Introduction Metal nanoclusters (NCs) 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 composed of noble metal elements such as gold (Au) and silver (Ag) are stabilised when their total number of valence electrons corresponds to a closed-shell electronic structure, as in conventional atoms 15, 16. Such metal NCs are regarded as superatoms (artificial atoms). If superatoms are used to assemble materials, it might be possible to create materials with physicochemical properties and functions that are different from those of conventional materials 17. Regarding materials composed of superatoms (often called superatomic molecules 18, 19), there have been many reports of Au-based examples since the 1980s, which Teo and Zhang called clusters of clusters 20. Subsequent work by groups such as those of Tsukuda 21, Nobusada 22, Jin 23 and Zhu 24 has gradually improved our understanding of the types of superatomic molecules that can be produced and the electronic structures that can be created 25. Ag NCs have multiple properties and functions that are superior to those of Au NCs, including photoluminescence (PL) with high quantum yield 26 and selective catalytic activity for carbon dioxide reduction 27. However, there are only a limited number of reports, including one by the authors 28, on Ag-based superatomic molecules 29, 30, 31, 32. To construct substances from superatomic molecules and create new materials, it is essential to gain a deeper understanding of the types of superatomic molecules that can be produced and the electronic structures that can be created, even for Ag-based superatomic molecules. In the present study, we focus on Ag-based 13-atom NCs (Ag13−xMx; M = Ag or other metal; x = number of M) as superatoms, and aim to elucidate the key factors in the formation of di-superatomic molecules by vertex sharing 33 and the electronic structure of the obtained di-superatomic molecules. Platinum (Pt) or palladium (Pd) was used as the element that substitutes part of the Ag, and chloride (Cl) or bromide (Br) was used as the bridging ligand to support the connection of the two 13-atom NCs. To achieve this, in addition to two previously reported di-superatomic molecules ([Ag23Pt2(PPh3)10Cl7]0 (1); Fig. 1a; PPh3 = triphenylphosphine) 31 and ([Ag23Pd2(PPh3)10Cl7]0 (2); Fig. 1b) 28, we synthesised two new superatomic molecules with Br as the bridging ligand ([Ag23Pt2(PPh3)10Br7]0 (3) and [Ag23Pd2(PPh3)10Br7]0 (4); Table 1).
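The 16-electron counts referred to here and in Table 1 follow the usual superatom bookkeeping: each Ag contributes one 5s electron, the group-10 dopants Pt and Pd are counted as zero-electron donors, the neutral phosphines add nothing, and one electron is subtracted per halide X and per unit of positive charge z. For the neutral clusters 1−4, for example,

$$n^{*} = 23 \times 1 + 2 \times 0 - 7 - 0 = 16$$

and the same count for [Ag25(PPh3)10X7]2+ gives 25 − 7 − 2 = 16, that is, two closed-shell 1S2 1P6 (8-electron) superatoms sharing the total.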
We investigated their geometric/electronic structures and their stabilities with regard to degradation in solution. Consequently, we confirmed that 3 and 4 both have a geometric/electronic structure that qualifies them as superatomic molecules. Regarding the electronic structure, we further observed that (1) there is a peak attributable to the metal core at approximately 600 nm in the optical absorption spectra of all the superatomic molecules; (2) such peaks shift to longer wavelengths when M is changed from Pt to Pd; (3) all of 1−4 exhibit PL in the visible-to-near-infrared (NIR) region; and (4) the PL peaks shift to longer wavelengths when M is changed from Pt to Pd. With respect to the stability of the superatomic molecules described by [Ag23M2(PPh3)10X7]z (M = Ag, Pd or Pt; X = Cl or Br; z = 2+ or 0), we found that the stability decreases in the order 1 > 3 > 2 > 4 (which can be synthesised) > [Ag25(PPh3)10X7]2+ (X = Cl or Br; which are not so stable in solution). Based on these results and reports on related superatomic molecules, we concluded that the following three conditions are essential for the formation and isolation of a superatomic molecule consisting of two Ag13−xMx structures (M = Ag or other metal) connected by vertex sharing ([Ag25−xMx(PR3)10Xy]z; PR3 = phosphine; y = number of X): (1) a halogen ligand of a size that can maintain a moderate distance between the two Ag13−xMx structures is used as the bridging ligand; (2) an icosahedral core that is stronger than Ag13 is formed by heteroatom substitution; and (3) [Ag25−xMx(PR3)10Xy]z comprises substituted heteroatoms and bridging halogens such that the total number of valence electrons is 16 in the cationic or neutral state. Fig. 1: Comparison of the geometric structures. a, 1. b, 2. c, 3. d, 4. The geometric structures of 1 and 2 are reproduced from refs. 28 and 31, respectively (grey = Ag; orange = Pt; blue = Pd; green = Cl; dark grey = Br; magenta = P). The positions of the Pd atoms are the predicted positions based on DFT calculations. Table 1 NC number, chemical composition, number of bridging halogens, number of total valence electrons, and references to literature on Ag-based di-superatomic molecules described in the present paper. Results and discussion Synthesis and geometric structure An NC mixture containing 3 was first prepared by adding a methanol solution of sodium borohydride (NaBH4) to a methanol solution containing silver nitrate (AgNO3), PtBr2, PPh3 and NaBr in the dark. The by-products were then removed by washing with the solvent, and the product was crystallised to obtain high-purity 3 (Fig. S1a) 28. Electrospray ionisation-mass spectrometry (ESI-MS) of the product showed that 3 has a chemical composition of [Ag23Pt2(PPh3)10Br7]0 (Fig. S2). X-ray photoelectron spectroscopy (XPS; Fig. S3) confirmed the presence of Pt in 3. We also obtained 4 as single crystals using a process similar to that used to synthesise 3, except that PdBr2 was used instead of PtBr2 (Fig. S1b). XPS analysis (Ag : Pd = 23 : 1.5; Fig. S4) confirmed the presence of Pd in 4. Figure 1c shows the geometric structure of 3 determined by single-crystal X-ray diffraction (SC-XRD) analysis (Fig. S5, Table S1 and Supplementary Data 1, 2). We found that 3 has a geometric structure in which two icosahedral Ag12Pt structures are connected by vertex sharing.
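The composition itself reflects the vertex-sharing geometry: two 13-atom icosahedra sharing a single vertex contain

$$2 \times 13 - 1 = 25$$

metal atoms in total, and with one central heteroatom M per icosahedron the remaining 23 sites are Ag, giving the Ag23M2 stoichiometry of 1−4.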
Pt was located at the central position in each icosahedral Ag12Pt structure, as often seen in the literature 34, 35. This structure is similar to the previously reported geometric structure of 1 (Fig. 1a) 31. The SC-XRD analysis of 3 did not detect any counter ions (Fig. S5), again supporting the interpretation that 3 was isolated as a neutral NC ([Ag23Pt2(PPh3)10Br7]0). These results demonstrate that both 1 and 3 have 16 valence electrons (Table 1) 16. Therefore, the two Ag12Pt structures in 3 are described as [Ag12Pt]4+, indicating that both have a closed-shell electronic structure that satisfies the 1S2 1P6 superatom orbitals (Fig. S6) 15, 19. We concluded from these results that 3 is an NC that can be regarded as a di-superatomic molecule, similar to 1. Figure 1d shows the geometric structure of 4 (Fig. S7, Table S1 and Supplementary Data 3, 4). As shown, 4 has a geometric structure in which two icosahedral Ag12Pd structures are connected by vertex sharing, which is similar to the previously reported geometric structure of 2 (Fig. 1b) 28. In addition, 4 was also isolated as a neutral molecule (Fig. S7), indicating that each Ag12Pd structure in 4 has a closed-shell electronic structure that satisfies 1S2 1P6 (Fig. S6) 15, 19. We concluded from these results that 4 is also an NC that can be considered a di-superatomic molecule. Unfortunately, it is difficult to determine the Pd positions in 4 by SC-XRD alone because Pd (46Pd) and Ag (47Ag) have similar numbers of electrons. However, Pd (1.920 J m−2 for Pd(111)) 36 has a higher surface energy than Ag (1.172 J m−2 for Ag(111)) 36, and Pd is generally located at the centre of the icosahedral structure in Ag12Pd 28, 34, 35, 37, 38. We performed density functional theory (DFT) calculations for [Ag23Pd2(PPh3)10Br7]0 with different Pd positions, using the Perdew–Burke–Ernzerhof (PBE) functional, to confirm that Pd is located at the centre of the two icosahedral structures in 4, as in 2. The results showed that [Ag23Pd2(PPh3)10Br7]0 is most stable with Pd at the icosahedral centre (i), followed by the icosahedral surface (ii) and the shared vertex (iii) (Fig. S8). Based on these results, we concluded that the two Pd atoms are located at the centres of the icosahedral structures in 4 (Fig. 1d), as in 2. Overall, 1−4 thus have similar geometric structures. However, a detailed look at their geometric structures revealed some differences between 1 and 2, which use Cl as the bridging halogen, and 3 and 4, which use Br. The most striking difference is that there is a twist between the two Ag12M structures (M = Pt or Pd) in 3 (dihedral angles θ = 9.02−11.85°) and 4 (θ = 9.90−12.97°), unlike in 1 and 2 (both θ = 0°) (Fig. 2a–d and S9). Br− (1.95 Å) 39 has a larger ionic radius than Cl− (1.81 Å) 39, and the Ag−Br bond (2.619−2.659 Å for 3) is longer than the Ag−Cl bond (2.444−2.532 Å for 1) (Fig. S10). Therefore, if there were no twist between the two Ag12M structures (M = Pt or Pd) in 3 and 4, the distance between the two Ag12M structures in those molecules would be longer than in 1 and 2 (Fig. S11). This would induce: (1) an increase in the distance between the shared Ag and the Ag atoms bonded to it; and (2) a structural distortion of the individual Ag12M cores (Fig. S11), ultimately leading to instability of the individual Ag12M structures (M = Pt or Pd).
For 3 and 4, it can be considered that the formation of such an unstable geometric structure is suppressed by the twisting between the two Ag12M structures (M = Pt or Pd) (Fig. 2e and S12). Fig. 2: Structural analysis of the twist between the two Ag12M structures (M = Pt or Pd). a–d View along the long axis for 1, 2, 3 and 4, respectively, showing the twist between the two Ag12M structures (M = Pt or Pd) in the cores of 3 and 4 (grey = Ag; orange = Pt; blue = Pd; green = Cl; dark grey = Br; magenta = P). The geometric structures of 1 and 2 are reproduced from refs. 28 and 31, respectively. In a−d, θave indicates the average dihedral angle between the two Ag12M structures. e Comparison of the Ag−Ag bond length between the joining Ag and the neighbouring Ag (green line), showing that the bond lengths are quite similar in 1−4. Regarding superatomic molecules using Br as the bridging halogen, a similar twist between the two icosahedral metal cores was not observed in [Au23Pd2(PPh3)10Br7]0 (5) 40, which uses Au as the base element, as reported by Zhu and colleagues (Fig. S13a). The Au−Br bond (2.569−2.583 Å for 5) is shorter than the Ag−Br bond (2.619−2.659 Å for 3) (Fig. S13b). Therefore, there may be no need for a twist between the two Au12Pd structures in 5 to preserve the individual icosahedral structures (Fig. S13c). Namely, in [Au23Pd2(PPh3)10X7]0 (X = halogen), the distance between the two Au12Pd structures is estimated to be relatively moderate when Br is used as the bridging halogen. Indeed, to the best of our knowledge, there have been no reports of the isolation of [Au23M2(PPh3)10Cl7]0 (M = Pt or Pd) using Cl, which has a smaller ionic radius than Br, as the bridging halogen. It is assumed that [Au23M2(PPh3)10Cl7]0 (M = Pt or Pd) is difficult to isolate because the distance between the two Au12M structures would be too small, placing excessive structural stress on the individual Au12M structures. A second notable difference is that the PPh3 ligands are located slightly further from the long axis of the superatomic molecule in 3 and 4 than in 1 and 2 (Fig. 3). Because Br has a larger ionic radius than Cl, there might be slight steric hindrance between the terminal Br and PPh3 in 3 and 4. This seems to produce less variation in the angle between the long axis of the superatomic molecule and PPh3 (Fig. S14a), and in the length of the Ag−P (P = phosphorus) bond (Fig. S14b), in 3 and 4 than in 1 and 2. Note that 3 has more variation in Ag−Ag bond length than 1 (Fig. S15). Fig. 3: Structural analysis of the ligand positions. a–d View along the long axis for 1, 2, 3 and 4, respectively, showing the average distance between the central long axis and the positions of the P atoms (Dave) (grey = Ag; orange = Pt; blue = Pd; green = Cl; dark grey = Br; magenta = P). The geometric structures of 1 and 2 are reproduced from refs. 28 and 31, respectively. Therefore, the type of bridging halogen induces slight differences in the geometric structures of the obtained superatomic molecules. However, in all of 1−4, the two Ag12M structures (M = Pt or Pd) are bridged by five halogens, a feature common to all of their geometric structures. Meanwhile, in previous work on the connection of Au7Ag6, Teo et al.
successfully synthesised [Au13Ag12(P(p-Tol)3)10Cl7](SbF6)2 (P(p-Tol)3 = tri(p-tolyl)phosphine; SbF6− = hexafluoroantimonate; 6) 41, bridged by five Cl atoms, and [Au13Ag12(PPh3)10Cl8](SbF6) (7) 42, bridged by six Cl atoms. In addition, when Br was used as the bridging halogen, they successfully synthesised [Au13Ag12(PPh3)10Br8](SbF6) (8) 43, [Au12Ag13(P(p-Tol)3)10Br8](PF6) (PF6− = hexafluorophosphate; 9) 44 and [Au13Ag12(PPh3)10Br8]Br (10) 45, bridged by six Br atoms, and even [Au13Ag12(PMePh2)10Br9]0 (PMePh2 = methyldiphenylphosphine; 11) 46, bridged by seven Br atoms. Although 7−11 are connected by a different number of bridging halogens from 1−4 and 6 (Fig. S16 and Table 1), the total number of valence electrons is estimated to be 16 in all cases 16. Therefore, 7−11 are also considered to be di-superatomic molecules with two Au7Ag6 or Au6Ag7 structures connected by vertex sharing. However, in the present study, the formation of superatomic molecules bridging two Ag12M structures (M = Pt or Pd) with six or seven halogens (X = Cl or Br), such as [Ag23M2(PPh3)10X8]− (total number of valence electrons = 16) or [Ag23M2(PPh3)10X9]2− (total number of valence electrons = 16), was not observed. These anions would be readily oxidised under atmospheric conditions 47, 48, changing the total number of valence electrons of [Ag23M2(PPh3)10X8]0 and [Ag23M2(PPh3)10X9]0 from 16 (ref. 16) to 15 or 14, respectively. In these cases, the individual Ag12M structures do not necessarily have closed-shell electronic structures. This explains why [Ag23M2(PPh3)10X8]− and [Ag23M2(PPh3)10X9]2− were not produced in our study. Similarly, Teo et al. reported only the formation of [Au11Ag12Pt2(PPh3)10Cl7]0 (12), bridged by five Cl atoms, for a superatomic molecule with Pt at the centre of the metal core 49. Kappen et al. also reported only [Au10Ag13Pt2(PPh3)10Cl7]0 (13) 50, bridged by five Cl atoms, for superatomic molecules containing Pt at the centre of the metal core. It is assumed that [Au11Ag12Pt2(PPh3)10Cl7]− and [Au10Ag13Pt2(PPh3)10Cl6]− could not be isolated in their studies for the same reason. Electronic structure Figure 4a–d shows the optical absorption spectra of dichloromethane solutions of 1−4, respectively. The optical absorption spectra are generally similar in shape, but the peak structure shifts to longer wavelengths when the central atom is changed from Pt to Pd. Fig. 4: Optical absorption spectra and analyses. a–d Optical absorption spectra of 1, 2, 3 and 4, respectively. e, f Density of states of 3′ and 4′, respectively. In e and f, a, b, a′ and b′ correspond to the peaks labelled as such in c and d (red = ligand; green = Ag23M2(sp); blue = Ag23M2(d)). Both 1 and 2 belong to the D5h point group 28, 31, 32. Based on the calculated electronic structures of [Ag23Pt2(PPh3)10Cl7]0 (1′) and [Ag23Pd2(PPh3)10Cl7]0 (2′), the peak of the first absorption band on the longer-wavelength side is attributed to an allowed transition between orbitals originating from the core (a2ʹʹ→ a1ʹ) (Fig. 5) 28, 31, 32.
The second peak, which appears on the shorter-wavelength side of the absorption spectrum, is attributed to a charge-transfer transition from the a2ʹʹ orbital originating from the core to an orbital with charge distribution around PPh3 (Table S2). With regard to the change in peak position caused by the difference in the central atom, our previous studies have shown that changing the central atom from Pt to Pd causes a red shift in the peak structure owing to a decrease in the energy of the orbitals near the lowest unoccupied molecular orbital (LUMO) 28. Fig. 5: Orbital energies and Kohn–Sham orbital diagrams related to the first peak in the optical absorption spectrum. a, b, c and d are the Kohn–Sham orbital diagrams of 1′, 2′, 3′ and 4′, respectively. The transition dipole moment from the HOMO to the LUMO is zero, which is why the HOMO−LUMO transition is forbidden for 1−4. We also performed DFT calculations for 3 and 4 in the present study. The geometric structures ([Ag23Pt2(PPh3)10Br7]0 (3′; Fig. S17) and [Ag23Pd2(PPh3)10Br7]0 (4′; Fig. S18)) and the electronic structures (Fig. S19) calculated using the PBE functional both reproduced the experimental results well. Both 3′ and 4′ belong to the D5 point group. Based on the calculated electronic structures of 3′ and 4′, the peak of the first absorption band on the longer-wavelength side is attributed to an allowed transition between orbitals originating from the core (a2 → a1) (Fig. 5). The second peak, which appears on the shorter-wavelength side, is attributed to a charge-transfer transition from the a2 orbital originating from the core to an orbital with charge distribution around PPh3 (Table S2). There was no significant difference in the energy of the highest occupied molecular orbital (HOMO) between 3′ and 4′, and the HOMOs were similar in energy to those of 1′ and 2′. However, the orbital (a1) energy on the LUMO side was much lower in 4′ than in 3′ (Figs. 4e, f, 5c, d). The red shift in the peak structure caused by changing the central atom from Pt to Pd was found to be due to these factors, as in the case where Cl is used as the bridging halogen. We also investigated the possibility that the red shift in the peak structure is caused by the twist resulting from the use of Br as the bridging halogen. Specifically, we calculated the optical absorption spectra of [Ag23M2(PPh3)10Br7]0 (M = Pt or Pd) without distortion (Figs. S20, S21). The results demonstrated that the optical absorption spectrum changes only slightly with the twist, supporting the above interpretation that the main reason for the red shift in the peak structure is the change of the central atom from Pt to Pd. For 1−4, it is difficult to estimate the HOMO−LUMO gap of each superatomic molecule from its optical absorption spectrum because the HOMO−LUMO transition is forbidden (Fig. 5). Therefore, we estimated the HOMO−LUMO gap of each di-superatomic molecule from the electronic structures of 1′−4′ obtained by DFT calculations. As a result, 1′−4′ were estimated to have HOMO−LUMO gaps of 1.66, 1.55, 1.66 and 1.52 eV, respectively (Table S3). These results indicate that changing the central atom from Pt to Pd also induces a decrease in the HOMO−LUMO gap.
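As a quick consistency check, the photon energy–wavelength relation

$$\lambda\,[\mathrm{nm}] \approx \frac{1240}{E\,[\mathrm{eV}]}$$

puts a 1.66 eV gap at about 747 nm; because the HOMO−LUMO transition itself is forbidden (Fig. 5), the first observed absorption band lies at a shorter wavelength (approximately 600 nm, corresponding to roughly 2.1 eV), above the computed gaps.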
Although we also attempted electrochemical experiments 11 to determine the HOMO−LUMO gap experimentally, we unfortunately could not obtain a reliable voltammogram because a sufficient amount of the obtained crystals was not available. Regarding the electronic structure, we also measured the PL spectra of 1−4 (Fig. 6). The results demonstrated that (1) 1−4 exhibit PL in the visible-to-NIR region and (2) the PL peak positions of 2 and 4 are red-shifted compared with those of 1 and 3. This trend is consistent with that of the optical absorption, implying that the HOMO and LUMO regions (Fig. 5) are involved in the PL of 1−4. Fig. 6: PL spectra obtained for toluene solutions of 1−4 at 25 °C. The toluene solutions of 1−4 were excited with light of 451, 487, 459 and 496 nm, respectively (orange = 1, dark blue = 2, yellow = 3, purple = 4). In this figure, the vertical axis is normalised to eliminate the effect of differences in the concentrations of 1−4 on the PL intensity. Stability We investigated the stabilities of 1−4 with regard to degradation in toluene or dichloromethane solution by optical absorption spectroscopy. Ag NCs generally have low photostability 31. Furthermore, in the present study, we also dealt with less stable and less easily formed superatomic molecules in order to clarify the conditions necessary for the formation of a superatomic molecule composed of Ag13−xMx. Therefore, the solutions were kept in the dark during the stability measurements. None of the superatomic molecules was very stable in dichloromethane solution, and the shapes of their spectra changed dramatically over time (Fig. S22). Figure 7a–d shows the time-dependent changes of the optical absorption spectra of toluene solutions of 1−4, respectively. As shown in Fig. 7a, 1 was quite stable in toluene solution for three days. In contrast, the shapes of the spectra of 2−4 changed gradually over time (Fig. 7e). We found that 2 and 4 were particularly unstable. Figure 7 demonstrates that the stability of 1−4 decreases in the order 1 > 3 > 2 > 4. In the present study, we also attempted synthesis using only AgNO3 as the metal salt. The results demonstrated that some products formed immediately after NaBH4 was added, even when a precursor salt of the heteroatoms (H2PtCl6, Pd(PPh3)Cl6, PtBr2 or PdBr2) was not included in the solution: the solution turned yellow immediately after the addition of NaBH4. However, the solution soon became colourless and a black precipitate was obtained. These results indicate that the stability of [Ag25(PPh3)10X7]2+ (X = Cl or Br) is quite low, even if such clusters can be formed in solution. Similar results were reported in ref. 31. Taking all the results mentioned above into account, the superatomic molecules described by [Ag23M2(PPh3)10X7]z (M = Ag, Pd or Pt; X = Cl or Br; z = 2+ or 0) are interpreted to decrease in stability in the order [Ag23Pt2(PPh3)10Cl7]0 (1) > [Ag23Pt2(PPh3)10Br7]0 (3) > [Ag23Pd2(PPh3)10Cl7]0 (2) > [Ag23Pd2(PPh3)10Br7]0 (4) (which are experimentally synthesisable) > [Ag25(PPh3)10X7]2+ (X = Cl or Br; which are not so stable in solution). Fig. 7: Time dependence of the optical absorption spectra. a–d Time-dependent optical absorption spectra of 1, 2, 3 and 4, respectively.
e Time dependence of the absorbance of the first and second peaks in the optical absorption spectra of 1 (446 and 560 nm), 2 (489 and 630 nm), 3 (452 and 560 nm) and 4 (497 and 630 nm) (orange = 1, dark blue = 2, yellow = 3, purple = 4). In these spectra, the peak positions are slightly shifted compared with those in Fig. 4, probably owing to the difference in solvent (dichloromethane for Fig. 4 vs. toluene for Fig. 7). Key factors for formation and isolation The substitution of the central atom of each icosahedral core by Pt or Pd is very effective for forming a superatomic molecule consisting of two Ag13−xMx structures (M = Ag or other metal) connected by vertex sharing. According to the DFT calculations of Baraiya et al., Pt or Pd substitution of the central atom of Ag13 leads to an increase in the average binding energy of the NCs 37. The fact that it was possible to generate 1−4, whereas [Ag25(PPh3)10X7]2+ (X = Cl or Br) was difficult to isolate, seems to be largely related to the individual icosahedral cores of 1−4 being stronger than those of [Ag25(PPh3)10X7]2+ (X = Cl or Br), owing to this increase in average binding energy. The fact that [Ag23Pt2(PPh3)10X7]0 is more stable than [Ag23Pd2(PPh3)10X7]0 (X = Cl or Br) can also be explained by the difference in average binding energy. Regarding these heteroatomic substitutions of the central atom, Kang et al. pointed out that (1) they also affect the charge state of Ag in the L4 and L6 layers in Fig. S23a; and (2) without central-atom substitution, [Ag25(PPh3)10Cl7]2+ would not form stably owing to high charge repulsion between the L4 and L6 layers 40. We therefore estimated the natural charges 51, 52 of Ag in the L4 and L6 layers for 1′−4′, [Ag25(PPh3)10Cl7]2+ (5′; Table 1) and [Ag25(PPh3)10Br7]2+ (6′). The results showed no strong correlation between the charge repulsion in the L4−L6 layers and the stability of 1−4 and [Ag25(PPh3)10X7]2+ (X = Cl or Br): the magnitude of the charge repulsion was estimated to be in the order 1′ > 5′ > 2′ = 3′ > 4′ = 6′ (Fig. S23b), and this order is not consistent with that of the stability (1 > 3 > 2 > 4). These results suggest that, although central-atom substitution certainly affects the charge state of Ag in the L4 and L6 layers, such charge repulsion is not the main reason why [Ag25(PPh3)10X7]2+ (X = Cl or Br) is difficult to isolate. The present study also revealed that a superatomic molecule consisting of two Ag13−xMx structures can be formed even when Br is used as the bridging halogen instead of Cl. As mentioned above, in 3 and 4, the twist between the two Ag12M structures (Fig. 2c, d) prevents each Ag12M structure from becoming unstable. Based on the results obtained in the present study, the type of bridging halogen appears to have little effect on whether superatomic molecules can be formed, as long as the bridging halogen is large enough to maintain a moderate distance between the two Ag12M structures. It should be noted that the type of bridging halogen has a slight effect on the binding energy of the Ag−X bond. That 1 is slightly more stable than 3, and 2 slightly more stable than 4, may be related to the Ag−Cl bond (314 kJ mol−1) being stronger than the Ag−Br bond (293 kJ mol−1) 53.
In addition, the type of bridging halogen also has a slight effect on the variation in Ag−Ag bond length within each Ag12M structure (Fig. S15). These results suggest that the type of bridging halogen affects the stability through the binding energy of the Ag−halogen bond and the variation in Ag−Ag bond length within each Ag12M structure. In [Ag23M2(PPh3)10X7]z, in addition to the bridging sites, halogens are also coordinated at both ends of the long axis of the superatomic molecule. Therefore, the type of halogen also affects the length of the Ag−P bond (Fig. 3); consequently, for example, some of the Ag−P bonds are longer in 1 (Fig. S14b). This means that some Ag−P bonds are more easily dissociated in [Ag23Pt2(PPh3)10Cl7]0. However, as mentioned above, 1 is the most stable against degradation among 1−4. These results indicate that the slight difference in Ag−P bond length caused by the difference in the halogen species at both ends of the superatomic molecule does not determine the stability order of [Ag23M2(PPh3)10X7]z, although the detachment of PPh3 also seems to be involved in the degradation of the superatomic molecules, as shown in Fig. 7 (Fig. S24). Finally, in the present study, we were only able to confirm the formation of superatomic molecules with five bridging halogens. This is considered to be largely because, when the number of bridging halogens (y − 2) is higher than five in [Ag25−xMx(PR3)10Xy]z (M = Pt or Pd; X = Cl or Br; y = number of X), the total number of valence electrons is 16 only if the molecule is an anion. Anions are generally not highly resistant to oxidation in air 47, 49. These results suggest that an isolable [Ag25−xMx(PR3)10Xy]z must have a substitutional heteroatom species and a number of bridging halogens such that the total number of valence electrons is 16 in the cationic or neutral state (Table 1). The factors discussed above suggest that the following three conditions are required to stabilise superatomic molecules consisting of two Ag13−xMx structures (M = Ag or other metal) connected by vertex sharing ([Ag25−xMx(PR3)10Xy]z): (1) a halogen of sufficient size to maintain a moderate distance between the two Ag13−xMx structures is used as the bridging halogen (Fig. 8a); (2) an icosahedral core that is stronger than Ag13 is formed by heteroatom substitution (Fig. 8b); and (3) the combination of the substituted heteroatoms and the number of bridging halogens is such that the total number of valence electrons is 16 when the molecule is cationic or neutral (Fig. 8c). For (1), halogens with ionic radii equal to or larger than that of Cl fall into this category; for (2), central-atom substitution with Pt or Pd satisfies the condition. Based on the reports in refs. 41−46, condition (2) is also satisfied by multiple-atom substitution with Au 37. For (3), the number of bridging halogens (y − 2) is limited to five when Pt or Pd is the heteroatom, but when Au is the heteroatom 41−46, the number of bridging halogens can range from five to seven (Fig. S16). As long as these three essential conditions are met simultaneously, it is possible to stabilise and thereby isolate a superatomic molecule with two Ag13−xMx structures connected by vertex sharing.
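The electron-counting part of these conditions is simple enough to encode. The sketch below illustrates rule (3) only; the encoding and all names are ours, not from the paper.

```python
def valence_electrons(n_ag, n_m, n_x, charge):
    """Superatom electron count for [Ag_nAg M_nM (PR3)10 X_nX]^charge:
    one 5s electron per Ag, zero for the group-10 heteroatom M (Pt/Pd),
    minus one per halide X and per unit of positive charge; the neutral
    PR3 ligands do not change the count (so n_m does not enter the sum)."""
    return n_ag - n_x - charge

def satisfies_condition3(n_ag, n_m, n_x, charge):
    """Condition (3): 16 valence electrons in a cationic or neutral state."""
    return valence_electrons(n_ag, n_m, n_x, charge) == 16 and charge >= 0

# [Ag23Pt2(PPh3)10Br7]0 (3): 23 - 7 - 0 = 16 electrons and neutral -> passes
print(satisfies_condition3(23, 2, 7, 0))   # True
# [Ag23M2(PPh3)10X8]- reaches 16 electrons only as an anion -> fails
print(satisfies_condition3(23, 2, 8, -1))  # False
```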
To increase the stability of the resulting superatomic molecule, it is preferable to use Cl as the bridging halogen and to combine multiple heteroatomic substitutions to stabilise the metal core. Therefore, 12 and 13 are assumed to be even more stable than 1. Moreover, it is expected that it will become possible to isolate [Ag23PtPd(PPh3)10X7]0, [Ag23Ni2(PPh3)10X7]0, [Ag23PtNi(PPh3)10X7]0 and [Ag23PdNi(PPh3)10X7]0 (X = Cl or Br) superatomic molecules in the future 20, 37, 54. According to our stability experiments (Fig. S24), the addition of excess PPh3 to the solution seems to help in isolating new superatomic molecules. So far, we have only discussed cases in which PR3 and X are used as ligands. However, if the recently reported multidentate ligands 29, 30 are used as bridging ligands, it may become possible to create even more types of superatomic molecules, such as those consisting of two Ag13 structures connected by vertex sharing. The knowledge obtained in this study is also expected to be useful for stabilising, and thereby isolating, longer superatomic molecules composed of three or four superatoms. Fig. 8: Indispensable requirements for stabilising and thereby isolating [Ag25−xMx(PR3)10Xy]z. a Use of a bridging X with an appropriate ionic radius (grey = Ag or M; red = bridging X). b Metal substitution to strengthen each Ag13−xMx (grey = Ag; red = M). c Combination of the type of M and the number of X (y − 2) to give a charge state (z) with high resistance to oxidation (grey = Ag; red = M or bridging X). Although the present study is concerned with superatomic molecules composed of Ag13−xMx (M = Ag or other metal), the above-mentioned conditions (1)−(3) also seem to be requirements for stabilising and thereby isolating superatomic molecules consisting of two Au13−xMx structures (M = Ag or other metal) connected by vertex sharing. The point of difference from the case of Ag13−xMx is the threshold in condition (1): in the case of Au13−xMx, halogens with ionic radii equal to or larger than that of Br are assumed to satisfy condition (1) 40. Because Au differs from Ag in forming strong bonds with thiolate (SR) and selenolate (SeR) ligands 55, 56, 57, even more stable superatomic molecules composed of Au13−xMx can be obtained if SR and SeR are used as bridging ligands 58. In fact, it has been reported that [Au25(PPh3)10(SR)5Cl2]2+ (SR = alkanethiolate 21 or PET 59) and [Au25(PPh3)10(SePh)5Br2]+/2+ (SePh = phenylselenolate) 60, built from Au13 units without heteroatom substitution, can also be isolated when SR or SeR is used as the bridging ligand. We have also successfully isolated [Au24Pd(PPh3)10(PET)5Cl2]+, in which Pd substitution occurred in only one icosahedral core 61. It is expected that such superatomic molecules built from Au13−xMx units connected by SR or SeR, such as [Au24Pt(PR3)10(SR)5]+ and [Au23PtPd(PR3)10(SR)5]0 (PR3 = PPh3, P(p-Tol)3 41, 43, 45 or PMePh2 46), will be isolated in the future. Methods Synthesis [Ag23Pt2(PPh3)10Br7]0 (3) All syntheses were performed at 25 °C. First, 30 mg (0.18 mmol) of AgNO3 and 5.1 mg (0.05 mmol) of NaBr were dissolved in 5 mL of methanol, and then 5 mL of methanol containing 2.1 mg (0.006 mmol) of PtBr2 was added to the solution.
The mixed solution was stirred for 15 min, and then 30 mL of methanol containing 262 mg (1 mmol) of PPh3, which had been sonicated to disperse it in the methanol, was added. After stirring for 15 min, 1 mL of a methanol solution containing 20 mg (0.529 mmol) of NaBH4 was rapidly added and the resulting solution was stirred for another 24 h. All experiments up to this point were performed in the dark. The solvent was then removed from the solution by rotary evaporation. Toluene was added to extract the product, and water was then added to the solution. After centrifugation, the toluene layer was separated to eliminate the excess NaBH4, and the solvent was evaporated using an evaporator to obtain the desired NC (3) (Fig. S25) (see S1.1 for chemicals). The chemical composition was confirmed by ESI-MS (Fig. S2), XPS (Fig. S3) and SC-XRD (see S1.2 for the crystallographic method). [Ag23Pd2(PPh3)10Br7]0 (4) First, 30 mg (0.18 mmol) of AgNO3 was dissolved in 5 mL of methanol, and then 5 mL of methanol containing 1.6 mg (0.006 mmol) of PdBr2 was added to the solution. After 15 min of stirring, 30 mL of methanol containing 262 mg (1 mmol) of PPh3, which had been sonicated to disperse it in the methanol, was added to the solution. After stirring for 15 min, 1 mL of a methanol solution containing 20 mg (0.529 mmol) of NaBH4 was rapidly added and the resulting solution was stirred for another 24 h. All experiments up to this point were performed at 0 °C in the dark. Note that, unlike in the synthesis of 3, it was not necessary to increase the quantity of Br ions in the solution by adding TOABr during the synthesis of 4. The solvent was then evaporated from the mixed solution using a rotary evaporator. Toluene was added to extract the product, and water was then added to the solution. After centrifugation, the toluene layer was separated to eliminate the excess NaBH4, and the solvent was evaporated using an evaporator to obtain the desired NC (4) (Fig. S26) (see S1.1 for chemicals). SC-XRD (see S1.2 for the crystallographic method) was used to confirm the geometry and composition of 4, except for the Pd atoms, which were confirmed by XPS (Fig. S4) and ICP-MS. ESI-MS of 4 was unsuccessful owing to the instability of 4. Crystallisation Compounds 3 and 4 were crystallised using the liquid−liquid diffusion method. 3 or 4 was first dissolved in ethanol and the solution was placed in a crystallisation vial. A sixfold volume of hexane was then gently layered on top of the ethanol solution of 3 or 4. The crystallisation vial was covered with a lid and allowed to stand at 25 °C. Orange needle-like crystals were obtained after a few days. Characterisation ESI-MS was performed with an ESI-Qq-TOF-MS compact (Bruker, MA, USA). In this experiment, multiple crystals of 3 were first dissolved in toluene containing PPh3 (1 mM), which suppresses the detachment of PPh3 from the superatomic molecules in solution (Fig. S24). Methanol was then added to this solution (toluene:methanol = 3:1 (v/v)). Finally, 5 mM caesium carbonate (Cs2CO3) methanol solution was added. The obtained solution was electrosprayed at a flow rate of 200 µL/min. The SCs were immersed in the cryoprotectant Parabar 10312 (Hampton Research, California, USA) and mounted on a MicroLoops E Inclined Assortment™ (MiTeGen, New York, USA). The SC-XRD datasets were collected on a Bruker D8 QUEST using monochromated MoKα radiation (λ = 0.71073 Å).
The Bruker Apex 3 62 suite was used to solve preliminary structures by following the sequential steps of indexing, data integration, reduction, absorption correction (multi-scan), space-group determination and structure solution (with the intrinsic-phasing method). Final refinement was performed with SHELXL-2018/3 63 using the Olex2 platform 64 (Tables S1, S2). The optical absorption spectra of the dichloromethane solutions of 3 and 4 were obtained at 25 °C using a V-630 spectrometer (JASCO, Tokyo, Japan). Multiple crystals were dissolved in dichloromethane for the measurement. PL spectra of toluene solutions of 1−4 were measured using an FP-6300 spectrofluorometer (JASCO, Tokyo, Japan) at 25 °C. The PL intensity (F_nor.(λ)) was normalised using the following equation to eliminate the effect of differences in the concentrations of 1−4 on the PL intensity:

$$F_{\mathrm{nor.}}(\lambda) = F(\lambda_{\mathrm{em}})/[1 - 10^{-A(\lambda_{\mathrm{ex}})}]$$

where λem, λex, A and F represent the emission wavelength, excitation wavelength, absorbance and PL intensity, respectively. XPS spectra were collected using a JPS-9010MC electron spectrometer (JEOL, Tokyo, Japan) at a base pressure of ∼2 × 10−8 Torr. X-rays from the Mg-Kα line (1253.6 eV) were used for excitation. An indium plate was used as the substrate. The spectra were calibrated against the peak energy of In 3d3/2 (451.2 eV) 65. Stability experiments To investigate the stability of 1−4 with regard to decomposition in solution, solutions of 1, 2, 3 or 4 were prepared and measured in the following three ways. 1. Dichloromethane solutions of each sample were placed in the glass cell of a spectrophotometer at 25 °C. The optical absorption spectrum of each solution was measured regularly for 1 h (Fig. S22). 2. Toluene solutions of each sample were left in a test tube with a lid at 30 °C. The optical absorption spectra of the solutions were measured regularly: for 3 days for 1 and 3, for 1 day for 2, and for 8 h for 4 (Fig. 7). 3. Toluene solutions of each sample containing PPh3 (95 mM) were placed in the glass cell of a spectrophotometer at 25 °C. The optical absorption spectrum of each solution was measured regularly for 1 week (Fig. S24). DFT calculations We performed DFT calculations on 3′, 4′ and 6′ using the structures of the experimentally synthesised 3 and 4; Pd was replaced with Ag for the calculation of 6′. All DFT calculations were performed with TURBOMOLE 66 under the resolution-of-identity approximation with the PBE 67 functional, using the def-SV(P) basis sets 68 along with relativistic effective core potentials for Pd, Ag and Pt 69. Optimised structures with different Pd positions were obtained at the same level of theory. The electronic absorption spectra were simulated in the framework of time-dependent DFT 70, 71, 72, 73, in which the line spectra were convoluted with a Lorentzian function of 10 nm width. PBE was used as the functional to calculate the absorption spectra. Optimised structures with different Pd positions (3′) and absorption spectra (3′, 4′ and 6′) were also calculated using the CAM-B3LYP functional (Figs. S27, S28), which produced overall results similar to those obtained with PBE. Data availability The X-ray crystallographic coordinates for the structures reported in the present study have been deposited at the Cambridge Crystallographic Data Centre (CCDC) under deposition numbers 2195306−2195307.
These data can be obtained free of charge from the Cambridge Crystallographic Data Centre. The CIF and checkCIF files of 3 and 4 are provided in Supplementary Data 1–4. The atomic coordinates of the DFT-optimised structures of 1′−4′ and 6′ are provided in Supplementary Data 5–9, respectively. All other data are available from the corresponding authors on reasonable request.
Superatomic molecules containing noble metal elements like gold and silver are studied for their potential in the synthesis of superatomic materials. However, the understanding of silver-based superatomic molecules has been limited. Addressing this gap, researchers from Japan studied two bimetallic superatomic molecules with silver as the main constituent to determine the key factors that enabled their formation. Their findings are expected to advance the development of novel materials in the future. In the past few decades, metal nanoclusters composed of noble metal elements such as gold (Au) and silver (Ag) have gained attention as superatoms for the synthesis of materials with unique properties and potential new applications. These superatoms (also known as "artificial atoms") typically consist of a cluster of a few to several hundred atoms and exhibit properties that are significantly different from those of their conventional bulk counterparts. However, much like real atoms, the stability of these superatoms is determined by the formation of a closed-shell electronic structure. Ag-based superatoms are known for properties and functions superior to those of Au-based superatoms, including photoluminescence and selective catalytic activity. However, most of the research in this field has focused primarily on Au-based superatomic molecules. To close this research gap, researchers from Japan studied the formation of superatomic molecules composed of Ag and evaluated the factors involved in their formation. The study was published in the journal Communications Chemistry on March 28, 2023. Speaking of the motivation behind studying Ag-based superatoms, Prof. Negishi says, "So far, we humans have created a variety of useful materials from the elements available to us on Earth. However, looking at a future with complex energy and environmental issues, the development of materials with new properties and functions is desired." To this end, the researchers synthesized two di-superatomic molecules with bromine (Br) as the bridging ligand: [Ag23Pt2(PPh3)10Br7]0 and [Ag23Pd2(PPh3)10Br7]0 (PPh3 = triphenylphosphine). The former consists of two icosahedral Ag12Pt superatoms connected by vertex sharing, with platinum (Pt) atoms occupying the central position in each superatom. The latter consists of two icosahedral Ag12Pd structures with palladium (Pd) as the central atom. The geometric/electronic structures and stability of these two nanoclusters were then analyzed and compared with those of [Ag23Pt2(PPh3)10Cl7]0 (1) and [Ag23Pd2(PPh3)10Cl7]0 (2), two geometrically similar nanoclusters with chlorine (Cl) as the bridging ligand. On examining the geometric structures of the four nanoclusters, the researchers observed a twist between the two icosahedral structures in the clusters containing Br as the bridging ligand. The researchers suggest that this twist stabilizes the nanocluster by shortening the distance between the two icosahedral structures. Additionally, the larger Br atom was found to introduce steric hindrance in the molecule, causing the PPh3 ligands to be positioned further from the long axis of the metal nanocluster and changing the lengths of the Ag-P and Ag-Ag bonds. These findings indicate that although the type of bridging halogen slightly affects the geometric structures of the metal nanoclusters, it does not hinder their formation.
"The type of bridging halogen appears to have little effect on whether superatomic molecules can be formed or not, as long as the bridging halogen is large enough to maintain a moderate distance between the two Ag12M structures," explains Prof. Negishi. However, the stability of the nanocluster was largely dependent on the number of bridging halogens attached to it. Like atoms, stable metallic nanoclusters require a filled valence shell. In the case of the prepared nanoclusters—which had a total of 16 valence electrons—the researchers were only able to attach a maximum of five bridging halogens to maintain the metal nanocluster in a stable neutral or cationic state. The presence of Pd and Pt central atoms was found to be due to the formation of metallic nanoclusters. Substituting the central atom of Ag13 with Pt or Pd led to an increase in the average binding energy within the nanoclusters, making it favorable for the formation of superatomic molecules. Overall, the researchers identified three key requirements for the formation and isolation of superatomic molecules consisting of two Ag13−xMx structures connected by vertex sharing. These include the presence of a bridging halogen that can maintain an optimal distance between the two structures, a combination of heteroatoms and bridging halogens that results in 16 valence electrons, and the formation of an icosahedral core that is stronger than Ag13. In the words of Prof. Negishi, "These findings offer clear design guidelines for the creation of molecular devices with various properties and functions, and can potentially contribute to resolving pressing concerns regarding clean energy and the environment."
10.1038/s42004-023-00854-0
Biology
Reading the entire human genome – one long sentence at a time
Miten Jain et al. Linear assembly of a human centromere on the Y chromosome, Nature Biotechnology (2018). DOI: 10.1038/nbt.4109 Journal information: Nature Biotechnology
http://dx.doi.org/10.1038/nbt.4109
https://phys.org/news/2018-04-entire-human-genome-sentence-atatime.html
Abstract The human genome reference sequence remains incomplete owing to the challenge of assembling long tracts of near-identical tandem repeats in centromeres. We implemented a nanopore sequencing strategy to generate high-quality reads that span hundreds of kilobases of highly repetitive DNA in a human Y chromosome centromere. Combining these data with short-read variant validation, we assembled and characterized the centromeric region of a human Y chromosome. Main Centromeres facilitate spindle attachment and ensure proper chromosome segregation during cell division. Normal human centromeres are enriched with AT-rich ∼ 171-bp tandem repeats known as alpha satellite DNA 1 . Most alpha satellite DNAs are organized into higher order repeats (HORs), in which chromosome-specific alpha satellite repeat units, or monomers, are reiterated as a single repeat structure hundreds or thousands of times with high (>99%) sequence conservation to form extensive arrays 2 . Characterizing both the sequence composition of individual HOR structures and the extent of repeat variation is crucial to understanding kinetochore assembly and centromere identity 3 , 4 , 5 . However, no sequencing technology (including single-molecule real-time (SMRT) sequencing or synthetic long-read technologies) or a combination of sequencing technologies has been able to assemble centromeric regions because extremely high-quality, long reads are needed to confidently traverse low-copy sequence variants. As a result, human centromeric regions remain absent from even the most complete chromosome assemblies. Here we apply nanopore long-read sequencing to produce high-quality reads that span hundreds of kilobases of highly repetitive DNA ( Supplementary Fig. 1 ). We focus on the haploid satellite array present on the Y centromere (DYZ3), as it is particularly suitable for assembly owing to its tractable size, well-characterized HOR structure, and previous physical mapping data 6 , 7 , 8 . We devised a transposase-based method that we named 'longboard strategy' to produce high-read coverage of full-length bacterial artificial chromosome (BAC) DNA with nanopore sequencing (MinION sequencing device, Mk1B, Oxford Nanopore Technologies). In our longboard strategy, we linearize the circular BAC with a single cut site, then add sequencing adaptors ( Fig. 1a ). The BAC DNA passes through the pore, resulting in complete, end-to-end sequence coverage of the entire insert. Plots of read length versus megabase yield revealed an increase in megabase yield for full-length BAC DNA sequences ( Fig. 1b and Supplementary Fig. 2 ). We present more than 3,500 full-length '1D' reads (that is, one strand of the DNA is sequenced) from ten BACs (two control BACs from Xq24 and Yp11.2; eight BACs in the DYZ3 locus 9 ; Supplementary Table 1 ). Figure 1: BAC-based longboard nanopore sequencing strategy on the MinION. ( a ) Optimized strategy to cut each circular BAC once with transposase results in a linear and complete DNA fragment of the BAC for nanopore sequencing. ( b ) Yield plot of BAC DNA (RP11-648J18). ( c ) High-quality BAC consensus sequences were generated by multiple alignment of 60 full-length 1D reads (shown as blue and yellow for both orientations), sampled at random with ten iterations, followed by polishing steps (green) with the entire nanopore long-read data and Illumina data. ( d ) Circos representation 20 of the polished RP11-718M18 BAC consensus sequence. Blue arrowheads indicate the position and orientation of HORs. 
Purple tiles on a yellow background mark the positions of the Illumina-validated variants. Additional purple highlights extending from select Illumina-validated variants identify single-nucleotide sequence variants and mark the sites of the tandem DYZ3 repeat structural variants (6 kb). Correct assembly across the centromeric locus requires overlap among a few sequence variants, meaning that the accuracy of base-calls is important. Individual reads (MinION R9.4 chemistry, Albacore v1.1.1) provide insufficient sequence identity (median alignment identity of 84.8% for control BAC RP11-482A22 reads) to ensure correct repeat assembly 10 . To improve overall base quality, we produced a consensus sequence from 10 iterations of 60 randomly sampled alignments of full-length 1D reads that spanned the full insert length for each BAC ( Fig. 1c ). To polish sequences, we realigned full-length nanopore reads to each BAC-derived consensus (99.2% observed for control BAC RP11-482A22; and an observed range of 99.4–99.8% for vector sequences in DYZ3-containing BACs). To provide a truth set of array sequence variants and to evaluate any inherent nanopore sequence biases, we used Illumina BAC resequencing (Online Methods). We used eight BAC-polished sequences (e.g., 209 kb for RP11-718M18; Fig. 1d ) to guide the ordered assembly of BACs from p-arm to q-arm, which includes an entire Y centromere. We ordered the DYZ3-containing BACs using 16 Illumina-validated HOR variants, resulting in 365 kb of assembled alpha satellite DNA ( Fig. 2a and Supplementary Data 1 ). The centromeric locus contains a 301-kb array that is composed of the DYZ3 HOR, with a 5.8-kb consensus sequence, repeated in a head-to-tail orientation without repeat inversions or transposable element interruptions 6 , 11 , 12 . The assembled length of the RP11 DYZ3 array is consistent with estimates for 96 individuals from the same Y haplogroup (R1b) ( Supplementary Fig. 3 ; mean: 315 kb; median: 350 kb) 13 , 14 . This finding is in agreement with pulsed-field gel electrophoresis (PFGE) DYZ3 size estimates from previous physical maps, and from a Y-haplogroup-matched cell line ( Supplementary Fig. 4 ). Figure 2: Linear assembly of the RP11 Y centromere. ( a ) Ordering of nine DYZ3-containing BACs spanning from proximal p-arm to proximal q-arm. The majority of the centromeric locus is defined by the canonical DYZ3 5.8-kb HOR (light blue). Highly divergent monomeric alpha satellite is indicated in dark blue. HOR variants (6.0 kb) are indicated in purple. ( b ) The genomic location of the functional Y centromere is defined by the enrichment of centromere protein A (CENP-A), where enrichment ( ∼ 5–6×) is attributed predominantly to the DYZ3 HOR array. Pairwise comparisons among the 52 HORs in the assembled DYZ3 array revealed limited sequence divergence between copies (mean 99.7% pairwise identity). In agreement with a previous assessment of sequence variation within the DYZ3 array 6 , we detected instances of a 6.0-kb HOR structural variant and provide evidence for seven copies within the RP11 DYZ3 array, present in two clusters separated by 110 kb, as roughly predicted by previous restriction map estimates 8 . Sequence characterization of the DYZ3 array revealed nine HOR haplotypes, defined by linkage between variant bases that are frequent in the array ( Supplementary Fig. 5 ).
These HOR haplotypes were organized into three local blocks that were enriched for distinct haplotype groups, consistent with previous demonstrations of short-range homogenization of satellite-DNA-sequence variants 6 , 15 , 16 . Functional centromeres are defined by the presence of inner centromere proteins that epigenetically mark the site of kinetochore assembly 17 , 18 , 19 . To define the genomic position of the functional centromere on the Y chromosome, we examined the enrichment profiles of inner kinetochore centromere protein A (CENP-A), a histone H3 variant that replaces histone H3 in centromeric nucleosomes, using a Y-haplogroup-matched cell line that offers a similar DYZ3 array sequence ( Fig. 2b and Supplementary Data 2 ) 5 , 14 , 19 . We found that CENP-A enrichment was predominantly restricted to the canonical DYZ3 HOR array, although we did identify reduced centromere protein enrichment extending up to 20 kb into flanking divergent alpha satellite on both the p-arm and q-arm sides. Thus, we provide a complete genomic definition of a human centromere, which may help to advance sequence-based studies of centromere identity and function. We applied a long-read strategy to map, sequence, and assemble tandemly repeated satellite DNAs and resolve, for the first time to our knowledge, the array repeat organization and structure in a human centromere. Previous modeled satellite arrays 14 are based on incomplete and gapped maps, and do not present complete assembly data across the full array. Our complete assembly enables the precise number of repeats in an array to be robustly measured and resolves the order, orientation, and density of both repeat-length variants across the full extent of the array. This work could potentially advance studies of centromere evolution and function and may aid ongoing efforts to complete the human genome. Methods BAC DNA preparation and validation. Clones of bacterial artificial chromosomes (BACs) used in this study were obtained from the BACPAC RPC1-11 library, Children's Hospital Oakland Research Institute, Oakland, California, USA. BACs that span the human Y centromere, RP11-108I14, RP11-1226J10, RP11-808M02, RP11-531P03, RP11-909C13, RP11-890C20, RP11-744B15, RP11-648J16, RP11-718M18, and RP11-482A22, were determined based on previous hybridization with DYZ3-specific probes, and confirmed by PCR with STSs sY715 and sY78 (ref. 9 ). Notably, DYZ3 sequences, unlike shorter satellite DNAs, have been observed to be stable and cloned without bias 5 , 21 . The RP11-482A22 BAC was selected as our control since it had previously been characterized by nanopore long-read sequencing 22 , and presented ∼ 134 kb of assembled, unique sequence present in the GRCh38 reference assembly with which to evaluate our alignment and polishing strategy. BAC DNA was prepared using the QIAGEN Large-Construct Kit (Cat No./ID: 12462). To ensure removal of the Escherichia coli genome, it was important to include an exonuclease incubation step at 37 °C for 1 h, as provided within the QIAGEN Large-Construct Kit. BAC DNAs were hydrated in TE buffer. BAC insert length estimates were determined by pulsed-field gel electrophoresis (PFGE) (data not shown). Longboard MinION protocol. MinIONs can process long fragments, as has been previously documented 22 . While these long reads demonstrate the processivity of nanopore sequencing, they offer insufficient coverage to resolve complex, repeat-rich regions.
To systematically enrich for the number of long reads per MinION sequencing run, we developed a strategy that uses the Oxford Nanopore Technologies (ONT) Rapid Sequencing Kit (RAD002). We performed a titration between the transposase from this kit (RAD002) and circular BAC DNA. This was done to achieve conditions that would optimize the probability of individual circular BAC fragments being cut by the transposase only once. To this end, we diluted the 'live' transposase from the RAD002 kit with the 'dead' transposase provided by ONT. For PFGE-based tests, we used 1 μl of 'live' transposase and 1.5 μl of 'dead' transposase per 200 ng of DNA in a 10-μl reaction volume. This reaction mix was then incubated at 30 °C for 1 min and 75 °C for 1 min, followed by PFGE. Our PFGE tests used 1% high-melting agarose gels and were run with standard 180° field inversion gel electrophoresis (FIGE) conditions for 3.5 h. An example PFGE gel is shown in Supplementary Figure 6 . For MinION sequencing library preparation, we used 1.5 μl of 'live' transposase and 1 μl of 'dead' transposase (supplied by ONT) per 1 μg of DNA in a 10-μl reaction volume. Briefly, this reaction mix was then incubated at 30 °C for 1 min and 75 °C for 1 min. We then added 1 μl of the sequencing adaptor and 1 μl of Blunt/TA Ligase Master Mix (New England BioLabs) and incubated the reaction for 5 min. This was the adapted BAC DNA library for the MinION. R9.4 SpotON flow cells were primed using the protocol recommended by ONT. We prepared 1 ml of priming buffer with 500 μl running buffer (RBF) and 500 μl water. Flow cells were primed with 800 μl priming buffer via the side loading port. We waited for 5 min to ensure initial buffering before loading the remaining 200 μl of priming buffer via the side loading port but with the SpotON open. We next added 35 μl RBF and 28 μl water to the 12 μl library for a total volume of 75 μl. We loaded this library on the flow cell via the SpotON port and proceeded to start a 48 h MinION run. When a nanopore run is underway, the amplifiers controlling individual pores can alter voltage to get rid of unadapted molecules that can otherwise block the pore. With R9.4 chemistry, ONT introduced global flicking that reversed the potential every 10 min by default to clear all nanopores of all molecules. At 450 b.p.s., a 200 kb BAC would take around 7.5 min to be processed. To ensure sufficient time for capturing BAC molecules on the MinION, we changed the global flicking time period to 30 min. This is no longer the case with an update to ONT's MinKNOW software, and on the later BAC sequencing runs we did not change any parameters. We acknowledge that generating long (>100 kb) reads presents challenges, given the dynamics of high-molecular-weight (HMW) DNA for ligation, chemistry updates, and delivery of free ends to the pore, reducing the effective yield. We found that high-quality and a large quantity of starting material (i.e., our strategy is designed for 1 μg of starting material that does not show signs of DNA shearing and/or degradation when evaluated by PFGE) and reduction of smaller DNA fragments were necessary for the longboard strategy. Protocol to improve long-read sequence by consensus and polishing. BAC-based assembly across the DYZ3 locus requires overlap among a few informative sequence variants, thus placing great importance on the accuracy of base-calls. Therefore, we employed the following strategy to improve overall base quality. 
First, we derived a consensus from multiple alignments of 1D reads that span the full insert length for each BAC. Further, polishing steps were performed using realignment of all full-length nanopore reads for each BAC. As a result, each BAC sequencing project resulted in a single, polished BAC consensus sequence. To validate single-copy variants, useful in an overlap-layout-assembly strategy, we included Illumina data sets for each BAC. Illumina data were not used to correct or validate variants observed multiple times within a given BAC sequence, due to the reduced mapping quality. MinION base-calling. All of the BAC runs were initially base-called using Metrichor, ONT's cloud basecaller. Metrichor classified reads as pass or fail using a Q-value threshold. We selected the full-length BAC reads from the pass reads. We later base-called all of the BAC runs again using Albacore 1.1.1, which included significant improvements on homopolymer calls. This version of Albacore did not contain a pass/fail cutoff. We reperformed the informatics using Albacore base-calls for full-length reads selected from the pass Metrichor base-calls. We selected BAC full-length reads as determined by observed enrichment in our yield plots (Supplementary Fig. 7 shows the read versus read-length plots converted to yield plots used to identify BAC-length min–max selection thresholds). Full-length reads used in this study were determined to contain at least 3 kb of vector sequence, as determined by BLASR 23 ( -sdpTupleSize 8 -bestn 1 -nproc 8 -m 0 ) alignment with the pBACe3.6 vector (GenBank Accession: U80929.2 ). Reads were converted to the forward strand. Reads were reoriented relative to a fixed 3-kb vector sequence, aligning the transition from vector to insert. Derive BAC consensus sequence. Reoriented reads were sampled at random (blasr_output.py). Multiple sequence alignment (MSA) was performed using kalign 24 . We determined empirically that sampling greater than 60 reads provided limited benefit to consensus base quality ( Supplementary Fig. 8 ). We computed the consensus from the MSA, in which the most prevalent base at each position was called. Gaps were only considered in the consensus if the second most frequent nucleotide at that position was present in fewer than ten reads. We performed random sampling followed by MSA iteratively 10×, resulting in a panel of ten consensus sequences, observed to provide a ∼ 1% boost in consensus sequence identity ( Supplementary Fig. 8 ). To improve the final consensus sequence, we next performed a final MSA on the collection of ten consensus sequences derived from sampling. Polish BAC consensus sequence. Consensus sequence polishing was performed by aligning full-length 1D nanopore reads for each BAC to the consensus (BLASR 23 , -sdpTupleSize 8 -bestn 1 -nproc 8 -m 0 ). We used pysamstats to identify read support for each base call. We determined the average base coverage for each BAC, and filtered those bases that had low-coverage support (defined as having less than half of the average base coverage). Bases were lower-case masked if they were supported by sufficient sequence coverage, yet had <50% support for a given base call in the reads aligned. Variant validation. We performed Illumina resequencing (MiSeq V3 600bp; 2 × 300 bp) for all nine DYZ3-containing BACs to validate single-copy DYZ3 HOR variants in the nanopore consensus sequence.
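The majority-vote and masking rules described above can be condensed into a short sketch (a simplified, hypothetical rendering — the authors' actual scripts are deposited in their GitHub repository; the ten-iteration resampling and final MSA steps are omitted):

```python
from collections import Counter

def msa_consensus(columns, gap="-", min_runner_up=10):
    """Majority-vote consensus over MSA columns. A gap is emitted only if
    the most frequent symbol is a gap AND the runner-up nucleotide is
    supported by fewer than `min_runner_up` reads (the <10-read rule
    described above); otherwise the best-supported base is called."""
    consensus = []
    for col in columns:                       # col: one character per read
        counts = Counter(col)
        best, runner = (counts.most_common(2) + [(None, 0)])[:2]
        if best[0] == gap and runner[1] >= min_runner_up:
            consensus.append(runner[0])       # enough read support: keep base
        elif best[0] != gap:
            consensus.append(best[0])
        # else: position is a true gap and is dropped from the consensus
    return "".join(consensus)

def polish_mask(base_calls, depths, supports, avg_depth):
    """Post-consensus masking: drop bases with less than half the average
    depth; lower-case bases with adequate depth but <50% read agreement."""
    out = []
    for base, depth, support in zip(base_calls, depths, supports):
        if depth < avg_depth / 2:
            continue                          # low-coverage support: filtered
        out.append(base if support >= 0.5 else base.lower())
    return "".join(out)
```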
Inherent sequence bias is expected in nanopore sequencing 22 ; therefore, we first used the Illumina-matched data sets to evaluate the extent and type of sequence bias in our initial read sets and in our final polished consensus sequences. Changes in ionic current, as individual DNA strands are read through the nanopore, are each associated with a unique 5-nucleotide k-mer. Therefore, in an effort to detect inherent sequence errors due to nanopore sensing, we compared counts of 5-mers. Alignment of full-length HORs within each polished BAC sequence to the canonical DYZ3 repeat demonstrated that these sequences are nearly identical: in RP11-718M18 we detected 1,449 variant positions (42% mismatches, 27% deletions, and 31% insertions) across 202,582 bp of repeats (99.5% identity). Although the 5-mer frequency profiles between the two data sets were largely concordant ( Supplementary Fig. 9 ), we found that poly(dA) and poly(dT) homopolymers were overrepresented in our initial nanopore read data sets, a finding that is consistent with genome-wide observations. These poly(dA) and poly(dT) over-representations were reduced in our quality-corrected consensus sequences, especially for 6-mers and 7-mers. K-mer method. Using a k-mer strategy (where k = 21 bp), we identified exact matches between the Illumina data and each BAC consensus sequence. Illumina read data and the BAC-polished consensus sequences were reformatted into respective k-mer libraries (where k = 21 bp, with a 1-bp slide, using Jellyfish v2 software 25 ), in forward and reverse orientation. K-mers that matched the pBACe3.6 sequence exactly were labeled as 'vector'. K-mers that matched the DYZ3 consensus sequence exactly 14 were labeled as 'ceny'. We first demonstrated that the labeled k-mers were useful in predicting copy number, showing that the ceny k-mer frequency in the BACs predicted the DYZ3 copy number relative to the number observed in our nanopore consensus ( Supplementary Fig. 10a ). DYZ3 copy number in each consensus sequence derived from nanopore reads was determined using HMMER3 (ref. 26 ) (v3.1b2) with a profile constructed from the DYZ3 reference repeat. By plotting the distribution of vector k-mer counts ( Supplementary Fig. 10b for RP11-718M18), we observed a range of expected k-mer counts for single-copy sites. DYZ3 repeat variants (single-copy satVARs) were determined as k-mers that (1) did not have an exact match with either the vector or the DYZ3 reference repeat, (2) spanned a single DYZ3-assigned variant in the reference-polished consensus sequence (i.e., that particular k-mer was observed only once in the reference), and (3) had a k-mer depth profile in the range of the corresponding BAC vector k-mer distribution. As a final conservative measure, satVARs used in overlap-layout-consensus assembly were supported by two or more overlapping Illumina k-mers ( Supplementary Fig. 10 ). To test whether it was possible to predict a single-copy DYZ3 repeat variant by chance, or by error introduced in the Illumina read sequences, we ran 1,000 simulated trials using our RP11-718M18 Illumina data: we generated 1,000 simulated sequences, each containing a single randomly introduced single-copy variant (a false positive) in the polished RP11-718M18 DYZ3 array. Next, we queried whether the 21-mer spanning the introduced variant was (a) found in the corresponding Illumina data set and, (b) if so, monitored its coverage.
Ultimately, none of the simulated false-positive variants (21-mers) met our criteria for a true variant. That is, although the simulated variants were identified in our Illumina data, they had insufficient sequence coverage to be included in our study. Greater than 95% of the introduced false variants had ≤100× coverage, with only one variant observed to have the maximum value of 300×. True variants were determined using this data set with values from 1,100–1,600×, as observed in our vector distribution. Alignment method. We employed a short-read alignment strategy to validate single-copy variants in our polished consensus sequence. Illumina-merged reads (PEAR, standard parameters 27 ) were mapped to the RP11 Y-assembled sequence using BWA-MEM 28 . BWA-MEM is a component of the BWA package and was chosen because of its speed and ubiquitous use in sequence mapping and analysis pipelines. Aside from the difficulties of mapping the ultra-long reads unique to this work, any other mapper could be used instead. This involves mapping Illumina data to each BAC consensus sequence. After filtering out alignments with mapping quality less than 20, single-nucleotide DYZ3 variants (i.e., variants observed uniquely, or once in a DYZ3 HOR in a given BAC) were considered “validated” if they had the support of at least 80% of the reads and had sequence coverage within the read depth distribution observed in the single-copy vector sequence for each BAC data set. To explore the Illumina sequence coverage necessary for our consensus polishing strategy, we initially investigated a range (20–100×) of simulated sequence coverage relative to a 73-kb control region (hg38 chrY:10137141–10210167) within the RP11-531P03 BAC data. Simulated paired-read data generated with the ART Illumina simulator software 29 were specified for the MiSeq sequencing system (MiSeq v3 (250 bp), or 'MSv3'), with a mean DNA fragment size of 400 bp for paired-end simulations. Using our polishing protocol, in which reads are filtered by mapping quality score (a score of at least 20 corresponds to an error probability of 10^(−20/10) = 0.01, i.e., a 0.99 probability of correct mapping), base frequency was next determined for each position using pysamstats, and a final, polished consensus was determined by taking the base call at any given position that is represented by sufficient coverage (at least half of the determined average across the entire BAC) and is supported by a sufficient percentage of the Illumina reads mapped to that location (in our study, at least 80%). If we require at least 80% of mapped reads to support a given base call, we determine that 30× coverage is sufficient to reach 99% sequence identity (the same as the identity observed using our entire Illumina read data set, indicated as a gray dotted line in Supplementary Fig. 11 ). If we require at least 90% of mapped reads to support a particular variant, it is necessary to increase coverage to 70× to reach an equivalent polished percent identity. To evaluate our mapping strategy, we performed a basic simulation using an artificially generated array of ten identical DYZ3 (5.7 kb) repeats. We then randomly introduced a single base change, resulting in a new sequence with nine identical DYZ3 repeats and one repeat distinguished by a single-nucleotide change ( Supplementary Fig. 12 ). We first demonstrate that we are able to confidently detect the single variant by simulating reads from the reference sequence containing the introduced variant at varying coverage and Illumina substitution error rates.
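The validation rule applied throughout these simulations — at least 80% read support among MAPQ ≥ 20 alignments, with site depth inside the single-copy vector range — can be sketched as follows (a hypothetical simplification; function and variable names are our own, and per-site base lists are assumed to be pre-extracted, e.g., with pysamstats):

```python
def validate_variants(candidates, site_bases, vector_depth_range,
                      min_support=0.80):
    """Validate candidate single-copy variants against Illumina alignments.

    candidates:         {position: alt_base} from the nanopore consensus
    site_bases:         {position: list of bases observed in reads with
                         mapping quality >= 20}
    vector_depth_range: (lo, hi) read-depth range observed over the
                        single-copy vector sequence of the same BAC

    A variant is kept only if >= 80% of covering reads support it and the
    site depth falls inside the single-copy depth range."""
    lo, hi = vector_depth_range
    validated = {}
    for pos, alt in candidates.items():
        bases = site_bases.get(pos, [])
        depth = len(bases)
        if depth and lo <= depth <= hi and bases.count(alt) / depth >= min_support:
            validated[pos] = alt
    return validated

# e.g., a site covered by 1,250 MAPQ>=20 reads, 1,100 supporting 'T',
# checked against a vector depth range of (1100, 1600) -> validated;
# a simulated false positive with ~100x support fails the depth test.
```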
Additionally, we investigated whether we would detect the variant as an artifact of Illumina read errors. To test this, we next simulated Illumina reads from a DYZ3 reference array that did not contain the introduced variant (i.e., ten exact copies of the DYZ3 repeat). We performed this simulation 100 times, thus creating 100 reference arrays, each with a randomly placed single variant. Within each evaluation, we mapped in parallel simulated Illumina reads from (a) the array containing the introduced variant and (b) the array that lacked the variant. In experiments where reads containing the introduced variant were mapped to the reference containing the variant, we observed the introduced base across variations of sequence coverage and increased error rates. To validate a variant as “true,” we next evaluated the supporting sequence coverage. For example, at 100× coverage using the default Illumina error rate, we observed 96 “true” calls out of 100 simulations, where in each case we set a threshold such that at least 80% of reads that spanned the introduced variant supported the base call. We found that Illumina quality did influence our ability to confidently validate array variants by reducing the effective coverage. When the substitution error rate was increased by one-tenth, we observed a decrease to only 75 “true” variant calls out of 100 simulations. Therefore, we suspect that Illumina sequencing errors may challenge our ability to completely detect true-positive variants. In our alternate experiments, although simulated Illumina reads from ten identical copies of the DYZ3 repeat were mapped to a reference containing an introduced variant, we did not observe a single simulation or condition with sufficient coverage for “true” validation. We do report an increase in the percentage of reads that support the introduced variant as we increase the Illumina substitution error rate; however, the range of read depth observed across all experiments was far below our coverage threshold. We obtained similar results when we repeated this simulation using sequences from the RP11-718M18 DYZ3 array. Finally, standard-quality Illumina-based polishing with pilon 21 was applied strictly to unique (non-satellite DNA) sequences on the proximal p- and q-arms to improve final quality. Alignment of polished consensus sequences from our control BAC from Xq24 (RP11-482A22) and non-satellite DNA in the p-arm adjacent to the centromere (Yp11.2, RP11-531P03) revealed base-quality improvement to >99% identity. Prediction and validation of DYZ3 array. BAC ordering was determined using overlapping informative single-nucleotide variants (including the nine DYZ3 6.0-kb structural variants) in addition to alignments directly to assembled sequence on either the p-arm or q-arm of the human reference assembly (GRCh38). Notably, physical mapping data were not needed in advance to guide our assembly; rather, these data were used to evaluate our final array-length predictions. Full-length DYZ3 HORs (ordered 1–52) were evaluated by MSA (using kalign 24 ) between overlapping BACs, with emphasis on repeats 28–35, which define the overlap between BACs anchored to the p-arm or q-arm ( Supplementary Fig. 13 ). The RPC1-11 BAC library has previously been referenced as derived from a known carrier of haplogroup R1b 30 , 31 .
We compared our predicted DYZ3 array length with 93 R1b Y-haplogroup-matched individuals by intersecting previously published DYZ3 array length estimates for 1000 Genomes phase 1 data 13 , 14 with donor-matched Y-haplogroup information 32 . To investigate the concordance of our array prediction with previous physical maps of the Y centromere, we identified the positions of referenced restriction sites that directly flank the DYZ3 array in the human chromosome Y assembly (GRCh38) 6 , 7 , 33 . It is unknown whether the previously published individuals are from the same population cohort as the RPC1-11 donor genome; therefore, we performed similar PFGE DYZ3 array length estimates using the HuRef B-lymphoblast cell line (available from the Coriell Institute as GM25430), previously characterized to be in the R1b Y-haplogroup 34 . PFGE alpha satellite Southern. High-molecular-weight HuRef genomic DNA was resuspended in agarose plugs using 5 × 10⁶ cells per 100 μL of 0.75% CleanCut Agarose (CHEF Genomic DNA Plug Kits, Cat #: 170-3591, BIORAD). A female lymphoblastoid cell line (GM12708) was included as a negative control. Agarose plug digests were performed overnight (8–12 h) with 30–50 U of each enzyme in the matched NEB buffer. PFGE Southern experiments used 1/4–1/2 agarose plug per lane ( ∼ 5–10 μg) in a 1% SeaKem LE Agarose gel and 0.5× TBE. CHEF Mapper conditions were optimized to resolve 0.1–2.0 Mb DNAs: voltage 6 V/cm, runtime 26:40 h, included angle 120°, initial switch time 6.75 s, final switch time 1 min 33.69 s, with a linear ramping factor. We used the Lambda (NEB; N0340S) and Saccharomyces cerevisiae (NEB; N0345S) ladders as markers. Methods of transfer to nylon filters, prehybridization, and chromosome-specific hybridization with 32P-labeled satellite probes have been described 35 . Briefly, DNA was transferred to a nylon membrane (Zeta Probe GT nylon membrane; CAT# 162-0196) for ∼ 24 h. The DYZ3 probe (50 ng DNA labeled ∼ 2 c.p.m./mL; amplicon product using previously published STS DYZ3 Y-A and Y-B primers 36 ) was hybridized for 16 h at 42 °C. In addition to standard wash conditions 35 , we performed two additional stringent wash steps (buffer: 0.1% SDS and 0.1× SSC) for 10 min at 72 °C to remove non-specific binding. The image was recovered after a 20-h exposure. Sequence characterization of the Y centromeric region. The DYZ3 HOR sequence and the chromosomal location of the active centromere on the human chromosome Y are not shared among closely related great apes 37 . However, previous evolutionary dating of specific transposable element subfamilies (notably, L1PA3, 9.2–15.8 MYA 38 ) within the divergent satellite DNAs, as well as shared synteny of 11.9 kb of alpha satellite DNA in the chimpanzee genome Yq assembly, indicate that the locus was present in the last common ancestor with chimpanzee ( Supplementary Fig. 14 ). Comparative genomic analysis between human and chimpanzee was performed using UCSC Genome Browser liftOver 39 between human (GRCh38, or hg38 chrY:10,203,170–10,214,883) and the chimpanzee genome (panTro5 chrY:15,306,523–15,356,698, with 100% span at 97.3% sequence identity). Alpha satellite and adjacent repeats in the chimpanzee genome that share limited sequence homology with human were determined using the UCSC Table Browser repeat annotation 40 . The location of the centromere across primate Y chromosomes was determined by fluorescence in situ hybridization (FISH) ( Supplementary Fig. 14 ). Preparation of mitotic chromosomes and BAC-based probes was carried out according to standard procedures 41 .
Primate cell lines were obtained from Coriell: Pan paniscus (bonobo) AG05253; Pan troglodytes (common chimpanzee) S006006E. Male gorilla fibroblast cells were provided by Stephen O'Brien (National Cancer Institute, Frederick, MD) as previously discussed 42 . The HuRef cell line 34 (GM25430) was provided through collaboration with Samuel Levy. BAC DNAs were isolated from bacteria stabs obtained from CHORI BACPAC. Metaphase spreads were obtained after a 1 h 15 min colcemid/KaryoMAX (Gibco) treatment followed by incubation in a hypotonic solution. Cells were counterstained with 4′,6-diamidino-2-phenylindole (DAPI) (Vector). BAC DNA probes were labeled using Alexa Fluor dyes (488, green; 594, red) (ThermoFisher). The BAC probes were labeled with biotin-14-dATP by nick translation (Gibco), and the chromosomes were counterstained with DAPI. Microscopy, image acquisition, and processing were performed using standard procedures. Epigenetic mapping of centromere proteins. To evaluate the similarity between the HuRef DYZ3 reference model (GenBank: GJ212193) and our RP11 BAC assembly, we determined the relative frequency of each k-mer in the array (where k = 21, with a 1-bp slide, taking into account both forward and reverse sequence orientations, using Jellyfish software), normalized by the total number of observed k-mers, and compared the profiles using the Pearson correlation coefficient ( Supplementary Fig. 15 ). Enrichment across the RP11 Y assembly was determined using the log-transformed relative enrichment of each 50-mer frequency relative to the frequency of that 50-mer in the background control (GEO Accession: GSE45497 ID: 200045497 ), as previously described 5 . If a 50-mer was not observed in the ChIP background, its relative frequency was determined relative to the HuRef Sanger WGS read data (AADD00000000 WGSA) 34 . Average enrichment values were calculated for windows of size 6 kb ( Fig. 2 ). Additionally, CENP-A and CENP-C paired-read data sets (GEO Accession: GSE60951 ID: 200060951 ) 43 were merged (PEAR 27 , standard parameters) and mapped to all alpha satellite reference models in GRCh38. Reads that mapped specifically to the DYZ3 reference model were selected to study enrichment in the HOR array. The total number of bases mapped from the CENP-A and CENP-C data versus the input controls was used to determine relative enrichment. Next, reads that mapped specifically to the DYZ3 reference model were aligned to the DYZ3 5.7-kb consensus (indexed in tandem to avoid edge effects), and read depth profiles were determined. To characterize enrichment outside of the DYZ3 array, CENP-A, CENP-C, and input data were mapped directly to the RP11 Y assembly. Reads mapping to the DYZ3 array were ignored. Read alignments were only considered outside of the DYZ3 array if no mismatches, insertions, or deletions were observed relative to the reference and if the read could be aligned to a single location (removing any reads with a mapping score of 0). Sequence depth profiles were calculated by counting the number of bases at any position and normalizing by the total number of bases in each respective data set. Relative enrichment was obtained by taking the log-transformed normalized ratio of centromere protein (A or C) to input. Statistics. The Pearson correlation coefficient was used to determine a positive linear relationship in our data sets (as shown in Supplementary Figs. 10a and 15a ). Simulation experiments using Illumina short-read data were performed using 100 replicates. The representative gel image shown ( Supplementary Fig.
6 ) was repeated ten times, or once for each BAC in our study, with consistent results. The representative Southern blot (shown in Supplementary Fig. 4a ) was repeated twice with different restriction enzymes, with the same results. Centromere Y position analysis using FISH on a panel of primates was repeated at least two times, and results were invariable between experiments and between hybridization patterns within multiple metaphase spreads within a given experiment. Code availability. This study used previously published software: alignments were performed using BLASR 23 (version 1.3.1.124201) and BWA-MEM 28 (0.7.12-r1044). Consensus alignments were obtained using kalign 24 (version 2.04). Global alignments of HORs used needle 44 (EMBOSS:6.5.7.0). Repeat characterization was performed using RepeatMasker (Smit, A.F.A., Hubley, R. & Green, P. RepeatMasker Open-4.0, 2013–2015). Satellite monomers were determined using a profile hidden Markov model (HMMER3) 26 . Jellyfish (version 2.0.0) 25 was used to characterize k-mers. Illumina read simulations were performed using ART (version 2.5.8) 29 . PEAR 27 (version 0.9.0) was used to merge paired-read data. Comparative genomic analysis between human and chimpanzee was performed using UCSC Genome Browser liftOver 39 . Additional scripts used in preparing sequences before consensus generation are deposited in GitHub. Life Sciences Reporting Summary. Further information on experimental design is available in the Life Sciences Reporting Summary. Data availability. Sequence data that support the findings of this study have been deposited in GenBank under BioProject ID PRJNA397218 and SRA accession codes SRR5902337 and SRR5902355 . BAC consensus sequences and the RP11-CENY array assembly are deposited under GenBank accession numbers MF741337 – MF741347 .
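As a closing illustration of the 50-mer enrichment calculation described under 'Epigenetic mapping of centromere proteins' above, the following is a simplified, hypothetical sketch (function and variable names are our own; a pseudocount is added to avoid division by zero, and base-2 logs are one reasonable choice of transform):

```python
import math

def kmer_enrichment_track(assembly_kmers, chip_counts, input_counts, pseudo=1e-9):
    """Per-position log enrichment of ChIP (e.g., CENP-A) over input: the
    frequency of each 50-mer tiled along the assembly in the ChIP reads,
    relative to its frequency in the background control, each frequency
    normalized by the total k-mer count of its data set."""
    chip_total = sum(chip_counts.values())
    input_total = sum(input_counts.values())
    return [math.log2((chip_counts.get(k, 0) / chip_total + pseudo) /
                      (input_counts.get(k, 0) / input_total + pseudo))
            for k in assembly_kmers]

def window_average(track, window=6000):
    """Average enrichment over non-overlapping 6-kb windows (cf. Fig. 2b)."""
    return [sum(track[i:i + window]) / len(track[i:i + window])
            for i in range(0, len(track), window)]
```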
Fifteen years ago, the Human Genome Project announced it had cracked the code of life. Nonetheless, the published human genome map was incomplete, and parts of our DNA remained to be deciphered. Now, a new study published in the journal Nature Biotechnology brings us closer to a complete genetic blueprint by using a nanotechnology-based sequencing technique. Like ancient Egyptian ruins covered in mysterious hieroglyphics, the letters and words in our genetic code remained unutterable for a long time. In an effort to solve this genetic cipher, the Human Genome Project, a collaborative international consortium, was created. The goal was to read out the DNA sequence – made up of four letters, or bases: A, T, G and C – of all human genes (the genome). In 2003, a near-complete map of the human genome was reported. The scientific community hailed the momentous event as a turning point, perhaps overshadowed only by the discovery of the double-helix structure of DNA. Indeed, for the first time in human history, we could read and understand the language of our "being". Yet, the assembled genome represented only 92% of the human genome. Gaps remained that could not be easily decrypted. For many researchers, that elusive 8% of the genome is a holy grail. The dark matter inside us all The unmappable genome is associated with "heterochromatin" (the dark matter of the genome, highly condensed), unlike "euchromatin" (light matter, the more loosely wound part of the genome). Euchromatin is gene-rich, while heterochromatin refers to the silent, repressed regions of our DNA. Euchromatin is full of unique DNA sequences. This means that finding a single- or low-copy DNA sequence, with all the same DNA bases in the same order, at more than one location in our genome is highly unlikely. These discrete DNA sequences are easily distinguishable and serve distinct purposes within our cells. No wonder the human genome has almost 20,000 different genes with limited redundancy. Now, visualize a human chromosome as a big "X", made of coiled-up DNA, with two arms attached at a constriction. Heterochromatin is mostly localised near the point of attachment (the centromere) and the tips of the arms (the telomeres). In fact, the centromere becomes indispensable when cells divide, dragging along one chromosome arm into each of the newly formed daughter cells. DNA sequencing technologies operate by reading each base of DNA, one at a time, and spitting out short "reads" that spell out the sequence being read. Thus, decoding unique, non-identical euchromatic DNA is straightforward, because one stretch can be told apart from another with little ambiguity. The problem arises when we try to enunciate heterochromatic sequences comprising strings of DNA that look like each other. Arranged in tandem arrays or dispersed throughout our genome, these highly repetitive stretches of DNA amount to garbled gibberish after conventional DNA sequencing. One small chunk of DNA (a monomer) at the centromere resembles the identical chunks flanking it, and so on. In the resulting quagmire, the base composition and precise position of any given repeated sequence cannot be ascertained in a long polymer of repeats. Made up of millions of repeating A, T, G, C bases, the centromeres of human chromosomes have evaded biologists and explain the holes in our current DNA map. Threading the genome into a tiny needle The new study, from the team of Dr.
Karen Miga at the University of California, Santa Cruz, has managed to uncover the centromere of the Y chromosome – the male-specific chromosome and also the smallest chromosome in our genome (something worth thinking about). The researchers were able to insert a longer stretch of DNA into a nanopore (like thread passed through the eye of a needle), "resulting in complete, end-to-end sequence coverage of the entire insert". Using this nanopore-sequencing method, the researchers can now decipher a long, muddled DNA stretch full of repeats. This "long-read" strategy allowed them to string together longer pieces of DNA (made up of variable repeat monomer lengths). It turns out that when all these chunks are laid out, certain clues help reconstruct the repetitive sequence. Walking along the centromere, from left to right, context is provided by surrounding monomers in the same tandem array and by flanking non-repetitive DNA. Like a neatly laid section of railroad, the authors pieced together a chain of contiguous DNA sequences and solved the jigsaw puzzle of the Y chromosome centromere. This recent work, published in the journal Nature Biotechnology, plugs holes in the existing human DNA map. In the future, finding out the DNA sequences that define other centromeres will allow researchers to rewrite, manipulate, alter or duplicate these key structures. Given that the centromere is essential for cells to divide and segregate their genetic content to future generations, the Y centromere assembly represents an exciting step forward in modern biology.
10.1038/nbt.4109
Physics
Symmetry-enforced three-dimensional Dirac phononic crystals
Xiangxi Cai et al, Symmetry-enforced three-dimensional Dirac phononic crystals, Light: Science & Applications (2020). DOI: 10.1038/s41377-020-0273-4 Journal information: Light: Science & Applications
http://dx.doi.org/10.1038/s41377-020-0273-4
https://phys.org/news/2020-03-symmetry-enforced-three-dimension-dirac-phononic-crystals.html
Abstract Dirac semimetals, materials featuring fourfold-degenerate Dirac points, are critical states of topologically distinct phases. Such gapless topological states have been accomplished by a band-inversion mechanism, in which the Dirac points can be annihilated pairwise by perturbations without changing the symmetry of the system. Here, we report an experimental observation of Dirac points that are enforced completely by crystal symmetry, using a nonsymmorphic three-dimensional phononic crystal. Intriguingly, our Dirac phononic crystal hosts four spiral topological surface states, in which the surface states of opposite helicities intersect gaplessly along certain momentum lines, as confirmed by additional surface measurements. The novel Dirac system may open up new opportunities for studying elusive (pseudo)relativistic Dirac physics and offer a unique prototype platform for acoustic applications. Introduction The discovery of new topological states of matter has become a vital goal in fundamental physics and material science 1 , 2 . A three-dimensional (3D) Dirac semimetal (DSM) 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 , accommodating many exotic transport properties such as anomalous magnetoresistance and ultrahigh mobility 14 , 15 , is an exceptional platform for exploring topological phase transitions and other novel topological quantum states. It is also of fundamental interest as a solid-state realization of a (3 + 1)-dimensional Dirac vacuum. A DSM phase may appear accidentally at the quantum transition between normal and topological insulators 16 , 17 . The approach to such a single critical point demands fine-tuning of the alloy's chemical composition, which limits experimental access to the fascinating physics of 3D Dirac fermions. 3D DSMs can also emerge without fine-tuning of parameters and fall into two classes 3 , 4 . The first one, already realized in Na 3 Bi 7 , 8 and Cd 3 As 2 9 , 10 , occurs due to band inversion 5 , 6 . The Dirac points, lying at generic momenta on a specific rotation-symmetry axis, always come in pairs and can be eliminated by their merger and pairwise annihilation through the continuous tuning of parameters 3 , 4 that preserve the symmetry of the system. The second class features Dirac points that are pinned stably to discrete high-symmetry points on the surface of the Brillouin zone (BZ). Markedly different from the first class of DSMs, the occurrence of these Dirac points is an unavoidable result of the nonsymmorphic space group of the material 11 , 12 , 13 and cannot be removed without changing the crystal symmetry. Although some solid-state candidate materials have been proposed 4 , 11 , 12 , symmetry-enforced 3D DSMs have never been experimentally realized because of the great challenge in synthesizing such materials 4 , 7 . Recently, numerous distinct topological states have been demonstrated in classical wave systems 18 , 19 , such as photonic crystals 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 and phononic crystals 29 , 30 , 31 , 32 , 33 , 34 , which offer opportunities for exploring topological physics in a fully controllable manner. Here, we report an experimental realization of a 3D phononic crystal that hosts symmetry-enforced Dirac points at the BZ corners. The fourfold degeneracy is protected by a nonsymmorphic space group that couples point operations (rotations and mirrors) with nonprimitive lattice translations.
In addition to the Dirac points identified directly by angle-resolved transmission measurements, highly intricate quad-helicoid surface states are unveiled by our surface measurements and the associated Fourier spectra. Specifically, the surface states are composed of four gaplessly crossed spiral branches 13 and are thus strikingly different from the double Fermi-arc surface states observed recently in electronic 8 and photonic systems 28 . Excellent agreement is found between our experiments and simulations. As illustrated in Fig. 1a , our Dirac phononic crystal has a body-centered-cubic (bcc) lattice with lattice constant a = 2.8 cm. The main body of the building block consists of four inequivalent resin cylinders, which are labeled with different colors and oriented along different bcc lattice vector directions. All the cylinders have a regular hexagonal cross section with a side length of 0.42 cm. To facilitate sample fabrication, these cylinders are connected by short hexagonal bars with side lengths of 0.21 cm. The remainder of the volume is filled with air. Numerically, the photosensitive resin material used for printing the acoustic structure is treated as rigid, and sound propagates only in air (at a speed of 342 m/s), considering the great acoustic impedance mismatch between the resin and air. Fig. 1: Symmetry-enforced Dirac points and quad-helicoid topological surface states in a nonsymmorphic phononic crystal. a Schematics of the bcc unit (left panel) of the phononic crystal and its (010) surface (right panel), featuring two glide mirrors G x and G z . b 3D bcc BZ and its (010) surface BZ. The colored spheres highlight the bulk Dirac points of equal frequency and their projections onto the surface BZ. c Bulk bands simulated along several high-symmetry directions. d Schematic of the quad-helicoid surface-state dispersions (color surfaces), where the gray cone labels the projection of bulk states. e Surface bands simulated along a circular momentum loop of radius 0.4 π / a (as shown in f ) centered at \({\bar{\mathrm P}}\) . The shadow regions indicate the projected bulk states. f 3D plot of the surface dispersion simulated in the first quadrant of the surface BZ. Bulk band projections are not shown for clarity. The crosslinked network structure belongs to the nonsymmorphic space group 230 \((Ia\bar 3d)\) , featuring inversion symmetry and multiple screw rotations and glide reflections. The crystal symmetry enables rich point and line degeneracies (see Supplementary Materials). Interestingly, the little group at P and P', a pair of time-reversal-related BZ corners (Fig. 1b ), has 24 group elements and supports only fourfold degeneracy. This is confirmed by the band structure in Fig. 1c , which shows two distinct kinds of Dirac points at P (P'). The first kind, crossed by bands of different slopes and thus called generalized Dirac points 22 (e.g., the lowest ones at P and P' in Fig. 1c ), corresponds to a four-dimensional irreducible representation, whereas the second kind, crossed by bands of identical slopes (see Fig. S1 in Supplementary Materials), corresponds to two inequivalent two-dimensional irreducible representations stuck together by time-reversal symmetry. Hereafter, we focus on the latter case (as specified by the color spheres in Fig. 1c ), around which the bands are rather clean and carry a wide frequency window of linear dispersion.
The system can be captured by a simple four-band effective Hamiltonian derived from \(k \cdot p\) theory, \(\mathcal{H} = \begin{pmatrix} 0 & H \\ H^\dagger & 0 \end{pmatrix}\) , where \(H = \eta \left( \delta k_y \sigma_x - \delta k_x \sigma_y + \delta k_z \sigma_z \right)\) , η is a complex parameter determined by the acoustic structure, \((\delta k_x, \delta k_y, \delta k_z)\) characterizes the momentum deviation from P, and the σ i are Pauli matrices (see Supplementary Materials). The Dirac model gives isotropic linear dispersions around the Dirac point (made explicit in the worked expression following this passage), quite different from the anisotropic ones observed previously 7 , 8 , 9 , 10 , 28 . A nontrivial Z 2 topological invariant, defined on a momentum sphere enclosing the Dirac point, can be used to characterize the topology of such fourfold band-closing points 13 . This invariant is derived by considering the pseudo-anti-unitary symmetry \(\vartheta\) composed of a glide reflection \(G\) and time-reversal symmetry \(T\) , i.e., \(\vartheta = G \ast T\) with \(\vartheta^2 = -1\) . In addition, markedly different from Dirac points created by band inversion 5 , 6 , which can be annihilated pairwise without changing the crystal symmetry, here the Dirac points are guaranteed completely by the nonsymmorphic symmetries. The topological robustness of the Dirac points against symmetry-preserving perturbations has been verified numerically with two detailed examples (Supplementary Materials, Fig. S2 ). Unlike Weyl semimetals, which host topologically nontrivial Fermi arcs on their surfaces 35 , the presence of topological surface states in a DSM is more subtle because the Dirac points carry a zero Chern number 3 , 4 , 36 . However, for a nonsymmorphic DSM whose Dirac points feature a nontrivial Z 2 index, the band-crossing points are pairwise connected by symmetry-protected Fermi arcs on the surface, with a unique connectivity determined by the nontrivial Z 2 topological charge 13 . The dispersion of the topological surface states can be mapped to an intersecting multihelicoid structure, where the intersections between the helicoids are protected from being gapped by the glide symmetries preserved on the specific surface. In our case, the Dirac phononic crystal supports elusive quad-helicoid surface states 13 if truncated at the (010) surface or its equivalents, which can be characterized by the wallpaper group p2gg. Below, we focus on the (010) surface, which preserves the two glide mirrors \(G_x = \{ M_z \mid (a/2)\hat{x} + (a/2)\hat{z} \}\) and \(G_z = \{ M_x \mid (a/2)\hat{z} \}\) of the bulk crystal (Fig. 1a ). For this specific crystal surface, the two inequivalent Dirac points are projected onto the four equivalent surface BZ corners \({\bar{\mathrm P}}\) (Fig. 1b ). As schematically illustrated in Fig. 1d , the quad-helicoid surface states feature two crucial signatures. First, there are four branches of spiral surface states for any given momentum loop enclosing \({\bar{\mathrm P}}\) : two with positive helicity and two with negative helicity. Figure 1e shows the gapless surface bands simulated along a circular loop centered at \({\bar{\mathrm P}}\) . Second, the surface states of opposite helicities intersect along certain momentum lines, where the intersecting double degeneracies are protected by the glides G x and G z assisted by time-reversal symmetry 13 .
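As a brief aside (our own standard \(k \cdot p\) manipulation, not spelled out in the text above): because the Pauli matrices are Hermitian and anticommute, squaring the effective Hamiltonian gives

$$\mathcal{H}^2 = \begin{pmatrix} H H^\dagger & 0 \\ 0 & H^\dagger H \end{pmatrix} = |\eta|^2 \, |\delta \mathbf{k}|^2 \, \mathbb{1}_4, \qquad E_\pm(\delta \mathbf{k}) = \pm \, |\eta| \, |\delta \mathbf{k}|,$$

so each branch is twofold degenerate and the cone is isotropic, depending only on \(|\delta \mathbf{k}|\) and not on its direction.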
This effect is exhibited clearly in the simulated global dispersion profile (Fig. 1f ), which shows nodal-line degeneracies along the surface BZ boundaries \({\bar{\mathrm {P}}}{\bar{\mathrm {X}}}\) and \({\bar{\mathrm {P}}}{\bar{\mathrm {Z}}}\) . (Only one-quarter of the surface BZ is shown, owing to the presence of the two glides.) For a generic choice of crystal surface, the nodal-line degeneracy of the surface dispersion disappears due to the absence of glide symmetries (Supplementary Materials, Fig. S3 ). The presence of symmetry-enforced Dirac points was confirmed by angle-resolved transmission measurements. Figure 2a shows our experimental setup. The sample, fabricated precisely by 3D printing, measures 47.6 cm × 14.0 cm × 47.6 cm along the x , y , and z directions, respectively. A rectangular acoustic horn was used to launch a collimated beam onto the (010) surface of the sample, where the incident direction is characterized by the angles θ and φ . As illustrated in Fig. 2b , a bulk state is expected to be excited when its in-plane momentum \(\vec{k}_{||}\) matches that of the incident wavevector, \(\vec{k}_{in} \sin\theta\) , at the same frequency. The transmitted sound signal was scanned by a 1/4-inch microphone (B&K Type 4958-A) and recorded by a multi-analyzer system (B&K Type 3560B). The averaged sound intensities were normalized to those measured in the absence of the sample. The bulk states were mapped out by varying θ and φ . Here, only \(\varphi \in [0, 45^\circ]\) was considered, thanks to the multiple glide mirrors of the system. (For completeness, similar data for \(\varphi \in [45^\circ, 90^\circ]\) are provided in Supplementary Materials, Fig. S4 .) Specifically, at φ = 45°, the incident beam scans through the Dirac point. Figure 2c shows the transmission data measured for six representative φ values, compared with the numerical bulk dispersions projected along the corresponding directions (insets). All the transmission spectra agree reasonably well with the numerical band structures, where the low transmission near the sound cone can be attributed to the smaller effective cross-sectional area of the sample at large θ . In particular, as expected for φ = 45°, a conical touching is observed at approximately 15.3 kHz in frequency and 0.71 π / a in wavevector. The point crossing is lifted gradually as φ decreases from 45°. Fig. 2: Experimental identification of the symmetry-enforced Dirac points. a Experimental setup for measuring sound transmission. b Schematic of exciting bulk states according to the momentum-conservation condition \(\vec{k}_{||} = \vec{k}_{in} \sin\theta\) . c θ -resolved transmission spectra measured for different φ values. The slanted boundary (green line) in each panel corresponds to the 'sound cone' \(|\vec{k}_{||}| = |\vec{k}_{in}|\) , beyond which no transmission can be measured. Insets: simulated bulk states (shadow regions) projected along the y direction, scaled to the same range and ratio as the measured data. Furthermore, we performed surface measurements to identify the highly intricate topological quad-helicoid surface states, which have not been experimentally observed in any topological system to date. Figure 3a shows our experimental setup.
Furthermore, we performed surface measurements to identify the highly intricate topological quad-helicoid surface states, which have not been experimentally observed in any topological system to date. Figure 3a shows our experimental setup. To mimic the rigid boundary condition used in our simulations, an additional resin plate with a thickness of 0.2 cm was integrated on the (010) surface; it served as a trivial acoustic insulator that guarantees the presence of topological surface states. Since the typical air channels of the sample are too narrow to accommodate the sound source and probe directly, the plate was perforated with a square lattice of holes (see inset): one hole was reserved for inserting the sound source, another for locating the probe during the measurement, and the holes not in use were sealed to avoid coupling with the air background surrounding the sample. To excite surface states, a broadband point-like sound source, launched from a subwavelength-sized tube, was injected into one hole near the center of the sample surface. The localized surface field was scanned hole-by-hole by manually moving the probe, with the scanning step given by the lattice spacing of the holes (1.4 cm). By Fourier transforming the surface pressure field, we mapped out the nontrivial surface arc for any desired frequency 31 (see the sketch after Fig. 3 ). Figure 3b shows such data for a sequence of frequencies. As predicted by the simulations, the measured surface arcs (bright color) exhibit clear crossings at the surface BZ boundaries \({\bar{\mathrm {X}}}{\bar{\mathrm {X}}}^{\prime}\) and \({\bar{\mathrm {Z}}}{\bar{\mathrm {Z}}}^{\prime}\) . Our experimental results faithfully capture the simulated isofrequency contours of the topological surface states (black lines), despite the band broadening due to the finite-size effect. Note that the amplitude signals of the bulk states (enclosed by white dashed lines) are much weaker than those of the topological surface states, which are highly confined to the surface. To further identify the gapless quad-helicoid surface states, we present the surface spectra (Fig. 3c ) measured along the momentum loop specified in the first panel of Fig. 3b . Compared with the loop used in Fig. 1e , this square loop enclosing the \({\bar{\mathrm P}}\) point is larger and better suited to demonstrating the gapless intersection of the surface bands within a wide bulk gap. As expected, two pairs of surface bands with opposite helicities traverse the bulk gap and cross stably at the high-symmetry momenta \({\bar{\mathrm X}}\) ( \({\bar{\mathrm {X}}}^{\prime}\) ) and \({\bar{\mathrm Z}}\) ( \({\bar{\mathrm {Z}}}^{\prime}\) ). Again, excellent agreement is found between our experiment and simulation. Fig. 3: Experimental observation of quad-helicoid topological surface states. a Experimental setup for the surface field measurements. The inset shows the details of the cover plate with circular holes opened or sealed. The plugs that sealed the holes were opened one-by-one during the measurement. b Isofrequency contours plotted in one surface BZ centered at \({\bar{\mathrm P}}\) (see the first panel). The color scale shows the experimental data compared with the corresponding simulation results (black curves). The orange spheres label the projected Dirac points, and the white dashed lines enclose the bulk band projections. c Frequency-dependent surface spectra (color scale) measured along the momentum path specified in the first panel of b . Full size image
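The Fourier mapping just described reduces to a two-dimensional discrete Fourier transform of the complex pressure field sampled on the 1.4 cm hole grid. A minimal sketch, with a synthetic plane wave standing in for the measured field; the test wavevector values and the 34 × 34 grid are arbitrary assumptions.

```python
import numpy as np

dx = 0.014  # scan step (m): the lattice spacing of the holes quoted above

def isofrequency_map(p_field):
    """Fourier-transform a complex surface pressure field (amplitude and phase,
    one sample per hole) into momentum space; the bright maxima trace the
    surface arcs at the chosen frequency."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(p_field))) ** 2
    kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(p_field.shape[1], d=dx))
    kz = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(p_field.shape[0], d=dx))
    return kx, kz, spec

# Synthetic test: a single plane wave with wavevector (kx0, kz0) in rad/m
x = np.arange(34) * dx
X, Z = np.meshgrid(x, x)
kx0, kz0 = 60.0, 40.0
kx, kz, spec = isofrequency_map(np.exp(1j * (kx0 * X + kz0 * Z)))
iz, ix = np.unravel_index(spec.argmax(), spec.shape)
print(kx[ix], kz[iz])  # peak lands near (60, 40), within one k-space bin
```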
In conclusion, we have constructed and characterized a spinless Dirac crystal for airborne sound, which exhibits highly intricate properties in both its bulk and surface states, in sharp contrast to those realized previously in condensed-matter systems 7 , 8 , 9 , 10 . The topological origin of the quad-helicoid surface states deserves further investigation. Notably, in a very recent study, S. Zhang et al. took a first step towards an experimental study of 3D Dirac points in classical wave systems 28 . Interestingly, their Dirac points are constructed from electromagnetic duality symmetry (which is unique to electromagnetic systems), strikingly different from the crystalline symmetry involved here. Starting from our structure, one can design various interesting 3D acoustic topological states (e.g., Weyl points 29 , 31 and line nodes 37 , 38 ) through symmetry reduction. This study may open new routes for controlling sound, such as realizing unusual sound scattering and radiation, given the conical dispersion and vanishing density of states around the Dirac points. Last but not least, the dispersion around the Dirac point is isotropic, and thus our macroscopic system serves as a good platform to simulate relativistic Dirac physics. Methods Numerical simulations All simulations were performed using COMSOL Multiphysics, a commercial solver package based on the finite element method. The bulk band structure in Fig. 1c was calculated for a single unit cell imposed with specific Bloch boundary conditions. Similar calculations gave the projected bulk states along the y direction (Fig. 2c , shadowed regions). A ribbon structure was used to calculate the surface bands for a desired surface (Fig. 1e, f and Fig. 3b, c ), imposed with Bloch boundary conditions along the x and z directions and a rigid boundary condition along the y direction. The ribbon was long enough to avoid coupling between the opposite surfaces. Surface states were distinguished from the projected bulk states by inspecting the surface localization of the eigenstates. Experimental measurements Our experiments were performed with airborne sound at audible frequencies. The slab-like sample, consisting of 17 × 5 × 17 structural units along the x , y and z directions, was prepared from photosensitive resin via 3D printing. The macroscopic characteristics of our acoustic system enable precise sample fabrication and less demanding signal detection. To excite the bulk states, a rectangular acoustic horn (with a surface area of 24.0 cm × 10.0 cm) was used to launch Gaussian beams at controllable orientations (Fig. 2a ), whereas a narrow tube (with a diameter of 0.8 cm) was used to deliver point-like sound signals to excite the topological surface states (Fig. 3a ). During both measurements, a portable microphone was moved over the x–z plane to scan the pressure fields, together with another identical microphone fixed for phase reference. Both the amplitude and phase information of the input and output signals, swept from 11.8 kHz to 18.2 kHz with an increment of 0.032 kHz, were recorded and analyzed by a multi-analyzer system. To map out each surface arc at a given frequency (Fig. 3b ), a two-dimensional Fourier transformation was performed on the scanned surface field; this further gave the frequency-dependent surface spectra along the specific momentum loop (Fig. 
3c ). Data availability The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request. Code availability All codes that support this study are available from the corresponding author upon reasonable request.
Dirac semimetals are critical states between topologically distinct phases. Such gapless topological states have so far been realized through a band-inversion mechanism, in which the Dirac points can be annihilated pairwise by perturbations without changing the symmetry of the system. Here, scientists in China report an experimental observation of Dirac points that are enforced completely by crystal symmetry, using a nonsymmorphic phononic crystal, and demonstrate novel topological surface states in their experiments. The discovery of new topological states of matter has become a vital goal in fundamental physics and materials science. A three-dimensional (3-D) Dirac semimetal (DSM), accommodating many exotic transport properties such as anomalous magnetoresistance and ultrahigh mobility, is an exceptional platform for exploring topological phase transitions and other novel topological quantum states. It is also of fundamental interest as a solid-state realization of a (3+1)-dimensional Dirac vacuum. So far, the realized Dirac points have always come in pairs and can be eliminated by their merger and pairwise annihilation through the continuous tuning of parameters that preserve the symmetry of the system. In a new paper published in Light: Science & Applications, scientists from the Key Laboratory of Artificial Micro- and Nano-Structures of the Ministry of Education and School of Physics and Technology, Wuhan University, China, report an experimental realization of a 3-D phononic crystal that hosts symmetry-enforced Dirac points at the Brillouin zone corners. Markedly different from existing DSMs, the occurrence of the Dirac points is an unavoidable consequence of the nonsymmorphic space group of the material and cannot be removed without changing the crystal symmetry. In addition to the Dirac points identified directly by angle-resolved transmission measurements, highly intricate quad-helicoid surface states are unveiled by the surface measurements and associated Fourier spectra. Specifically, the surface states are composed of four gaplessly crossed spiral branches and are thus strikingly different from the double Fermi-arc surface states observed recently in electronic and photonic systems. "This study may open up new ways of controlling sound, such as realizing unusual sound scattering and radiation, considering the conical dispersion and vanishing density of states around the Dirac points. The dispersion around the Dirac point is isotropic, and thus, our macroscopic system serves as a good platform to simulate relativistic Dirac physics," the scientists forecast.
10.1038/s41377-020-0273-4
Biology
'Dream' discovery could sow crops better equipped to weather the climate change storm
Graham Farquhar, Humidity gradients in the air spaces of leaves, Nature Plants (2022). DOI: 10.1038/s41477-022-01202-1. www.nature.com/articles/s41477-022-01202-1 Journal information: Nature Plants
https://dx.doi.org/10.1038/s41477-022-01202-1
https://phys.org/news/2022-08-discovery-crops-equipped-weather-climate.html
Abstract Stomata are orifices that connect the drier atmosphere with the interconnected network of more humid air spaces that surround the cells within a leaf. Accurate values of the humidities inside the substomatal cavity, w i , and in the air, w a , are needed to estimate stomatal conductance and the CO 2 concentration in the internal air spaces of leaves. Both are vital factors in the understanding of plant physiology and climate, ecological and crop systems. However, there is no easy way to measure w i directly. Out of necessity, w i has been taken as the saturation water vapour concentration at leaf temperature, w sat , and applied to the whole leaf intercellular air spaces. We explored the occurrence of unsaturation by examining gas exchange of leaves exposed to various magnitudes of w sat − w a , or Δ w , using a double-sided, clamp-on chamber, and estimated degrees of unsaturation from the gradient of CO 2 across the leaf that was required to sustain the rate of CO 2 assimilation through the upper surface. The relative humidity in the substomatal cavities dropped to about 97% under mild Δ w and as dry as around 80% when Δ w was large. Measurements of the diffusion of noble gases across the leaf indicated that there were still regions of near 100% humidity distal from the stomatal pores. We suggest that as Δ w increases, the saturation edge retreats into the intercellular air spaces, accompanied by the progressive closure of mesophyll aquaporins to maintain the cytosolic water potential. Main The question of whether the internal spaces of a leaf can become undersaturated under high evaporative conditions has remained unresolved for decades. In such a situation, the transpiration rate has to be reduced by mechanisms other than stomatal closure. Jarvis and Slatyer 1 discussed mechanisms proposed to account for non-stomatal control of transpiration, should it occur. The preferred option was “incipient drying” 2 —the retreat of evaporation sites into the mesophyll cell walls, relying on increasingly smaller pore throats and the Kelvin effect. Jarvis and Slatyer 1 measured the resistances to the diffusion of nitrous oxide introduced to one side of a cotton leaf and compared them with the corresponding water vapour resistances of the same leaf, and suggested that relative humidity (RH) inside the leaf could be as low as 70%. This RH would require a water potential of the liquid water of −49 MPa, but most plants lose turgor at −2 to −5 MPa and reach a lethal leaf water potential not much after the turgor loss point. These researchers expressed the reduction in humidity as the product of the transpiration flux and a “wall resistance” but gave no explanation of the resistance. In contrast, Farquhar and Raschke 3 performed a similar experiment with cotton and other species using helium but saw no evidence of a resistance to transpiration within the leaves, suggesting that the humidity inside the substomatal cavity, w i , was near saturation and the water potential was close to zero. Egorov and Karpushkin 4 measured transpiration rates in air and in mixtures of helium and oxygen and concluded that the intercellular RH could be 90% to 85%. Canny and Huang 5 collected Eucalyptus pauciflora leaf discs at midday during late summer and concluded that intercellular RH could be as low as 90%. 
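The −49 MPa figure quoted above follows from the Kelvin relation between relative humidity and water potential, ψ = (RT/V̄w)·ln(RH). A quick check, assuming 25 °C and the standard molar volume of liquid water (both values are assumptions, as the text does not fix them here):

```python
import numpy as np

R = 8.314         # J mol^-1 K^-1
T = 298.15        # K, assumed leaf temperature
Vw = 1.805e-5     # m^3 mol^-1, molar volume of liquid water

def water_potential(rh):
    """Water potential (MPa) in equilibrium with fractional relative humidity rh,
    from the Kelvin relation psi = (R*T/Vw)*ln(rh)."""
    return R * T / Vw * np.log(rh) / 1e6

for rh in (0.99, 0.97, 0.90, 0.80, 0.70):
    print(f"RH {rh:.0%}: psi = {water_potential(rh):6.1f} MPa")
# RH 70% -> about -49 MPa, matching the figure quoted above;
# RH 97% and 90% -> about -4.2 and -14 MPa.
```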
If the substomatal cavities are unsaturated, then in the standard gas exchange calculations 6 , 7 , the estimation of critical values such as apparent leaf conductance to water vapour, g , and the CO 2 concentration in substomatal cavities, c i , would give lower values than the true ones. In 2008, we found gradients of c i in several species that seemed to be incompatible with the 100% RH assumption of the gas exchange calculations. This led to further efforts over subsequent years to corroborate these results. As a consequence, Cernusak et al. 8 measured the oxygen isotope composition of transpiration and CO 2 assimilation and concluded that water-stressed conifers were experiencing intercellular RH as low as 80%. This was followed by the examination, using the same isotopic techniques, of wild-type Populus leaves as well as a transgenic variety insensitive to abscisic acid that fails to close stomata at high transpiration rates. As humidity decreased, the abscisic-acid-insensitive plants lost saturation 9 . Holloway-Phillips et al. 10 used a two-source method of contrasting oxygen isotopic composition and similarly found that in some cases intercellular RH appeared less than 100%. Despite these recent findings, there has been considerable scepticism 11 because of the lack of a known mechanism 12 to enforce the very low water potentials required to sustain unsaturation in the intercellular mesophyll air space. Theory For evaporation to occur, there will be a water vapour concentration gradient from the sites of evaporation through the stomatal pores to the ambient air. The question becomes: at what depth in the interior of the leaf is the air space saturated? The need for humidity gradients within the substomatal cavity to support vapour flux to the stomatal pore implies that where the saturation water vapour concentration at leaf temperature, w sat , is found depends on the stomatal aperture and the difference between w sat and the humidity in the air ( w a ), Δ w . However, it is reasonable to assume that under low Δ w (for example, Δ w < 8 mmol mol −1 ), the saturation edge is a surface around the entry of the stomatal pore within the leaf. We will refer to this surface as the saturation front ( w sat ). Thus, under a low Δ w , we assume that the whole intercellular air space is saturated so that w i = w sat (Fig. 1a ). In this condition, we assume that the pathway for water vapour and CO 2 between w sat and the atmosphere over the leaf surface ( w s ) is the same 13 , through the stomatal pore (Fig. 1a ). We can then estimate the stomatal resistance to water vapour ( r sw ) directly and estimate the stomatal resistance to CO 2 ( r sc ) from the ratio of water vapour and CO 2 diffusivities in air as r sc = 1.6 r sw . We denote the resistance to diffusion between stomatal cavities inside the upper and lower surfaces of the leaf as the intercellular air space resistance ( R ias ). A list of the abbreviations used is found in Supplementary Section 1 . Fig. 1: Diagram of the saturation front moving deeper within the leaf as Δ w increases. ( a ) represents the resistances in the water vapour path at low Δ w , ( b ) under moderate Δ w and ( c ) under high Δ w . 
Δ w stands for w sat - w a , the subscripts u and l denote upper and lower, w a is air humidity, w s is humidity at the leaf surfaces, w i is humidity at the substomatal cavity, w sat is the saturation front, r bw is the boundary layer resistance to water vapour, r sw is the stomatal resistance to water vapour and r unsat is the unsaturated mesophyll air space resistance. Full size image We postulate that under high Δ w , w sat moves farther into the leaf, departing from w i (Fig. 1b,c ), so when w i is assumed equal to w sat , we are adding to the estimation of r sw part of the resistance within the intercellular air space that is now unsaturated ( r unsat )—that is, incorrectly estimating r sw as r sw + r unsat . This generates an overestimation of r sc by the addition of an equivalent resistance to CO 2 , resulting in an underestimated c i . Note that the apparent CO 2 concentration is unlikely to be realized at the same place as w sat , as the CO 2 and water probably follow different pathways after crossing the stomatal pore. However, by measuring the two surfaces of a leaf separately, we can track the apparent CO 2 concentration in the upper and lower substomatal cavities (Fig. 2 ). Fig. 2: Uncorrected gas exchange measurements of a cotton leaf using a double-sided, clamp-on chamber. a , A , E and g as functions of increasing Δ w = w sat − w a . b , The apparent c i of the upper and lower substomatal cavities, and the c i difference (upper minus lower) between them, are plotted as functions of Δ w = w sat − w a . The dotted line denotes zero difference between upper (adaxial) c i and lower (abaxial) c i . Photosynthetically active radiation was fixed at 1,000 µmol m −2 s −1 . Full size image Experimental approach to reconciling unsaturation Our approach was to follow Sharkey et al. 14 and Parkhurst et al. 15 by measuring gas exchange separately on the two sides of a leaf, and then reducing the ambient CO 2 concentration ( c a ) at the lower surface until the assimilation rate at that surface was zero. By repeating this measurement with both sides of the leaf exposed to an increasing w sat − w a (Δ w ), the gradient of [CO 2 ] from the upper ‘fed’ surface to the lower surface ( c iu − c il , where ‘u’ and ‘l’ denote upper and lower) could be examined as a function of Δ w . The gradient should be positive, but the results were striking. As Δ w increased, the apparent c iu − c il gradient decreased, even becoming negative at large Δ w , which is impossible, as the CO 2 has been fed through the upper surface. As w i is the only input in the calculation that is not directly measured, we argue that this is evidence of unsaturation in the substomatal cavity—that is, that w i is less than w sat , and therefore w i − w a < Δ w . The true c iu − c il at each Δ w should reflect the normal diffusion of gases, assuming there are no structural changes in the leaf. Thus, here we took the observed, apparent value of c iu − c il at low Δ w , multiplied by the CO 2 assimilation rate ( A ) at the current Δ w , divided by the value of A at lowest Δ w (Supplementary Section 2 ). This yielded the target value of c iu − c il , and so w i was then adjusted in the gas exchange calculations until the calculated value of c iu − c il matched the target. Figure 2a shows the rates of CO 2 assimilation ( A ), transpiration ( E ) and g of a cotton leaf, plotted as functions of Δ w . The c au in the upper leaf chamber was around 400 μmol mol −1 . 
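A minimal sketch of the w i adjustment described above, using a simplified, non-ternary conductance model in place of the full gas exchange calculations and a single w i shared by both surfaces; all numbers in the usage example are illustrative placeholders, not measured values.

```python
from scipy.optimize import brentq

def ci(ca, A, E, wi, wa):
    """Simplified (non-ternary) substomatal CO2 for one surface:
    conductance g = E/(wi - wa), then ci = ca - 1.6*A/g."""
    return ca - 1.6 * A * (wi - wa) / E

def corrected_wi(target_dci, w_sat, up, lo):
    """Adjust a common wi until the calculated ci_u - ci_l hits the target
    gradient (the observed low-Delta-w value scaled by A, as described above)."""
    def mismatch(wi):
        return (ci(up["ca"], up["A"], up["E"], wi, up["wa"])
                - ci(lo["ca"], lo["A"], lo["E"], wi, lo["wa"])) - target_dci
    return brentq(mismatch, up["wa"] + 1e-6, w_sat)

# Illustrative numbers only (mmol/mol for humidities, umol/mol for CO2,
# umol m-2 s-1 for A, mmol m-2 s-1 for E); A_l = 0 as in the protocol
up = dict(ca=400.0, A=22.0, E=10.0, wa=5.0)
lo = dict(ca=300.0, A=0.0, E=9.0, wa=5.0)
wi = corrected_wi(20.4, 30.0, up, lo)
print(f"w_i = {wi:.1f} mmol/mol -> RH = {wi/30.0:.0%}")   # ~92% for these inputs
```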
The c al was reduced to set A l to 0, while A u remained relatively constant at around 22 μmol CO 2 m −2 s −1 until Δ w exceeded 20.9 mmol mol −1 . E increased initially as Δ w increased and then began to decrease after Δ w reached 20.9 mmol mol −1 . Apparent water vapour conductance ( g ) decreased almost linearly as Δ w increased. Figure 2b presents the calculated c i of the upper and lower sides and the apparent c i difference ( c iu − c il ) of the cotton leaf plotted against Δ w . At Δ w of 7.1 mmol mol −1 , the cotton c i difference was 20.4 μmol mol −1 . The difference in c i decreased as Δ w increased, and at Δ w of 26.2 mmol mol −1 , the apparent c i difference became negative (−18.8 μmol mol −1 ). Similar data for a sunflower leaf are shown in Extended Data Fig. 1 . The experiment has been replicated 48 times, also by feeding the CO 2 on either the upper or lower surface, finding consistent results (see Supplementary Data 1 for ten examples). Other plant species tested, in which unsaturation was induced, include Phaseolus vulgaris, Xanthium strumarium, Eucalyptus pauciflora and Glycine max . Table 1 shows the values of c a , A , E , apparent g and c i at various Δ w for the cotton leaf shown in Fig. 2 , and Table 2 shows these values for the sunflower leaf shown in Extended Data Fig. 1 , as well as the values corrected to provide the target magnitude of c iu − c il . For the cotton leaf, at Δ w of 20.9 mmol mol −1 , the calculated RH of the substomatal cavity was 96%, and this decreased to 90% at 26.2 mmol mol −1 . Similarly, for the sunflower leaf, at Δ w of 10.1 mmol mol −1 , RH was 95%, and this decreased to 80% at 24 mmol mol −1 . Table 1 Gas exchange values estimated in a cotton leaf using routine calculations ( w i = w sat ) and corrected values adjusting w i to match the target c iu − c il Full size table Table 2 Gas exchange values estimated in a sunflower leaf using routine calculations ( w i = w sat ) and corrected values adjusting w i to match the target c iu − c il Full size table Measurements combining the c iu − c il technique (gas exchange) with the oxygen isotope technique presented by Cernusak et al. 8 and Holloway-Phillips et al. 10 (measuring 18 O in CO 2 and water) show close agreement in estimating the unsaturation of w i (Extended Data Fig. 2 ). Intercellular air space resistance to diffusion We measured the diffusion resistances across leaves using He, Ne and Ar. These gases have no biological interactions and are only trivially soluble in the leaf tissues. 
The noble gas was fed into the upper chamber and crossed the upper and lower boundary layers, the upper and lower stomata, and the mesophyll air space resistances ( r bu , r su , r bl , r sl and R ias ) in series, giving a total resistance to diffusion ( R x , where x is the noble gas): $$R_x = r_{\mathrm{bu}-x} + r_{\mathrm{su}-x} + R_{\mathrm{ias}-x} + r_{\mathrm{sl}-x} + r_{\mathrm{bl}-x} = R_{\mathrm{b}-x} + R_{\mathrm{s}-x} + R_{\mathrm{ias}-x}.$$ (1) We rearranged equation ( 1 ) to estimate the mesophyll air space resistance to the noble gas as $$R_{\mathrm{ias}-x} = R_x - R_{\mathrm{b}-x} - R_{\mathrm{s}-x}.$$ (2) From water vapour measurements, correcting w i as in Table 1 and Table 2 , we obtain the upper plus lower corrected stomatal resistance ( \(cR_{\mathrm{s-H_2O}} = cr_{\mathrm{su-H_2O}} + cr_{\mathrm{sl-H_2O}}\) ) and the boundary layer resistance ( \(R_{\mathrm{b-H_2O}}\) ). Then, using the ratio of the diffusion coefficient of water in air ( \(D_{\mathrm{H_2O}}\) ) over the diffusion coefficient of the noble gas in air ( D x ), and the corrected stomatal resistances to water vapour, we estimated the stomatal resistance to the noble gas as $$R_{\mathrm{s}-x} = cR_{\mathrm{s-H_2O}}\left( \frac{D_{\mathrm{H_2O}}}{D_x} \right),$$ (3) and, using the 2/3 power of this ratio, we calculated the boundary layer resistance to the noble gas from that known for water vapour as $$R_{\mathrm{b}-x} = R_{\mathrm{b-H_2O}}\left( \frac{D_{\mathrm{H_2O}}}{D_x} \right)^{2/3}.$$ (4) Then, from our measurement of R x , we used equation ( 2 ) to deduce R ias− x : $$R_{\mathrm{ias}-x} = R_x - R_{\mathrm{b-H_2O}}\left( \frac{D_{\mathrm{H_2O}}}{D_x} \right)^{2/3} - cR_{\mathrm{s-H_2O}}\left( \frac{D_{\mathrm{H_2O}}}{D_x} \right),$$ (5) and, multiplying by \((D_{\mathrm{H_2O}}/D_x)^{-1}\), we estimated the mesophyll air space resistance to water from the noble gas diffusion measurements ( \(R_{\mathrm{ias-H_2O}}\) ): $$R_{\mathrm{ias-H_2O}} = R_{\mathrm{ias}-x}\left( \frac{D_{\mathrm{H_2O}}}{D_x} \right)^{-1}.$$ (6) Analogous to equation ( 1 ), using equation ( 6 ), we obtained the total resistance to water, including the internal air space resistance, from the noble gas measurements (Fig. 
3 , squares): $$R_{x-\mathrm{H_2O}} = R_{\mathrm{b-H_2O}} + cR_{\mathrm{s-H_2O}} + R_{\mathrm{ias-H_2O}},$$ (7) and the resistance of the intercellular air spaces, \(R_{\mathrm{ias-H_2O}}\) , can be rewritten as $$R_{\mathrm{ias-H_2O}} = R_{x-\mathrm{H_2O}} - cR_{\mathrm{H_2O}},$$ (8) where \(cR_{\mathrm{H_2O}} = cR_{\mathrm{s-H_2O}} + R_{\mathrm{b-H_2O}}\) . \(cR_{\mathrm{H_2O}}\) is less than the uncorrected value ( \(R_{\mathrm{H_2O}} = R_{\mathrm{s-H_2O}} + R_{\mathrm{b-H_2O}}\) ) by an amount that Jarvis and Slatyer 1 called the “wall resistance”. We note that the term suggested in the 1970s might now be better thought of as the ‘unsaturated mesophyll air space’ resistance ( R unsat ), as explained below. R unsat is then calculated as $$R_{\mathrm{unsat}} = R_{\mathrm{H_2O}} - cR_{\mathrm{H_2O}}.$$ (9) Fig. 3: Series resistances to the diffusion of water vapour across a cotton leaf. a , b , Series resistances (upper-case notation) to the diffusion of water vapour across a cotton leaf as inferred from measurements of neon ( R neon , squares); uncorrected water measurements, assuming w i = w sat ( \(R_{\mathrm{H_2O}}\) , open circles); ‘corrected’ water measurements ( \(cR_{\mathrm{H_2O}}\) , solid circles); intercellular air space ( \(R_{\mathrm{ias}} = R_{\mathrm{neon}} - cR_{\mathrm{H_2O}}\) , diamonds); and unsaturated mesophyll ( \(R_{\mathrm{unsat}} = R_{\mathrm{H_2O}} - cR_{\mathrm{H_2O}}\) , triangles) versus Δ w ( a ) and E ( b ). Full size image The inclusion of the unsaturated mesophyll air space resistance leads to $$R_{\mathrm{H_2O}} = cR_{\mathrm{s-H_2O}} + R_{\mathrm{b-H_2O}} + R_{\mathrm{unsat}}.$$ (10) In Fig. 3a , we see in a cotton leaf that, as Δ w increases, \(R_{\mathrm{neon-H_2O}}\) remains the greatest resistance because it includes \(R_{\mathrm{ias-H_2O}}\) . However, \(R_{\mathrm{H_2O}}\) approaches \(R_{\mathrm{neon-H_2O}}\) as R unsat increases from zero at low Δ w . The feature of real interest is that R unsat almost reaches \(R_{\mathrm{ias-H_2O}}\) at the greatest Δ w , indicating that unsaturation spreads through the entire mesophyll air space. Our interpretation of the results is that, as the vapour pressure difference increases, the saturated front behind the stomatal pores retreats from the substomatal cavities into the intercellular spaces. The extra pathlength for the water flux causes an extra resistance: an increasing R unsat . 
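Equations (2)–(6) and (9) amount to straightforward resistance bookkeeping. A sketch using the binary diffusivities quoted below in the text; the input resistances in the usage line are placeholders rather than measured values.

```python
# Binary diffusivities in air at 20 C (mm^2 s^-1), as quoted in the text
D = {"H2O": 24.7, "He": 69.7, "Ne": 31.3, "Ar": 18.9}

def R_ias_water(R_x, Rb_H2O, cRs_H2O, gas):
    """Eqs (2)-(6): intercellular air-space resistance on a water basis from
    the total measured noble-gas resistance R_x (all in m^2 s mol^-1)."""
    ratio = D["H2O"] / D[gas]
    Rs_x = cRs_H2O * ratio              # eq. (3): stomatal, scales with D ratio
    Rb_x = Rb_H2O * ratio ** (2 / 3)    # eq. (4): boundary layer, 2/3 power
    R_ias_x = R_x - Rb_x - Rs_x         # eq. (2)
    return R_ias_x / ratio              # eq. (6): convert back to a water basis

def R_unsat(R_H2O, cR_H2O):
    """Eq. (9): the 'unsaturated mesophyll air space' resistance."""
    return R_H2O - cR_H2O

# Placeholder inputs: total neon resistance 12, boundary layer 1, corrected stomata 5
print(R_ias_water(12.0, 1.0, 5.0, "Ne"))
```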
In amphistomatous leaves, the retreating front can go only as far as the front retreating from the other surface, and the maximum R unsat occurs when it equals \(R_{\mathrm{ias-H_2O}}\) . Stomatal resistance is thought to respond to transpiration rates rather than humidity, at least at low and moderate Δ w (ref. 16 ). Figure 3b shows the dependencies of the series of resistances on E . The responses are not simple at high Δ w , presumably reflecting the processes causing the reduction in E with increasing Δ w . Similar results are shown in Extended Data Fig. 3a,b for cotton using argon rather than neon, and in sunflower using neon (Extended Data Fig. 3c,d ). The data with helium (Extended Data Fig. 3e,f ) show an analogous pattern, but R unsat , while always less than \(R_{\mathrm{ias-H_2O}}\) , does not reach it at the greatest Δ w . The binary diffusivities with air ( D x ) at 20 °C of He, Ne and Ar are 69.7, 31.3 and 18.9 mm 2 s −1 , respectively, so the conversions of noble gas data to water equivalence are likely to be more reliable with Ne and Ar than with He, as the He diffusivity is almost three times the H 2 O diffusivity in air ( \(D_{\mathrm{H_2O}}\) = 24.7 mm 2 s −1 ). We confirmed that the resistances are of the correct order of magnitude using an independent experiment to estimate \(R_{\mathrm{ias-H_2O}}\) , in which A l is held at zero and \(R_{\mathrm{ias-CO_2}}\) is calculated from c iu − c il (Supplementary Section 2 ). The value of \(R_{\mathrm{ias-CO_2}}\) was then converted to a water basis using the factor 1.6 ( \(D_{\mathrm{H_2O}}/D_{\mathrm{CO_2}}\) ), \(R_{\mathrm{ias-H_2O}} = R_{\mathrm{ias-CO_2}}/1.6\) . Extended Data Fig. 4 shows a comparison of \(R_{\mathrm{ias-H_2O}}\) derived from noble gas measurements with that derived more crudely but more simply from c iu − c il measurements and confirms the general magnitudes. Our calculations presented in Tables 1 and 2 assume that there are no structural changes in the leaf and that \(R_{\mathrm{ias-CO_2}}\) remains constant. In Fig. 3 and Extended Data Fig. 3 , we can see small increases in \(R_{\mathrm{ias-H_2O}}\) with increasing Δ w (thereby increasing \(R_{\mathrm{ias-CO_2}}\) ), which might indicate structural changes. However, the modest decreases in leaf water potential of about 0.2 MPa as Δ w increases (Table 3 ) suggest that these changes are small or negligible. If we impose the increase in \(R_{\mathrm{ias-CO_2}}\) shown in Fig. 3 on our calculations, it affects the estimation of w i / w sat by less than 0.01. 
Table 3 Bulk leaf water potential, Ψ bulk , and wall water potential, ψ wall , in cotton leaves Full size table Implications of w i < w sat Turning now to the required modification of the gas exchange equations 17 , the calculation relating transpiration rate to stomatal resistance for one surface becomes (Supplementary Section 3 ) $$E = E_{\mathrm{c}} + E_{\mathrm{s}} = \frac{w_{\mathrm{sat}} - w_{\mathrm{s}}}{r_{\mathrm{c}}} + \frac{(w_{\mathrm{sat}} - w_{\mathrm{s}})(1 - w_{\mathrm{i}})}{(r_{\mathrm{s}} + r_{\mathrm{unsat}})\left(1 - \frac{w_{\mathrm{i}} + w_{\mathrm{s}}}{2}\right)\left(1 - \frac{w_{\mathrm{sat}} + w_{\mathrm{i}}}{2}\right)},$$ (11) where E c and E s are the cuticular and stomatal transpiration rates, respectively; w sat signifies the water vapour mole fraction saturation humidity at leaf temperature; w s is the water vapour mole fraction over the leaf surface; w i is the mole fraction humidity inside the substomatal cavity; and r c , r unsat and r s denote the cuticular, unsaturated mesophyll and stomatal resistances, respectively. In the special case where w i = w sat , r unsat = 0. Note that R unsat = r unsatu + r unsatl . The occurrence of unsaturation within the leaf affects the foundations of the current estimation of gas exchange, influencing our interpretation of gas exchange data and the information deduced from it. For instance, we examined the effects of correcting w i on the estimation of mesophyll conductance to CO 2 ( g m ) during experiments in which Δ w was increased (Fig. 4 ) in the cotton leaf featured in Fig. 2 and Table 1 . Fig. 4: Mesophyll conductance to the diffusion of CO 2 ( g m ) as a function of Δ w . We calculated g m using online 13 CO 2 fractionation and gas exchange data. The open circles represent g m calculated using the original data. The solid squares represent g m recalculated using the same dataset but with adjustment for the unsaturation of substomatal air spaces. The data are for the same cotton leaf as in Fig. 2 . Full size image The values of g m calculated using the original gas exchange data showed a sudden increase when Δ w increased beyond 20.9 mmol mol −1 . This was the same turning point at which A and E began to decrease and the c i difference changed from +9 to −18 µmol mol −1 (Fig. 2 ). Using the corrected w i shown in Table 1 , the recalculated g m was fairly insensitive to Δ w . Hydraulic mechanism (aquaporins) Measurements of bulk leaf water potential ( Ψ bulk ), which account for the water in the whole tissue, in our plants were between −1.35 and −1.54 MPa; then, from the Kelvin effect, the expected RH within the substomatal cavity ( w i / w sat ) was about 99%. However, w i / w sat calculated from gas exchange measurements varied from 97% to 90% (Table 3 ). To achieve the latter, the expected water potential would be between −4.2 and −13.5 MPa, unlikely values for the cytosol of the cells, particularly considering that the turgor loss point of cotton leaves is about −1.5 to −2 MPa (ref. 18 ). In practice, water evaporates from the liquid phase of mesophyll cell walls to the air space within the leaf; thus, only the water potential in the liquid phase of the cell wall ( ψ wall ) must be in equilibrium with the RH in the substomatal cavity ( w i / w sat ), and the cytosol appears to be largely protected from this stress. 
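A direct transcription of equation (11), convenient for checking the special case w i = w sat (where r unsat = 0 recovers the standard saturated-cavity form); the parameter values in the usage line are illustrative only.

```python
def transpiration(w_sat, w_i, w_s, r_c, r_s, r_unsat=0.0):
    """Equation (11): transpiration (one leaf surface) as cuticular plus
    stomatal components. Humidities are mole fractions (mol/mol);
    resistances are in m^2 s mol^-1. With w_i = w_sat and r_unsat = 0
    the standard saturated-cavity form is recovered."""
    E_c = (w_sat - w_s) / r_c
    E_s = ((w_sat - w_s) * (1 - w_i)
           / ((r_s + r_unsat)
              * (1 - (w_i + w_s) / 2)
              * (1 - (w_sat + w_i) / 2)))
    return E_c + E_s

# Placeholder values: w_sat = 0.030, unsaturated cavity w_i = 0.027, w_s = 0.012
print(transpiration(0.030, 0.027, 0.012, r_c=400.0, r_s=2.0, r_unsat=0.5))
# ~0.0074 mol m^-2 s^-1, i.e. about 7.4 mmol m^-2 s^-1 for these inputs
```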
The cell wall is a thin structure formed by microfibrils creating an interconnected porous medium with pore diameters varying from about 0.05 μm to less than 0.006 μm, averaging 0.02 μm (refs. 19 , 20 ). From Jurin’s law and a contact angle of water on cell wall fibrils of about 50° (ref. 21 ), the cell wall can be considered fully saturated down to a water potential of about −4.5 to −7 MPa, and it has an air-entry tension beyond −25 MPa, as the core of the cell wall microfibrils is in a denser arrangement 22 . In practice, the cell wall will always have liquid water in living tissue. It is reasonable to assume that the water potential in the cytosol ( ψ cyt ) is close to Ψ bulk , as this represents the great majority of the water in the tissue (including vacuoles); since the Kelvin effect requires a much lower ψ wall , it follows that ψ wall and ψ cyt of the cell are substantially different. Current models of leaf gas exchange assume that the vapour phase of the intercellular air space is in equilibrium with ψ cyt or the leaf water potential (see Buckley 23 , Damour et al. 24 and Buckley and Mott 25 ), and therefore ψ wall = ψ cyt . However, our data indicate that ψ wall must be noticeably lower than ψ cyt . For cells to be able to hold this pressure difference between the cytosol and the cell wall (about 3 to 12 MPa) without drying out, they need to control the flow of water from the cytosol to the cell wall. This means effectively controlling the hydraulic conductivity of the cell membrane ( L p ). Under steady-state gas exchange conditions, the water evaporated from the cell wall surfaces ( E w ) is an upper limit on the flow of water across the membrane ( J m ) and thus can be expressed as J m ≈ E w = L p ( ψ cyt − ψ wall ). The precise value of E w is unknown, but if we assume that it must be, say, ten times smaller than E to allow for the ratio of leaf surface to internal surface area, then L p should have values at high Δ w in the vicinity of 0.1 to 0.4 mmol m −2 s −1 MPa −1 . Martre et al. 26 report minimum measurements of L p in isolated protoplasts of 0.27 mmol m −2 s −1 MPa −1 with an average lower bound of 0.55 mmol m −2 s −1 MPa −1 , and similar minimum values have been reported in other studies 27 , 28 , 29 . The occurrence of unsaturation in the substomatal cavity and the gradual increase of R unsat as Δ w increases support our theory of w sat retreating from the substomatal cavity deeper into the mesophyll intercellular air space. The retreat of the saturation front adds diffusive length to the water vapour pathway, with the extra resistance having an upper bound equal to that of the full intercellular air space, meaning that R unsat will always be less than the intercellular air space resistance ( R ias ). This was commented on directly by Farquhar and Raschke 3 and appears to be the case with the ‘wall’ resistance data of Jarvis and Slatyer 1 , restricted to water contents greater than 80%, where leaves retain turgor. Our data obtained from noble gases and measurements of Ψ bulk suggest that reducing leaf water potential increases R ias , though the leaves never lost turgor. The plateau in E , followed by a drop as Δ w rises, suggests that there is active control of L p linked to the transpiration rate. Aquaporins are a logical candidate for the control of L p (ref. 30 ), as there is evidence that some aquaporins may close in dry air 31 , protecting water in those cells. 
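The order-of-magnitude estimate of L p above takes two lines to reproduce; the transpiration rate and water potentials below are illustrative values within the ranges quoted in the text, and the tenfold leaf-to-internal surface-area ratio is the assumption stated above.

```python
def membrane_Lp(E, psi_cyt, psi_wall, area_ratio=10.0):
    """Back-of-envelope Lp from Jm ~ Ew = Lp*(psi_cyt - psi_wall), with the
    internal evaporating flux Ew taken as E/area_ratio (the assumed tenfold
    leaf-to-internal surface-area ratio). E in mmol m^-2 s^-1, psi in MPa."""
    return (E / area_ratio) / (psi_cyt - psi_wall)

# Illustrative: E = 4 mmol m^-2 s^-1 at high Delta-w, psi_cyt = -1.5 MPa,
# psi_wall = -4.5 MPa  ->  Lp ~ 0.13 mmol m^-2 s^-1 MPa^-1
print(membrane_Lp(4.0, -1.5, -4.5))
```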
The aquaporin hypothesis would also explain the finding of Farquhar and Raschke 3 : when a leaf was stripped of its epidermis and exposed to dry air, E decreased rapidly without the expected reduction of leaf water potential, presumably because the aquaporins had already closed. This cellular level of water control requires further examination using novel methods. For example, Jain et al. 32 recently presented one such exciting technique using fluorescent powder to measure water potential. Our data show the assimilation rate to be little affected by unsaturation, which suggests active control of cell membrane water conductivity without a substantial impact on cell membrane CO 2 permeability. Some aquaporins are permeable to CO 2 33 , 34 , and there is evidence of different pathways for water and CO 2 through the cell membrane 35 . This raises the question: if plants can substantially restrict water loss without affecting CO 2 permeability, why do they not replace stomata with such membranes, maximizing water/carbon optimization? Such a membrane would need its water pathway restricted to avoid desiccation, at whatever metabolic and structural cost that requires, as well as mechanical support similar to that offered by cell walls. Additionally, a single layer of chloroplasts would have to be seated immediately behind the membrane to avoid any lengthy pathways through water, which slows diffusion, just as chloroplasts avoid mesophyll walls that abut on a neighbouring cell 36 . On the other hand, with stomata and a humid intercellular air space network instead of a single membrane, plants have multiple layers of membranes (mesophyll cells) with chloroplasts separated from the gas phase by only a thin cell wall with liquid water. In this way, with membranes having aquaporin-like properties effectively wrapping the mesophyll cells, nature has added an exquisite mechanism that can be called on when evaporative demand is high. This mechanism not only reduces evaporation with little apparent effect on carbon assimilation, at least initially, but also preserves a modest water potential in the symplast. Methods Plant materials Plants were grown in a glasshouse under natural sunlight in Canberra, during the austral spring–summer and late autumn. During spring–summer the midday photosynthetically active radiation ranged from 1,400 to 2,000 μmol m −2 s −1 , and during late autumn the midday photosynthetically active radiation was around 1,200 μmol m −2 s −1 . The air temperature was 28 ± 2 °C during the day and 20 ± 2 °C at night. The RH of the air was 40% during the day and 80 to 90% at night. Seeds of cotton ( Gossypium hirsutum L.) and sunflower ( Helianthus annuus L.) were sown in 10 l plastic pots containing steam-sterilized potting medium. Slow-release fertilizer (Osmocote, Scotts Australia) was added to the potting medium. To obtain uniform plants, the seedlings were thinned from four to one per pot after germination. Gas exchange device A double-sided, clamp-on chamber was used to measure gas exchange separately at the adaxial and abaxial surfaces. The leaf chamber was equipped with a fan for each side of the chamber. The boundary layer conductances to water vapour for the upper and lower cuvettes were 2.35 and 1.66 mol m −2 s −1 , respectively. The chamber enclosed a leaf area of 4.9 cm 2 . For the noble gas diffusion measurements, the leaf area enclosed by the chamber was 10 cm 2 , and the boundary layer conductances to water vapour for both the upper and lower cuvettes were 2.44 mol m −2 s −1 . 
Leaf temperature was measured with an infrared thermometer (model M50, Mikron Infrared). The infrared thermometer had a field of view of 9.5°, which covered a leaf area of 12 mm 2 . Leaf temperature was controlled by circulating water from a temperature-controlled water bath. Two platinum resistive elements (Pt100) were used to measure the chamber air temperature. The pressure difference between the upper and lower cuvettes was kept within ±2 Pa. This was measured with a high-sensitivity differential pressure sensor with a full range of ±250 Pa (Model DP45, Validyne Engineering). Two identical through-flow gas exchange systems were used, one for each side of the leaf. Each system consisted of three mass flow controllers for N 2 , O 2 and CO 2 in air. CO 2 -free air was generated by mixing 79% N 2 with 21% O 2 . For measurements during 2009, the dry gas was humidified by passing through a bubbler. The humidity was controlled by flowing moist gas through a condenser, with the dew point controlled via circulating water from another temperature-controlled water bath. From 2010 onwards, the control of mixing of wet and dry air was modified by adding two mass flow controllers, one for wet air and one for dry air, for each side of the leaf. CO 2 was added to the gas stream after mixing of the wet and dry air. Humidity was measured using a humidity and temperature probe (Model HMP50, Vaisala). The probe was enclosed in a water jacket through which temperature-controlled water circulated from the leaf chamber water bath. Absolute and differential CO 2 concentrations were measured with the same infrared gas analyser (IRGA) (Model Li-6251, Li-Cor) by alternately switching zero gas, reference gas and sample gas to the reference and sample cells of the IRGA. Nafion tubes (Perma Pure) were used to dry gas before entering the IRGA. The two IRGAs were calibrated daily against a calibration gas of known CO 2 concentration. The stability of the two humidity sensors was checked continuously against the dew point of the respective condensers. To check the possible bias of the upper and lower gas exchange instruments, the upper chamber outlet gas line could be switched to the lower chamber IRGA and humidity sensor. Averaged values of the two sets of CO 2 and humidity measurements from each chamber were used in the calculations. A platinum resistive element was used to measure the temperature inside the condenser, and the reading was taken as the dew point of the gas flowing from it. A barometer (BAROCAP PTB 110, Vaisala) was used to measure the pressure inside the condenser. There was an option to bypass the humidifier and condenser when very dry gas was required to achieve a high leaf-to-air water vapour concentration difference, Δ w . CO 2 was added to the gas stream after the condenser. Gas exchange technique The gas exchange measuring sequence started at low Δ w , usually around 6 to 10 mmol mol −1 , and a c a for both upper and lower leaf chambers at around 380 μmol mol −1 or 400 μmol mol −1 . Photosynthetically active radiation was set at 1,000 μmol m −2 s −1 . After the rates of gas exchange reached steady state, which normally took one to two hours, the ambient [CO 2 ] for the lower leaf chamber, c al , was reduced until A l was 0 ± 0.3 μmol m −2 s −1 (zero). The difference, calculated as upper c i minus lower c i , was taken as the gradient of CO 2 that was required for sustaining A . Increases in Δ w from about 6 to 10 mmol mol −1 were achieved by increasing the leaf temperature from 24 °C to 27 °C. 
Further increases in Δ w to maximum values in both chambers were achieved by decreasing the condenser temperature and bypassing the humidifier and condenser. Gas exchange calculations included ternary effects and those of cuticular conductances 7 , 17 . Minimum leaf surface conductance was measured on leaves sprayed with 10 −4 M cis - trans abscisic acid (Sigma Chemicals). Minimum leaf conductances for the upper and lower surfaces of cotton were 2 mmol m −2 s −1 and 3.5 mmol m −2 s −1 , respectively (6 mmol m −2 s −1 and 7 mmol m −2 s −1 , respectively, for sunflower). Minimum leaf surface conductances were approximated as cuticular conductances in the calculations. The flow rates of gas to both the upper and lower leaf chambers, measured with mass flow meters, were between 0.7 l min −1 and 1.5 l min −1 . Noble gases In our experiments with noble gases, neon (10% Ne in N 2 ), argon (100% Ar) and helium (100% He) were individually fed to the upper chamber. These gases were added to the main gas line via a mass flow controller after the mixing of wet and dry synthetic air and CO 2 . Synthetic air was used to avoid measuring double-ionized argon as neon in the mass spectrometer. The final concentration of noble gas in the main gas line was around 1,000 μmol mol −1 . The concentration of the noble gas was measured by passing the air mixture directly into an isotope ratio mass spectrometer (Isoprime) through a capillary. There were four gas lines linking the gas exchange system to the mass spectrometer: (1) a dry 79% N 2 and 21% O 2 mixture (the zero gas), (2) the noble gas inlet, (3) the upper leaf chamber outlet and (4) the lower leaf chamber outlet. In one measurement with a cotton leaf, both CO 2 and Ne were fed to the lower chamber of the leaf. Oxygen isotopic composition measurements Further experiments were carried out using the gas exchange technique combined with the oxygen isotope techniques of Cernusak et al. 8 and Holloway-Phillips et al. 10 . This involved a cavity-ring-down laser water isotope analyser (Picarro L2130-i, Picarro) that measured the δ 18 O of water vapour and a dual quantum cascade laser (QC-TILDAS) absorption spectroscope that measured the C and O isotopic composition of CO 2 (QCLAS-ISO, Aerodyne Research). The w i was then estimated accounting for the difference between the expected δ 18 O in water 10 and CO 2 and the measured values. Mesophyll conductance Mesophyll conductance to the diffusion of CO 2 , g m , was measured using the online carbon isotope discrimination technique described by Evans et al. 37 as modified by Cernusak et al. 38 , with ternary corrections as per Farquhar and Cernusak 39 . This involved the use of online trapping, and analysing the CO 2 on a dual-inlet isotope ratio mass spectrometer to measure δ 13 C of the CO 2 in the chamber and then to estimate g m from the difference between the expected and the measured δ 13 C. Leaf water potential The gas exchange technique described above was used to generate unsaturation in cotton leaves before sampling them to measure Ψ bulk . The Ψ bulk was measured from leaf tissue within the chamber. Two circular samples 6 mm in diameter were taken after achieving unsaturation conditions, and each sample was placed in a psychrometer (PSY1 psychrometer, ICT International). Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. 
Data Availability All generated and analysed data from this study are included in the published article and its Supplementary Information .
Scientists from The Australian National University (ANU) and James Cook University (JCU) have identified an "exquisite" natural mechanism that helps plants limit their water loss with little effect on carbon dioxide (CO2) intake—an essential process for photosynthesis, plant growth and crop yield. The discovery, led by Dr. Chin Wong from ANU, is expected to help agricultural scientists and plant breeders develop more water-efficient crops. Study co-author Dr. Diego Marquez from ANU said the findings will have significant implications for the agricultural industry and could lead to more resilient crops that are capable of withstanding extreme weather events, including drought. "Plants continuously lose water through pores in the 'skin' of their leaves. These same pores allow CO2 to enter the leaves and are critical to their survival," Dr. Marquez said. "For every unit of CO2 gained, plants typically lose hundreds of units of water. This is why plants require a lot of water in order to grow and survive. "The mechanism we have demonstrated is activated when the environment is dry, such as on a hot summer day, to allow the plant to reduce water loss with little effect on CO2 uptake." The researchers believe this water preserving mechanism can be manipulated and, in turn, may hold the key to breeding more water-efficient crops. According to lead author Dr. Wong, the ANU team's findings are a "dream discovery" from a scientific and agricultural perspective. "The agriculture industry has long held high hopes for scientists to come up with a way to deliver highly productive crops that use water efficiently," Dr. Wong said. "Plant scientists have been dealing with this big question of how to increase CO2 uptake and reduce water loss without negatively affecting yields. "Having this mechanism that can reduce water loss with little effect on CO2 uptake presents an opportunity for agricultural scientists and plant breeders researching ways to improve water use efficiency and create drought-tolerant crops." Although the researchers have confirmed there is a system in place that is working to limit the amount of water being lost from the leaf, they still don't know what's causing it. "Our main target now is to identify the structures inside the plant that allow this control. We think that water conduits, called aquaporins, located in the cell membranes are responsible," Dr. Marquez said. "Once we're able to confirm this, we can then start thinking about how we can manipulate these systems and turn them into an asset for the agricultural industry." Co-author Distinguished Professor Graham Farquhar from ANU said: "Finding the mechanism itself was one step, a big one, but there is still work to do to translate this discovery into the industry. "We expect that both government and industry will see the value of contributing funds to achieve this goal." Dr. Wong first alluded to this water preserving mechanism 14 years ago, but the research team has only now been able to officially confirm its existence thanks to years of experimentation and corroboration of their results. The research is published in Nature Plants.
10.1038/s41477-022-01202-1
Nano
Precision sieving of gases through atomic pores in graphene
P. Z. Sun et al, Exponentially selective molecular sieving through angstrom pores, Nature Communications (2021). DOI: 10.1038/s41467-021-27347-9 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-021-27347-9
https://phys.org/news/2021-12-precision-sieving-gases-atomic-pores.html
Abstract Two-dimensional crystals with angstrom-scale pores are widely considered as candidates for a next generation of molecular separation technologies aiming to provide extreme, exponentially large selectivity combined with high flow rates. No such pores have been demonstrated experimentally. Here we study gas transport through individual graphene pores created by low intensity exposure to low kV electrons. Helium and hydrogen permeate easily through these pores whereas larger species such as xenon and methane are practically blocked. Permeating gases experience activation barriers that increase quadratically with molecules’ kinetic diameter, and the effective diameter of the created pores is estimated as ∼ 2 angstroms, about one missing carbon ring. Our work reveals stringent conditions for achieving the long sought-after exponential selectivity using porous two-dimensional membranes and suggests limits on their possible performance. Introduction Two-dimensional (2D) membranes with a high density of angstrom-scale pores can be made by engineering defects in 2D crystals 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 or, perhaps more realistically in terms of applications, by growing intrinsically porous crystals such as, e.g., graphynes 10 , 11 , 12 . Interest in angstroporous 2D materials is strongly stimulated by potential applications, particularly for gas separation as an alternative to polymeric membranes employed by industry 3 , 13 . On one hand, the atomic thickness of 2D materials implies a relatively high permeability as compared to traditional 3D membranes. On the other hand, angstrom-scale pores with effective sizes d P smaller than the kinetic diameter d K of molecules should pose substantial barriers for their translocation, which is predicted to result in colossal selectivities S > 10 10 , even for gases with fractionally ( ∼ 25%) different d K such as, for example, H 2 and CH 4 1 , 14 , 15 . This unique combination of material properties holds a promise of better selectivity-permeability tradeoffs than those possible by conventional membranes 3 , 13 . At present, this optimistic assessment is based mostly on theoretical modeling. Experimental clarity has so far been achieved only for the classical regime of d P > d K where the flow is governed by the Knudsen equation, and the resulting modest selectivities arise from differences in thermal velocities of gases having different molecular masses m 7 , 8 , 9 , 16 . For smaller pores with d P ≈ d K , S up to 10–100 have been reported for monolayer graphene 5 , 8 , and even higher selectivities ( ∼ 10 4 ) were found for some defects with an estimated diameter of ∼ 3.5 Å in bilayer graphene 4 . Still, this is many orders of magnitude smaller than S predicted for the activated-transport regime, d P < d K 1 , 14 , 15 . Little remains known about the latter regime, which has proven to be extremely difficult to reach in experiment 5 , 8 , 9 . Indeed, even monovacancies in dichalcogenide monolayers were suggested to exhibit the conventional Knudsen flow 9 . The experimental difficulties and lack of understanding are further exacerbated by prohibitive computational costs of simulating molecular permeation in the activated regime 17 , 18 , 19 , 20 . In this work, we achieve the activated regime by creating individual angstrom-scale pores in monolayer graphene by its short exposure to a low-energy electron beam. 
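To put the two regimes side by side: in the Knudsen regime, selectivity comes only from the thermal-velocity ratio (∝ 1/√m), whereas in the activated regime it grows exponentially with the difference in translocation barriers. A sketch; the 0.6 eV barrier difference is an illustrative assumption chosen to show how easily S exceeds 10^10, not a measured value.

```python
import numpy as np

kT = 0.0257  # eV at room temperature

def knudsen_selectivity(m_light, m_heavy):
    """Classical regime (d_P > d_K): selectivity set only by thermal velocities."""
    return np.sqrt(m_heavy / m_light)

def activated_selectivity(dE):
    """Activated regime (d_P < d_K): Arrhenius ratio exp(dE/kT) for a barrier
    difference dE (eV) between the two species."""
    return np.exp(dE / kT)

print(knudsen_selectivity(2.0, 16.0))        # H2 vs CH4: only ~2.8
print(f"{activated_selectivity(0.6):.1e}")   # a 0.6 eV difference: ~1e10
```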
Gas permeation measurements reveal exponentially large selectivities with activation barriers that depend quadratically on gas molecules’ kinetic diameter. Results Experimental devices Our devices were micrometer-size cavities sealed with monolayer graphene (Fig. 1a ). The microcavities were fabricated from graphite monocrystals, using lithography and dry etching, and had internal diameters of 1–3 μm and a depth of ∼ 100 nm (“Methods”). Large, exfoliated graphene crystals were then transferred in air on top of the microcavities, creating “atomically tight” sealing 21 . The sealing was tested by placing the devices into a He atmosphere and monitoring changes in the graphene membrane’s position by atomic force microscopy (AFM) (Fig. 1b ). We selected only the devices that were completely impermeable to He (“Methods”; ref. 21 ). Next, the He-tight membranes were subjected to electron irradiation using a scanning electron microscope. The accelerating voltage was chosen to be ≤10 kV, and the beam current was set at 10 pA. In a single exposure lasting 3–5 s at a magnification of 700, an area of ~150 × 150 μm 2 was irradiated, which translated into an electron dose of 0.1–0.2 μC cm −2 or only ∼ 1 electron per 100 nm 2 . After the exposure, the devices were He-leak tested again. The procedure was repeated several times, until a leak appeared indicating damage induced by the electrons (Fig. 1c ). Fig. 1: Creating defects in suspended graphene. a Schematic of our devices. Left: Monolayer graphene sealing a microcavity was bombarded with electrons. Initially, the membrane sagged inside the cavity due to adhesion to the side walls 4 , 5 , 21 . Right: After pressurization, defected membranes bulged out. b AFM images of the same device before (left) and after (right) its exposure to 10 keV electrons; dose of 0.5 μC cm −2 . Both images were taken after storing the device in Kr at 3 bar for 10 days. The white curves are height profiles along the membrane diameter 21 . σ is the membrane’s central position measured with respect to graphite’s top surface. The gray scale is given by σ ≈ −15 and +24 nm in the left and right images, respectively. c Examples of σ as a function of radiation dose and acceleration voltage. Each point is taken after pressurizing the devices in 3-bar Kr. Dashed lines: guides to the eye; short black lines: σ = 0. d σ ( t ) for a device with the medium-size pore denoted as type 2, after pressurizing it with various gases (color coded). Solid curves: best linear fits. Inset: representative height profiles for a deflating device with Ar inside. Full size image Number of pores We argue that, in most cases, only a single pore was created during such exposures. This conclusion is supported by the following observations. First, the pores appeared after extremely small doses of <0.01 electron per typical pore (its size is determined later in the report). Second, the pores appeared suddenly, usually after not one but several such exposures (Fig. 1c ), and no additional damage or modification normally occurred during further, much longer exposures (>100 times larger doses; Supplementary Fig. 3 ). Third, no pores could be created in ~20% of our devices even after hour-long exposures. This clearly shows that the observed perforation was not a continuous damage process but represented rare single events. They can be attributed to the presence of “weak spots” where damage could be induced by the electron beam. 
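The quoted dose figures are easy to cross-check from the beam parameters; the 4 s exposure below is an assumed mid-range value within the stated 3–5 s window.

```python
e = 1.602e-19                   # C, elementary charge
I_beam = 10e-12                 # A, beam current quoted above
t_exp = 4.0                     # s, assumed mid-range exposure
area_cm2 = (150e-4) ** 2        # 150 x 150 um^2 in cm^2

dose = I_beam * t_exp / area_cm2             # C cm^-2
per_100nm2 = dose / e / 1e14 * 100           # 1 cm^2 = 1e14 nm^2
print(f"{dose * 1e6:.2f} uC/cm^2, {per_100nm2:.1f} e- per 100 nm^2")
# -> ~0.18 uC/cm^2 and ~1 electron per 100 nm^2, as quoted
```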
Number of pores We argue that, in most cases, only a single pore was created during such exposures. This conclusion is supported by the following observations. First, the pores appeared after extremely small doses of <0.01 electron per typical pore (its size is determined later in the report). Second, the pores appeared suddenly, usually after not one but several such exposures (Fig. 1c), and no additional damage or modification normally occurred during further, much longer exposures (>100 times larger doses; Supplementary Fig. 3). Third, no pores could be created in ~20% of our devices, even after hour-long exposures. This clearly shows that the observed perforation was not a continuous damage process but consisted of rare, single events. These can be attributed to the presence of "weak spots" where damage could be induced by the electron beam. Most of our devices had only one such spot, whereas the others had none or two, which would explain the above observations (Supplementary Fig. 3). Fourth, following ref. 5, we sealed the leaking pores with sparsely dispersed Au nanoparticles (Supplementary Information; Supplementary Fig. 2), an approach previously used to argue for the presence of individual pores in graphene membranes. Fifth, and most unequivocally, only three discrete pore sizes were ever observed in our experiments, rather than a statistical distribution of sizes (see below). If several pores were present in a single membrane, a broad distribution of leak rates should have been observed. All this evidence suggests the presence of a single pore in our typical device. As for the mechanism of creating such pores, incident electrons with an energy of 10 keV can transfer at most ~1.8 eV to a carbon atom, about ten times less than the threshold energy (18–20 eV) required for knock-on damage 22,23. Many electrons would need to strike the same carbon atom nearly simultaneously to remove it from the graphene lattice, which is statistically impossible, especially at the low doses used. Accordingly, we tentatively attribute the pore formation to "chemical etching" of graphene by locally adsorbed water, activated by the electron beam, as reported previously 24,25,26. We also speculate that further electron-beam exposure protected graphene from continuous water-mediated damage because hydrocarbons adsorbed on graphene became cross-linked 27,28 and prevented water molecules from reaching the surface. This would be consistent with our observation of rare damage events and the absence of pore modification during further irradiation. It would be interesting to gain further insight into these etching processes, which can probably be achieved by numerical simulations.

On the impossibility of imaging individual atomic-scale pores in graphene Unfortunately, no existing technique can visualize the atomic structure of the created pores. Indeed, let us first compare the doses used in our experiments with those typical for studies of graphene defects by high-resolution transmission electron microscopy (HRTEM). In the latter case, beam currents were ~10^5–10^6 electrons nm^-2 s^-1 with exposure times of many seconds 22,27,29,30. In contrast, our pores were created using a dose of only ~10^-2 electron per nm^2, at least seven orders of magnitude lower than needed for HRTEM imaging. Furthermore, we used low-energy (typically 8 kV) beams, whereas atomic-resolution TEMs operate at 60 kV or higher. The combination of high doses and high acceleration voltages required for HRTEM imaging would inevitably create additional defects in graphene or modify the existing ones. Even if we were to find a rare angstrom-size pore in our large, μm-scale membranes, it would be impossible to argue that the defect had been there before, created by the low-energy beam, rather than having emerged during the imaging, leaving aside the fact that defects in graphene are known to be strongly modified by the >60 kV beams used for HRTEM imaging. The practical impossibility of visualizing the studied pores also extends to AFM and scanning tunneling microscopy (STM). Although AFM allows atomic resolution on freshly cleaved graphite or multilayer graphene, monolayer graphene presents a much harder challenge, especially because of mechanical instabilities induced by the tip interacting with suspended membranes. Vacancies and other atomic-scale defects were previously imaged by STM using atomically clean graphene 31,32, but our membranes after electron-beam exposure are neither clean nor flat, being covered by an atomically thin layer of hydrocarbon contamination 4,5,27. Not surprisingly, none of the previous reports on individual pores in graphene could visualize them either 4,5.
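Two of the numerical claims above, the ~1.8 eV maximum kinematic energy transfer and the enormous dose gap to HRTEM imaging, can be checked with a few lines. This is our sanity-check sketch using the standard relativistic two-body formula, not code from the study:

```python
# Maximum kinetic energy transferred by an electron of energy E to a nucleus of
# rest energy M*c^2 in a head-on collision (relativistic kinematics):
#   E_max = 2 * E * (E + 2 * m_e c^2) / (M c^2)
ME_C2 = 511e3                  # electron rest energy, eV
MC2_CARBON = 12 * 931.494e6    # carbon-12 rest energy, eV

def e_max(E_eV):
    return 2 * E_eV * (E_eV + 2 * ME_C2) / MC2_CARBON

print(f"{e_max(10e3):.2f} eV at 10 keV")   # ~1.8 eV, far below the 18-20 eV threshold
print(f"{e_max(80e3):.1f} eV at 80 keV")   # ~16 eV: why HRTEM voltages damage graphene

dose_ratio = 1e5 / 1e-2        # HRTEM (per second) vs our total dose, e- per nm^2
print(f"dose gap >= {dose_ratio:.0e}x")    # at least seven orders of magnitude
```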
Gas permeation through the atomic-scale pores The defected membranes prepared as described above were subjected to further permeation tests using various gases (namely He, Ne, Ar, Kr, Xe, H2, CO2, O2, N2 and CH4). To this end, the devices were placed in a chamber containing a mixture of air at 1 bar (to match the air captured inside during fabrication) and the tested gas at a partial pressure P of typically ≥3 bar. Storage for 2–20 days, depending on the gas, allowed the pressures inside and outside to equalize, so that the membranes reached stable-in-time positions. After the devices were taken back into air, the graphene membranes would normally bulge out (Fig. 1a, b) and then gradually deflate, which was monitored by AFM (Fig. 1d). For quantitative analysis, we recorded the central position σ of the bulged membranes (Fig. 1b) as a function of time t. Initially, σ evolved linearly with t, indicating a constant outflow of the tested gas (Fig. 1d), until its partial pressure inside dropped, leading to saturation in σ(t), in agreement with the behavior reported in refs. 4,5. We used the initial slope to evaluate the permeation rate J for each gas, as described in the Supplementary Information. Repeating this procedure at different P, we confirmed that J ∝ P (Supplementary Fig. 1) and, therefore, that the pores could be characterized by their P-independent permeance J* = J/P. For slowly permeating gases, our range of J* was limited by observation times of several days, which yielded a minimum measurable permeance of ~10^-31 mol s^-1 Pa^-1, that is, less than one gas atom per minute escaping the cavity. It is owing to this exceptional sensitivity that we could detect flows through individual pores in the activated-transport regime, which would be difficult if not impossible to access otherwise 4,5,9,21. As for the upper limit on J*, it was set by the ~3 min required to obtain an AFM image after taking the devices out of the gas chamber, which translates into ~10^-23 mol s^-1 Pa^-1 when using high P = 10 bar and our largest cavities. Our measurements of J* are summarized in Fig. 2 on the basis of more than 40 devices, each used to probe several gases. Only three distinct types of pores were observed. This is illustrated by Fig. 2a, which compares J* for Ne and Kr (30% different d_K). The measured selectivities S = J*(Ne)/J*(Kr) fall into clearly separated groups. The small scatter around the average S within each group can be attributed to random local strain or curvature 21. We refer to the groups as type 1, 2 and 3 pores, according to their S. Using other acceleration voltages between 4 and 10 kV, again only the same three types of pores were observed. This is the strongest evidence in favor of only one pore per membrane (see the other arguments above). The only possibility we cannot rule out is that, for membranes exhibiting the highest permeance, two types of pores could be present. For example, a type 3 pore could in principle also be present in some devices referred to as type 1, because the biggest pore should dominate the permeation rate. Even if such statistically unlikely events did happen, this would not change any of our conclusions below.
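To put the quoted detection floor in perspective, converting a permeance of 10^-31 mol s^-1 Pa^-1 into particles per minute is a one-line exercise (our sanity check, with the partial pressures assumed for illustration):

```python
N_A = 6.022e23            # Avogadro constant, 1/mol

permeance = 1e-31         # mol s^-1 Pa^-1, the stated detection floor
for p_bar in (1, 3):
    flow = permeance * p_bar * 1e5            # mol/s at partial pressure p_bar
    atoms_per_min = flow * N_A * 60
    print(f"{p_bar} bar: {atoms_per_min:.2f} atoms per minute")
# -> ~0.4 atoms/min at 1 bar and ~1.1 at 3 bar: of order one atom per minute
```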
Fig. 2: Gas selectivity for graphene pores created by electron bombardment. a Selectivity between Ne and Kr as a function of the dose at which the pores appeared under an 8 kV electron beam. Each symbol denotes a different device. Three distinctive groups are emphasized by their color, with the solid lines indicating the average S for each group. Vertical lines: guides to the eye indicating typical threshold doses for the different pore types. b-d J* for the three types of pores using ten different gases, as annotated in the panels. Error bars: SD for typically six, but a minimum of three, devices. Solid curves in b-d: best fits to the exponential selectivity J* ∝ exp(-α d_K) for noble gases, with α being constants. Because of the limited range of d_K, the data fit equally well with J* ∝ exp(-α d_K^2) (not shown). Dashed curves: guides to the eye for diatomic gases. The arrows in d indicate undetectable permeation for Xe and CH4.

Figure 2a also shows that the radiation dose at which a pore appeared can serve as a good predictor of its type, before any actual gas permeation measurements, with low and high doses favoring type 3 and type 2 pores, respectively. The observed nonmonotonic dependence of the pores' permeability on radiation dose seems surprising. Indeed, the appearance of bigger pores at larger doses, as for type 1 versus type 3 pores, is what is generally expected. Obtaining tighter pores (type 2) at doses higher than those yielding the largest pores (type 1) is somewhat counterintuitive. Note, however, that in all cases the pores appeared spontaneously at some weak spots and did not evolve further with increasing dose (Supplementary Fig. 3). We speculate that the weak spots are determined by local strain and/or random adatoms on the graphene surface and that, once such a spot is in place, the low-energy beam eventually activates its damage into a predetermined structural configuration. The characteristics of each pore type are detailed in Fig. 2b-d. All the pores exhibited exponential dependences J*(d_K), with type 3, the least permeable pores, being the most selective, followed by types 2 and 1. Judging by their permeance, type 1 pores are similar to those created by ultraviolet-induced oxidation 5. Within our sensitivity limits, the smallest (type 3) pores were completely impermeable to Xe and CH4, yielding selectivities >10^7 with respect to He or H2, higher than S for any type of membrane reported in the literature. Surprisingly, diatomic gases exhibited systematically higher J* than noble gases (Fig. 2). This cannot be due to the elongated shape of diatomic molecules, because d_K corresponds to the smallest cross-section 33, that is, the most favorable orientation for translocation. Furthermore, Fig. 2 shows that the observed permeation was controlled mainly by spatial confinement rather than, for example, chemical affinity: otherwise, the translocation of molecules containing particular atoms such as oxygen would fall outside the monotonic sequences. Temperature dependence To investigate the underlying sieving mechanisms, we measured the temperature (T) dependence of J* for all pore types. An example of such measurements is shown in Fig. 3a, whereas Fig. 3b plots the extracted activation energies E_A, using J* = ν/(N_A P) exp(-E_A/k_B T), where N_A is the Avogadro number, k_B is the Boltzmann constant and ν is the impingement rate.
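Extracting E_A from such data amounts to a linear fit of ln J* against 1/T. A minimal sketch of that step, run on synthetic data generated with the ~0.4 eV barrier of Fig. 3a (our illustration; the temperatures and prefactor are assumed, not the measured values):

```python
import numpy as np

K_B = 8.617e-5                       # Boltzmann constant, eV/K

# Synthetic Arrhenius data mimicking Fig. 3a: E_A ~ 0.4 eV for Ar, type 2 pore
T = np.array([295.0, 310.0, 325.0, 340.0, 355.0])       # K (assumed values)
E_A_true, prefactor = 0.40, 1e-24                        # eV, mol s^-1 Pa^-1
J_star = prefactor * np.exp(-E_A_true / (K_B * T))
J_star *= np.exp(np.random.default_rng(0).normal(0, 0.05, T.size))  # 5% noise

# Linear fit of ln(J*) vs 1/T; the slope equals -E_A / k_B
slope, intercept = np.polyfit(1 / T, np.log(J_star), 1)
print(f"extracted E_A = {-slope * K_B:.2f} eV")          # ~0.40 eV
```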
If plotted as a function of d_K^2 (rather than d_K), our data closely follow E_A = α(d_K^2 - d_0^2). This dependence allows the following interpretation. The pores contain an empty space of diameter d_0 that is free from graphene's electron clouds (inset of Fig. 3b). To "squeeze" through the pore, atoms and molecules must disturb a region of ~π(d_K^2 - d_0^2)/4 in size, and both the electronic and elastic contributions are expected to scale with this area (Supplementary Fig. 5). The same α for all three pore types strongly supports this interpretation, indicating that α is determined by the properties of graphene, independently of the pores' configurations and diameters.

Fig. 3: Characterizing the angstrom pores. a Example of the measured T dependences for type 2 pores (color-coded T). Symbols: experimental data for Ar. Solid lines: linear fits. Inset: resulting Arrhenius plot (same color coding). Solid curve: best fit, yielding E_A ≈ 0.4 eV. b E_A for noble gases and different pore types, shown as a function of d_K (note the nonlinear x axis). Symbols: experimental data with error bars showing SD, using the same set of devices as in Fig. 2. Solid curves: best fits with E_A = α(d_K^2 - d_0^2) using the same α. Inset: one of the possible atomic-scale defects (Supplementary Information) with d_0 close to that of type 2 pores (the blue circle's diameter is 2.5 Å). c Impingement rates ν at 1 bar for the same gases and E_A as in b. Solid line: best fit using 1/β = 40 meV 34,35. Blue shaded area: impingement rates ν_0 expected if the noble atoms arrived from the bulk only. Note that, because of the upper limit on J* ≈ 10^-23 mol s^-1 Pa^-1, we could not obtain Arrhenius plots for gases with higher permeability than Ne. All E_A and impingement rates that could be obtained with our experimental setup are presented in b and c.

Next, we analyze the pre-exponential factors ν (Fig. 3c), which were found from the measured T dependences such as those in Fig. 3a. For atoms arriving from the bulk, the impingement rate is given by ν_0 = AP/(2πm k_B T)^{1/2}, where A is the effective pore area 5,17,18,21, which yields ν_0 of the order of 10^8 s^-1 at 1 bar for all our pores and gases. In contrast, the experiment yielded values of ν several orders of magnitude higher (Fig. 3c). This unambiguously indicates that the translocating atoms come not from the bulk but mostly through adsorption and surface diffusion 17,18,20. Discussion The impingement rate ν_ad due to adsorption-diffusion processes can be expressed as (Supplementary Information)

$$\nu_{ad} = \frac{P}{\sqrt{2\pi m k_B T}}\,\sqrt{\frac{k_B T}{2\pi m}}\,\frac{C}{f_d}$$ (1)

where C is the circumference of the pore and f_d is the desorption frequency of the adsorbed gas. The desorption frequency f_d is described by the van 't Hoff equation

$$f_d = \frac{k_B T}{h}\,\exp\!\left(\frac{\Delta S}{k_B}\right)\exp\!\left(-\frac{E_{ad}}{k_B T}\right)$$

where h is the Planck constant, k_B T/h is the vibration frequency of the adsorbed gas, ΔS is the entropy change during the permeation process and E_ad is the adsorption energy (E_ad is positive in this notation).
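As an order-of-magnitude illustration of Eq. (1), the sketch below evaluates ν_ad for Ar at 1 bar and room temperature. The pore circumference, adsorption energy and entropy term are assumed, illustrative values, not parameters reported in the paper:

```python
import math

K_B = 1.381e-23      # J/K
H = 6.626e-34        # J s
AMU = 1.661e-27      # kg

def nu_ad(P, T, m_kg, C, E_ad_eV, dS_over_kB=0.0):
    """Impingement rate via adsorption-diffusion, Eq. (1): surface flux times
    (mean thermal speed / desorption frequency) times pore circumference."""
    flux = P / math.sqrt(2 * math.pi * m_kg * K_B * T)      # m^-2 s^-1
    v = math.sqrt(K_B * T / (2 * math.pi * m_kg))           # m/s
    f_d = (K_B * T / H) * math.exp(dS_over_kB) * \
          math.exp(-E_ad_eV * 1.602e-19 / (K_B * T))        # van 't Hoff, s^-1
    return flux * v * C / f_d

# Ar at 1 bar, 300 K; assumed pore circumference ~1 nm and E_ad ~ 0.1 eV
print(f"{nu_ad(1e5, 300.0, 39.95 * AMU, 1e-9, 0.10):.2e} s^-1")
```

With these assumed numbers, ν_ad comes out roughly an order of magnitude above ν_0 ~ 10^8 s^-1 and grows exponentially with E_ad, which is the qualitative point of Eq. (1).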
The involvement of the adsorption-diffusion mechanism has the following consequences for gas selectivity 8,18. First, the measured E_A should be notably lower than the actual translocation barriers, because the former values are reduced by the adsorption energy E_ad (Supplementary Information). Second, the mechanism should favor the permeation of more strongly adsorbed diatomic gases, in agreement with their systematically higher J* compared to noble gases (Fig. 2). In the limit of zero E_A, the impingement rate in Fig. 3c extrapolates close to ν_0, as generally expected, because this limit corresponds to Knudsen flow. On the other hand, the strong dependence ν ≈ ν_0 exp(βE_A) in Fig. 3c is rather surprising. We speculate that it can be caused by entropy loss during the surface-transport permeation process, as discussed in the literature 34,35, and that it results from the increasingly large area supplying gas molecules to the pore mouth, which grows rapidly with increasing barrier 2,34 (see Supplementary Information). Note that polymeric membranes exhibit similar ν(E_A) dependences with a universal, material-independent coefficient β ≈ 1/(40 meV) 34,35, a value that also matches our results well (Fig. 3c). The origin of this universality remains unknown 34,35. Although the importance of the adsorption-diffusion mechanism for small pores is well documented in the literature 17,19,34,36, it is especially difficult to extrapolate the existing simulations to our case because of the extreme crowding effects expected for ultimately small, angstrom-scale pores 19. The surface contamination of any realistic membrane (rather than idealized graphene) complicates prospective theoretical analysis even further. To conclude, our work provides experimental feedback for extensive theoretical studies of molecular transport through angstrom-scale pores and reveals some unexpected features of the activated-transport mechanism. The mechanism critically involves adsorption and surface diffusion, which places strong constraints on the pore sizes required to reach high selectivity. The observed pre-exponential factor ∝ exp(βE_A) counteracts the Arrhenius behavior exp(-E_A/k_B T) and strongly reduces the selectivity for any given pair of gases. Although the atomic structures of the studied pores remain unknown, type 3 pores could be similar in size to hepta-vacancies (Supplementary Information) and to the intrinsic pores in γ-graphyne. Only if 2D membranes with a high density of such angstrom pores are developed can one envisage separation technologies with selectivities beyond the existing selectivity-permeability bounds (for projections based on our results, see Supplementary Fig. 6).
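The extent to which the exp(βE_A) prefactor erodes selectivity can be made explicit: with J* ∝ exp(βE_A) exp(-E_A/k_BT), the effective exponent per unit barrier is (1/k_BT - β) rather than 1/k_BT. A short sketch with illustrative barrier differences (our numbers, not the paper's):

```python
import math

K_B_T = 0.0253      # eV at ~293 K
BETA = 1 / 0.040    # 1/eV, the empirical 1/(40 meV) coefficient

def selectivity(delta_E_A):
    """Selectivity between two gases whose barriers differ by delta_E_A (eV)."""
    pure_arrhenius = math.exp(delta_E_A / K_B_T)
    with_prefactor = math.exp(delta_E_A * (1 / K_B_T - BETA))
    return pure_arrhenius, with_prefactor

for dE in (0.1, 0.3, 0.5):
    a, b = selectivity(dE)
    print(f"dE_A = {dE} eV: Arrhenius-only S ~ {a:.1e}, with exp(beta*E_A) S ~ {b:.1e}")
# The prefactor cuts the exponent by ~63%, e.g. turning ~10^8 into ~10^3 at 0.5 eV
```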
Methods Device fabrication and inspection To make our devices and test their atomically tight sealing, we followed the procedures developed in ref. 21. In brief, monocrystals of graphite with a thickness of >200 nm were prepared by mechanical exfoliation on an oxidized silicon wafer. The crystals were examined in an optical microscope using both dark-field and differential-interference-contrast modes to locate relatively large areas (tens of microns in size) free from wrinkles, folds, atomic-step terraces and other defects. Then, using electron-beam lithography and dry etching, an array of microwells with internal diameters of 1–3 μm and a depth of ~100 nm was fabricated within the atomically flat areas thus found. After overnight annealing at 400 °C in an H2/Ar atmosphere (volume ratio of 1:10), the microwells were sealed with a large crystal of monolayer graphene, which was transferred in ambient air (Fig. 1). The resulting devices were carefully inspected using AFM, and those showing any damage to their sealing were discarded. Such damage could be, for example, extended defects in the atomically flat top surface of the microwells or wrinkles in the graphene seal 21. The remaining devices were leak-tested by placing them in a stainless-steel chamber containing Ar or Kr at a partial pressure P ≈ 3 bar. After a few days, they were taken out and quickly (typically within 3 min) checked by AFM for any changes in the membrane position (Fig. 1b). Again, we discarded devices that exhibited any sign of leakage, namely changes in the membrane position of >1 nm after pressurization. Finally, we repeated the same leak test in an atmosphere of helium at 1 bar. Only devices showing no changes in membrane position were kept for further investigation. Perforating graphene with low-energy electrons Devices that successfully passed the above inspection were exposed to electron irradiation in a Zeiss EVO scanning electron microscope. To evaluate the radiation exposure of the studied graphene membranes, we first measured the beam current using a Faraday cup. The electron beam was then switched off, and the membrane device, with known coordinates on the substrate, was moved into a central position within the projected exposure area. The beam was then switched on and scanned over this entire area for a few seconds, using a magnification of 700 with a single area scan lasting ~0.1 s. Images taken simultaneously ensured that the membranes were in the center and properly exposed to the beam. After each exposure, the devices were subjected to the same leak tests as described above. We repeated the exposure-test cycle several times until the irradiated device started to exhibit a leak, indicating a defect created in the graphene membrane. In about 20% of cases, we could not create any discernible leak, no matter how long the graphene membranes were exposed to the electron beam. In another 20% of cases, we found an increase in permeation after additional exposures, which probably indicates the creation of a second, larger defect (Supplementary Fig. 3). No changes in permeation rates occurred after further prolonged exposures, even those leading to visible hydrocarbon contamination 25,26,27,37. Data availability All relevant data supporting this study are available upon request from the corresponding authors.
By crafting atomic-scale holes in atomically thin membranes, it should be possible to create molecular sieves for precise and efficient gas separation, including extraction of carbon dioxide from air, University of Manchester researchers have found. If a pore size in a membrane is comparable to the size of atoms and molecules, they can either pass through the membrane or be rejected, allowing separation of gases according to their molecular diameters. Industrial gas separation technologies widely use this principle, often relying on polymer membranes with different porosity. There is always a trade-off between the accuracy of separation and its efficiency: the finer you adjust the pore sizes, the less gas flow such sieves allow. It has long been speculated that, using two-dimensional membranes similar in thickness to graphene, one can reach much better trade-offs than currently achievable because, unlike conventional membranes, atomically thin ones should allow easier gas flows for the same selectivity. Now a research team led by Professor Sir Andre Geim at The University of Manchester, in collaboration with scientists from Belgium and China, have used low-energy electrons to punch individual atomic-scale holes in suspended graphene. The holes came in sizes down to about two angstroms, smaller than even the smallest atoms such as helium and hydrogen. In December's issue of Nature Communications, the researchers report that they achieved practically perfect selectivity (better than 99.9%) for such gases as helium or hydrogen with respect to nitrogen, methane or xenon. Also, air molecules (oxygen and nitrogen) pass through the pores easily relative to carbon dioxide, which is >95% captured. The scientists point out that to make two-dimensional membranes practical, it is essential to find atomically thin materials with intrinsic pores, that is, pores within the crystal lattice itself. "Precision sieves for gases are certainly possible and, in fact, they are conceptually not dissimilar to those used to sieve sand and granular materials. However, to make this technology industrially relevant, we need membranes with densely spaced pores, not individual holes created in our study to prove the concept for the first time. Only then are the high flows required for industrial gas separation achievable," says Dr. Pengzhan Sun, a lead author of the paper. The research team now plans to search for such two-dimensional materials with large intrinsic pores to find those most promising for future gas separation technologies. Such materials do exist. For example, there are various graphynes, which are also atomically thin allotropes of carbon but not yet manufactured at scale. These look like graphene but have larger carbon rings, similar in size to the individual defects created and studied by the Manchester researchers. The right size may make graphynes perfectly suited for gas separation.
10.1038/s41467-021-27347-9
Medicine
New study using novel approach for glioblastoma treatment shows promising results, extending survival
Oncolytic DNX-2401 virotherapy plus pembrolizumab in recurrent glioblastoma: a phase 1/2 trial, Nature Medicine (2023). DOI: 10.1038/s41591-023-02347-y Journal information: Nature Medicine
https://dx.doi.org/10.1038/s41591-023-02347-y
https://medicalxpress.com/news/2023-05-approach-glioblastoma-treatment-results-survival.html
Abstract Immune-mediated anti-tumoral responses, elicited by oncolytic viruses and augmented with checkpoint inhibition, may be an effective treatment approach for glioblastoma. In this multicenter phase 1/2 study, we evaluated the combination of intratumoral delivery of the oncolytic virus DNX-2401 followed by intravenous anti-PD-1 antibody pembrolizumab in recurrent glioblastoma, first in a dose-escalation and then in a dose-expansion phase, in 49 patients. The primary endpoints were overall safety and objective response rate. The primary safety endpoint was met, whereas the primary efficacy endpoint was not. There were no dose-limiting toxicities, and full-dose combined treatment was well tolerated. The objective response rate was 10.4% (90% confidence interval (CI) 4.2-20.7%), which was not statistically greater than the prespecified control rate of 5%. The secondary endpoint of overall survival at 12 months was 52.7% (95% CI 40.1-69.2%), which was statistically greater than the prespecified control rate of 20%. Median overall survival was 12.5 months (10.7-13.5 months). Objective responses led to longer survival (hazard ratio 0.20, 95% CI 0.05-0.87). A total of 56.2% (95% CI 41.1-70.5%) of patients had a clinical benefit, defined as stable disease or better. Three patients completed treatment with durable responses and remain alive at 45, 48 and 60 months. Exploratory mutational, gene-expression and immunophenotypic analyses revealed that the balance between immune cell infiltration and expression of checkpoint inhibitors may potentially inform on response to treatment and mechanisms of resistance. Overall, the combination of intratumoral DNX-2401 followed by pembrolizumab was safe, with a notable survival benefit in select patients (ClinicalTrials.gov registration: NCT02798406).

Main Glioblastoma is the most common and lethal adult primary brain tumor. The standard-of-care treatment for newly diagnosed patients includes surgical resection followed by concomitant chemoradiotherapy and adjuvant temozolomide 1. Despite maximal multimodal therapy, patients invariably experience recurrence of their disease, on average 7 months after diagnosis 1. Unfortunately, treatment options at recurrence are scarce. Existing salvage therapies have very limited efficacy, with median survival in the range of only 6-8 months after tumor progression 2. Effective treatments for recurrent disease are urgently needed. While immune checkpoint blockade by anti-PD-1 or anti-PD-L1 antibodies has improved outcomes, with objective responses in a variety of other cancers, including those in the brain such as metastatic melanoma 3, it has had limited efficacy as monotherapy for recurrent glioblastoma, where the microenvironment is innately immunosuppressive (that is, immunologically 'cold') 4,5. Oncolytic viruses are capable of reconditioning the tumor microenvironment toward a 'hot' phenotype, providing a rationale for combinatorial therapy with checkpoint inhibitors, which has been shown to improve outcomes in other cancers 6,7. DNX-2401 (tasadenoturev; Delta-24-RGD) is a conditionally replicative oncolytic adenovirus engineered to treat high-grade malignant gliomas 8,9. The virus contains two stable genetic changes in the adenoviral dsDNA genome that cause it to replicate selectively and efficiently in cancerous cells.
A dose-escalation phase 1 study demonstrated that stereotactic delivery of DNX-2401 into patients with high-grade gliomas was safe and induced cell death, initially by direct oncolysis and subsequently through an antitumor response by infiltrating immune cells, with durable responses after a single intratumoral dose 10. In this Article, we report the results of CAPTIVE (2401BT-002P; KEYNOTE-192; NCT02798406), a two-part, phase 1/2, multicenter, open-label clinical trial of combined intratumoral injection of DNX-2401 with systemic pembrolizumab for patients with recurrent glioblastoma. This is the first in-human investigation of a combined oncolytic virus with immune checkpoint blockade for recurrent glioblastoma.

Results Patient demographics and baseline characteristics A total of 49 patients from 13 of the 15 participating institutions were enrolled between 28 September 2016 and 17 January 2019 (Fig. 1a). The demographic and baseline clinical characteristics of all enrolled patients are reported in Table 1. The median age of patients was 53 years, and 41% were women. The majority of patients (80%) presented after first recurrence, and 18% of patients were using steroids at baseline. All patients had a histopathological diagnosis of glioblastoma, except one patient enrolled with gliosarcoma (2%). Most patients (90%, N = 44) had reported IDH1 wild-type tumors, four (8%) had IDH1-mutant tumors, and IDH1 mutation status was not known for one patient. All patients had received prior treatment with temozolomide and radiotherapy, six (12%) patients had prior bevacizumab treatment and five (10%) had prior treatment with a tumor-treating fields device. Fig. 1: Survival and response to treatment. a, Patient flow in the trial. b, Waterfall plot displaying the maximal change in tumor size for all patients who received full-dose DNX-2401 treatment (n = 42). Bars represent the maximal tumor change from baseline on the basis of contrast-enhanced MRI. Bars are colored according to responses classified by mRANO criteria. c, Survival for each patient by DNX-2401 dose. The bar colors show the response to treatment according to the mRANO criteria. Arrows indicate that the patient remains alive. d, Overall survival for the intent-to-treat population. Crosses denote censored data. Table 1 Patient demographics and baseline characteristics. Safety Forty-eight of 49 (98%) patients were treated with one dose of DNX-2401 after a standard biopsy, followed by pembrolizumab starting 7 days later. One patient enrolled in the first dose cohort received 5 × 10^8 viral particles (v.p.) of DNX-2401 but did not start pembrolizumab owing to delirium, which was attributed by the investigators to the anesthesia used during biopsy and was unrelated to treatment. This patient was included in the safety analysis set only, per protocol. There were no dose-limiting toxicities observed, and the maximal dose tested (5 × 10^10 v.p. DNX-2401) was selected as the declared dose for the dose-expansion phase. In total, across both the dose-escalation and dose-expansion phases, patients were treated with 5 × 10^8 (n = 4), 5 × 10^9 (n = 3) or 5 × 10^10 v.p. DNX-2401 (n = 42). The median duration of exposure to treatment with DNX-2401 and pembrolizumab was 153 days (range 21-753 days), including three patients (6%) who completed the full 2-year course of pembrolizumab therapy. An overview of adverse events (AEs) in the study is provided in Extended Data Tables 1 and 2 and Supplementary Table 1.
Overall, DNX-2401 in combination with pembrolizumab was generally well tolerated, and AEs were largely as expected for patients with recurrent glioblastoma, the majority being grade 3 or lower. There were no AEs related to adenoviral infection, and no deaths from AEs that were related to treatment. One patient died approximately 7 months after initiating treatment from hyperosmolar hyperglycemic nonketotic acidosis, which was considered unrelated to treatment. AEs considered related to treatment are summarized in Table 2. The majority of these were grade 1 or 2 events, the most common being brain edema (37%), headache (31%) and fatigue (29%). Longitudinal volumetric changes in perilesional edema are shown in Extended Data Fig. 1. Patients with and without symptomatic edema both had increases in volumetric measurements of perilesional edema from 8 weeks to 20 weeks after treatment. Patients who did not develop symptomatic edema began to show a decrease in the volume of perilesional edema after 20 weeks, whereas those who developed symptomatic edema continued to show increases after 20 weeks. Treatment-related serious AEs noted in more than one patient included brain edema (16%), dysphasia (6%) and hemiparesis (6%). Serious cerebral edema was managed with short-course dexamethasone (89%) and/or other concomitant supportive medications, including bevacizumab (18%; Supplementary Table 2). Surgical intervention was not needed for serious cerebral edema in any patient. Pembrolizumab was interrupted or discontinued for four patients who had cerebral edema but was resumed after resolution. One patient had grade 3 cerebral edema, somnolence and hemiparesis starting 23 days after initiation of treatment, leading to treatment discontinuation and resolution of the AE. A summary of serious AEs related to treatment is provided in Supplementary Table 3. Table 2 Summary of AEs related to treatment. Efficacy The efficacy and survival endpoints are summarized in Table 3. According to modified Response Assessment in Neuro-Oncology (mRANO) criteria, two patients had a complete response and three patients had a partial response (Fig. 1b,c), yielding an objective response rate of 10.4% (90% CI 4.2-20.7%) in the intent-to-treat population and 11.9% (90% CI 4.8-23.4%) in patients treated with the declared dose of DNX-2401, which was numerically greater than the prespecified historical rate of 5% but did not meet the statistical endpoint. One additional patient of interest had a complete response at the lesion where DNX-2401 was delivered, approximately 8 months after treatment; however, a new lesion at a distant site was evident at the same assessment, and the patient was therefore classified as having progressive disease. The median time to response was 3.0 months (range 1.9-17.4 months), and the median duration of response was 9.4 months (range 1.8-33.7 months) among patients with an objective response. An additional 22 patients in the intent-to-treat population and 18 patients in the declared-dose population had stable disease lasting longer than 28 days, resulting in clinical benefit rates of 56.2% (95% CI 41.1-70.5%) and 54.8% (95% CI 38.7-70.2%), respectively. The median duration of clinical benefit was 3.7 months (range 1.7-37.7 months). A summary of therapies received after treatment and at or after disease progression is presented in Supplementary Table 4.
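The response-rate intervals reported here are exact (Clopper-Pearson) binomial CIs, which can be reproduced in a few lines. A sketch assuming 5 responders among 48 evaluable intent-to-treat patients (counts inferred from the reported 10.4%, so treat them as illustrative):

```python
from scipy.stats import beta

def clopper_pearson(k, n, conf=0.90):
    """Exact two-sided binomial CI via the beta-distribution formulation."""
    alpha = 1 - conf
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

k, n = 5, 48                       # responders / evaluable (assumed from 10.4%)
lo, hi = clopper_pearson(k, n, conf=0.90)
print(f"ORR {k/n:.1%}, 90% CI {lo:.1%}-{hi:.1%}")   # ~10.4%, ~4.2%-20.7%
```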
Table 3 Summary of efficacy endpoints. Patients with objective responses did not universally harbor characteristics commonly described in prognostically favorable tumors (Table 4). All patients with objective responses had reported IDH1 wild-type tumors by immunohistochemistry (IHC), and only two of them had tumors with MGMT promoter hypermethylation. Additional targeted sequencing revealed that two patients with objective responses harbored mutations in either IDH1 or IDH2 at low allelic frequencies. Three of the patients with objective responses had received only prior radiation and chemotherapy, without prior resection of their tumor. The median tumor diameter was similar in patients with and without an objective response (32.8 mm, 95% CI 25.2-46.6 mm versus 28.4 mm, 95% CI 24.8-30.8 mm; Supplementary Fig. 1). Table 4 Baseline characteristics of patients with complete or partial responses per mRANO criteria. The two patients with complete responses each had over 80% reduction in tumor volume approximately 6 months after treatment, reaching complete response criteria by 15-18 months after treatment (Fig. 2). These two patients completed the 2-year treatment with pembrolizumab with durable responses and remain alive without evidence of disease progression. Fig. 2: Complete responses to DNX-2401 and pembrolizumab. a, Axial T1-weighted MR (top row) and FLAIR images (bottom row) obtained at baseline, 3 months, 6 months, 12 months and 38 months after infusion of DNX-2401 for one complete responder. b, The change in tumor size over time in each patient with a complete response. The dotted black line represents no change relative to baseline. The dashed red line represents the threshold for response according to the mRANO criteria. Both patients showed a response to treatment at 3 months after DNX-2401 infusion, with complete response by 15-18 months. Survival analyses The secondary efficacy endpoint of 12-month survival was met. The 12-month overall survival was 52.7% (95% CI 40.1-69.2%) in the intent-to-treat population and 53.1% (95% CI 36.8-67.0%) in patients who received the declared dose of DNX-2401 (Fig. 1d), which was greater than the prespecified threshold of 20% from an approved treatment approach. The median overall survival was 12.5 months (10.7-13.5 months) in the intent-to-treat population and 12.5 months (95% CI 10.2-13.0 months) in the declared-dose population. Patients with objective responses had statistically significantly longer survival than patients without objective responses (hazard ratio (HR) 0.20, 95% CI 0.05-0.87, P = 0.02; Extended Data Fig. 2). Three patients, all with objective responses (including the two patients with complete responses), completed the prespecified pembrolizumab treatment and remain alive at the time of writing, beyond the study interval, at 45, 48 and 60 months. Moreover, one patient with an IDH1 wild-type, MGMT-unmethylated tumor received a total of six doses of pembrolizumab with overall stable disease. This patient elected to discontinue participation in the study and remained alive over 34 months after initiation of treatment.
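The survival endpoints above are standard Kaplan-Meier quantities. A minimal sketch of how a 12-month overall survival estimate is computed, using the lifelines package on made-up censored follow-up data (illustrative only; the trial's patient-level data are not public):

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(1)
# Hypothetical follow-up: survival times in months and death indicators (1 = died)
durations = rng.exponential(scale=15.0, size=49).clip(0.5, 40.0)
events = rng.integers(0, 2, size=49)          # 0 = censored at last follow-up

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events, label="DNX-2401 + pembrolizumab")
print(kmf.survival_function_at_times(12.0))   # KM estimate of 12-month OS
print(kmf.median_survival_time_)              # KM median OS
```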
Exploratory associations We considered that concurrent use of medications may have affected outcomes. Physicians were permitted to use low-dose bevacizumab or corticosteroids to manage cerebral edema in this trial. Baseline corticosteroid use and corticosteroid use throughout the study were not statistically associated with outcomes, although use of corticosteroids throughout the study approached the threshold for statistical significance in some instances (Extended Data Table 3). Moreover, none of the patients with an objective response received bevacizumab during treatment. We also considered that variability in intrinsic patient and tumor factors might be associated with differences in patient outcomes. To characterize potential biomarkers of treatment response, we obtained gene expression data for the 38 patients with biopsy specimens available before treatment. We divided tumors from this study into three tumor microenvironment subtypes (TME-high, TME-medium and TME-low) on the basis of the degree of immune cell enrichment (Extended Data Fig. 3a), as recently described 11. TME-high tumors had high scores for multiple different immune cells but also highly expressed multiple complementary suppressive immune checkpoint genes (Extended Data Fig. 4). By contrast, TME-low tumors had low immune cell scores with low expression of immune checkpoint genes. TME-medium tumors had intermediate immune cell scores and expression of PDCD1 (the gene that encodes PD-1) but relatively low expression of other checkpoint proteins. We found that pre-treatment gene expression levels of PDCD1, but not CD274 (the gene that encodes PD-L1), were statistically significantly associated with reduction in tumor size (Extended Data Fig. 3b and Supplementary Fig. 2). All of the patients who had an objective response had TME-medium tumors before treatment (29.4%, 95% CI 10.3-55.6%, P = 0.012). Patients with TME-medium tumors were more likely to have clinical benefit from treatment (odds ratio (OR) 4.08, 95% CI 1.02-19.4, P = 0.036; Extended Data Fig. 3c) and also had statistically significantly longer survival in our cohort (HR 2.27, 95% CI 1.09-4.49, P = 0.027; Extended Data Fig. 3d). Patient samples from a prior trial investigating adjuvant anti-PD-1 monotherapy in recurrent glioblastoma 12 were also divisible into the same three TME subtypes, but associations between TME subtypes and outcomes were less clear in that monotherapy-treated population (Extended Data Fig. 3c,e). Ten patients also had biopsy specimens taken at the time of disease progression after treatment, allowing a biological assessment of matched-pair tissues. Of these ten patients, one initially had a partial response to treatment before progression, while the other nine did not demonstrate objective responses (three patients with progressive disease as best response and six with initially stable disease as best response). Comparing gene expression profiles at disease progression after treatment with those at baseline revealed several differentially expressed genes (Extended Data Fig. 5a). Genes overexpressed in post-treatment specimens were highly enriched for pathways involved in immune system activation and regulation by functional enrichment analysis (Extended Data Fig. 5b). The patient with a partial response to treatment showed heightened immune activity after treatment relative to other patients, with the highest levels of interferon gamma and downstream signaling, the greatest infiltration of T cells and the highest score for a T-cell-inflamed microenvironment (Extended Data Fig. 5c) 13.
Moreover, the expression of several immune checkpoint genes such as TIGIT (log2 fold change (FC) 1.77), LAG3 (log2 FC 2.05) and CD276 (log2 FC 2.06) was consistently increased in post-treatment samples, and this increase was highest for the patient with a partial response to treatment. We performed immunophenotypic characterization of tumors before and after treatment by blinded immunohistochemical and multiplex immunofluorescence analyses. Patients with TME-medium and TME-high tumors by gene expression subtyping also showed progressively greater densities of immune cell infiltrates by IHC and immunofluorescence (Extended Data Fig. 6a-b,d). Comparing specimens before and after treatment, we found that increases in the density of microglia (Iba1), macrophages (CD68) and lymphocytes (CD3, CD4 and CD8) after treatment were most evident in the patient who showed an objective response to treatment (Extended Data Fig. 6c,e). Certain pathogenic mutations are potentially associated with prognosis and specific response to checkpoint inhibition in glioblastoma 14. Clinically relevant molecular features were reported by investigators for tumor biopsies analyzed using various assays at each clinical site. Investigators reported MGMT status, IDH1/2 mutation and, for 42 of 49 subjects, pathogenic mutations. Targeted next-generation sequencing was also performed separately on available tumor biopsies from a subset of patients. A notable number of pathogenic mutations were detected, including those in TP53, NF1, PTEN, MTOR and RB1, as were a few mutations in POLE and POLD1. There was no clear association between these specific molecular features, including tumor mutational burden, and response to treatment (Table 4 and Supplementary Table 5). Anti-adenovirus antibodies were measured by direct immunofluorescence assay in the serum of patients before treatment and throughout the course of the trial. All patients were seropositive for IgG antibodies against the adenoviral hexon protein before treatment with DNX-2401, and, in general, anti-adenovirus IgG levels increased within 2 months post treatment, with levels sustained longest in patients treated with 5 × 10^10 v.p. DNX-2401, compared with lower doses (Extended Data Fig. 7a,b). We considered that variability in the systemic immunogenic response to DNX-2401 might have affected outcomes. The median overall survival of patients with and without a systemic immunogenic response to DNX-2401 delivery, which we defined as a greater than fourfold increase over baseline levels of anti-adenovirus antibodies, was similar (12.5 months, 95% CI 10.8-15.9 months versus 12.8 months, 95% CI 10.6 months to not reached). These findings were unchanged using a more stringent threshold of a greater than tenfold increase over baseline levels of anti-adenovirus antibodies (12.9 months, 95% CI 12.0 months to not reached versus 12.3 months, 95% CI 8.9-16.6 months; Extended Data Fig. 7c,d).

Discussion Glioblastoma is a devastating disease, and recurrence is inevitable after initial treatment with radiotherapy and concurrent and adjuvant temozolomide chemotherapy. At progression, treatment options are very limited and of marginal efficacy. Immune checkpoint blockade has greatly improved outcomes in other advanced solid cancers, such as melanoma 15,16,17 and non-small cell lung cancer 18,19. However, the innately immunologically cold microenvironment of glioblastoma has presumably rendered immune checkpoint blockade less effective for this disease 4,5.
DNX-2401 (Delta-24-RGD) is a conditionally replicative oncolytic adenovirus with a 24-base-pair deletion in the E1A gene that restricts replication of the virus to malignant cells with defective retinoblastoma signaling. DNX-2401 also has an RGD peptide insertion in the fiber knob that allows the virus to anchor directly to integrins, improving the infectability of glioblastoma cells 9. Preclinical studies of DNX-2401 in glioma mouse models showed promising antitumor immune activity as early as 1-2 weeks after delivery of a single dose of virus, with the potential for longer-term antigen-specific memory responses 9,20. This led to the first in-human trials of DNX-2401 for glioblastoma, in which, in addition to direct oncolytic effects, we showed that delivery of the virus into tumors induced an immunogenic environment with increased T-cell infiltration and altered expression of checkpoint proteins 10. Treatment with an oncolytic virus and immune checkpoint blockade combines the initial local effects of the oncolytic virus on the tumor microenvironment with the systemic effects of innate and adaptive immune responses arising from virus replication and PD-1 inhibition 7. This combination has led to improved outcomes in other tumors, such as melanoma 6, pointing to the possibility of therapeutic benefit from combination therapy in glioblastoma. Systematic screening of co-signaling molecules after DNX-2401 treatment in preclinical glioma models revealed significant increases in PD-1 expression that would prime the immune system for effective synergy with subsequent anti-PD-1 therapy 21. Indeed, combination therapy consisting of a single intratumoral dose of DNX-2401 followed by systemic pembrolizumab 1 week after viral treatment improved survival compared with monotherapy with either virus or pembrolizumab alone in glioma mouse models, providing the rationale for further investigation in humans 21. Here we report the results of a two-part, phase 1/2, multicenter, open-label clinical trial evaluating the safety and efficacy of combined intratumoral delivery of DNX-2401 with systemic pembrolizumab for patients with recurrent glioblastoma treated at 13 institutions in North America. All centers used purpose-built cannulas to standardize the delivery of virus into the tumor, eliminating backflow and ensuring full administration of the virus to the tumor. A total of 48 of 49 patients successfully received treatment with DNX-2401 and pembrolizumab. We tested between 5 × 10^8 and 5 × 10^10 v.p. of DNX-2401 delivered sequentially with pembrolizumab and found that the safety profile was consistent with prior studies of oncolytic viruses or immunotherapies for brain tumors 3,10,22. There were no dose-limiting toxicities in the dose-escalation phase of this study, and no deaths directly related to the treatment regimen. The most common serious AE reported was neurological symptoms related to an increase in peritumoral inflammation (cerebral edema), which occurred in 16% of patients. We anticipated the possibility of treatment-induced cerebral edema when designing this study, owing to the inflammatory responses observed in the phase 1 study of DNX-2401 monotherapy 10, and therefore allowed a short-course steroid or low-dose bevacizumab regimen to mitigate these effects. All serious cerebral edema events resolved with the anticipated medical measures, and surgical intervention to remove tumor due to tissue swelling was not necessary for any patient.
We established the time course of edema development in this trial by serial volumetric analysis of changes in the perilesional fluid-attenuated inversion recovery (FLAIR) signal on imaging. We found increases in edema volume as early as 8 weeks after treatment that were sustained to 20 weeks, even in patients who never became symptomatic with cerebral edema. These data can help inform the expected time interval of cerebral edema for future trials of immunotherapy in recurrent glioblastoma. The non-neurologic toxicity profile in this study was otherwise comparable to that previously reported for pembrolizumab 5. In total, five patients had objective responses, with two patients showing durable complete responses of >45 months and three patients remaining alive at the writing of this manuscript. The objective response rate was 10.4% (90% CI 4.2-20.7%). It is noteworthy that one additional patient who received the declared dose of DNX-2401 had a complete response at the site of treatment; however, this patient developed a new lesion at a distant site, resulting in a classification of progressive disease. This patient remained alive for a total of 12.3 months after treatment. In the previous phase 1 trial evaluating DNX-2401 monotherapy in recurrent glioma, there was also one patient with a complete response who developed a distant nodule several years after treatment 10. Pathological examination of the nodule after resection showed only necrosis and inflammation, without evidence of tumor. Although the patient in the present trial did not undergo resection of the new nodule, it is possible that the observed radiographic changes reflect an adaptive memory antitumor response similar to that seen in the original phase 1 trial of DNX-2401 monotherapy, and not progressive disease. Beyond this, prior reports of durable responses to immunotherapies have largely been limited to patients with favorable biological characteristics 23. Patients with objective responses in this study had tumors that did not universally harbor the prognostically favorable mutation in IDH1 and included both MGMT-methylated and MGMT-unmethylated tumors, representing the group of glioblastomas that most desperately needs efficacious therapies. The median overall survival was 12.5 months (10.7-13.5 months), and overall survival at 12 months was 52.7% (95% CI 40.1-69.2%), which was greater than the prespecified threshold of 20% based on the approved treatment of tumor-treating fields (Novo-TTF) 24. The 12-month overall survival was 32% in patients treated with DNX-2401 alone 10, while median overall survival was 9.3 months and 9.8 months with DNX-2401 or PD-1 blockade alone in prior trials 5,10. While the primary endpoint of objective response was not met, the secondary endpoint of 12-month survival, which is more clinically meaningful and reliable than response rate, was met, and the survival of objective responders is encouraging, suggesting that tumor control led to improved survival. Although this trial was not designed to distinguish the effects of DNX-2401 versus pembrolizumab versus combination therapy, the notable survival data point to the potential for improved efficacy when combining an oncolytic virus with checkpoint inhibition. As cross-trial comparisons have limitations, further focused comparative studies are needed. While the use of bevacizumab may complicate response assessment in trials by inducing changes in the contrast enhancement seen on imaging, none of the patients with objective responses received bevacizumab during the study.
Moreover, we did not find that baseline corticosteroid use was associated with outcomes in our study, confirming the findings of a prior study evaluating neoadjuvant checkpoint blockade in recurrent glioblastoma 25. This may be explained by the fact that patients using more than 4 mg per day of dexamethasone at baseline were excluded from both studies. Although associations between steroid use throughout this study and outcomes were not statistically significant, some comparisons approached the threshold for significance. Whether this association reflects symptom management during disease progression or a potential modulation of antitumor immune responses is unclear and warrants dedicated investigation in larger cohorts. We obtained matched mutational and gene expression data on tumor specimens from patients, where available. Three of the patients with objective responses (60%) had tumors with a tumor mutational burden (TMB) greater than 10 mutations Mb^-1, while two patients with objective responses (40%) had tumors with a TMB of less than 10 mutations Mb^-1. Although TMB is a known predictive biomarker of response to checkpoint inhibition in a range of advanced cancers, this relationship is more complex and has been less consistent in prior investigations of glioblastoma 26. One of the major determinants linking TMB to response to checkpoint inhibition is alterations in mismatch repair proteins or the polymerase E and D (POLE and POLD) genes 26. None of the patients who showed objective responses had mutations in POLE or POLD genes. Although this suggests that antitumor responses after combined oncolytic virus and checkpoint inhibition in glioblastoma may be less dependent on TMB than in other solid cancers, further investigation in much larger cohorts is warranted before drawing definitive conclusions. Using gene expression data, we found that objective responses occurred exclusively in patients with a moderately inflamed microenvironment and modest PD-1 expression (TME-medium) before treatment (29.4%, 95% CI 10.3-55.6%). Clinical benefit rates and overall survival were also greater for TME-medium tumors in this trial. These findings are consistent with prior investigations and with our own findings showing that adjuvant anti-PD-1 inhibition as monotherapy does not seem to improve survival in TME-high tumors 11,25. While TME-high tumors are enriched in immune cell infiltrates, they also highly express multiple different suppressive immune checkpoints, leading to an exhausted immune microenvironment through complementary mechanisms. TME-medium tumors are primed with a moderate degree of immune cells and express moderate levels of PD-1. DNX-2401 can induce further infiltration of cytotoxic T cells and expression of PD-1 in these tumors, which can then be targeted with subsequent anti-PD-1 treatment without immunosuppression from alternative checkpoint proteins. We also obtained specimens at disease progression after treatment for ten patients in this trial. We found that the expression of several different immune checkpoints, such as TIGIT, LAG3 and B7-H3, was elevated after treatment, pointing to the potential for using multiple parallel immune checkpoint inhibitors in TME-medium tumors that eventually develop disease progression. A similar approach could potentially be considered for TME-high tumors. There are limitations to this study that require further investigation. First, this trial did not include a comparator cohort.
Further trials directly comparing combination therapy with monotherapy are needed before considering large-scale randomized trials. Second, this trial evaluated a single dose of intratumoral oncolytic virus. Emerging data since the conception of this study have shown some potential benefit from multiple doses of oncolytic virus 27. The safety of multiple doses of DNX-2401 with pembrolizumab needs further investigation given the local immune-stimulatory effects of treatment, provided the logistical considerations of safely conducting such a trial can be addressed. Third, we did not find that variability in seroconversion, as measured by changes in anti-Ad5 IgG levels, affected patient outcomes. While changes in anti-Ad5 IgG levels can serve as a surrogate for seroconversion, a more definitive assessment of seroconversion would have benefited from quantification of neutralizing antibodies against human adenovirus 8. Lastly, we identified biological correlates of outcome using gene expression, mutational data and immunophenotyping that can be leveraged to identify subsets of patients who might benefit most from treatment. It should be noted that these findings were exploratory, and future trials should consider maximizing the collection of specimens before and after treatment to allow even more comprehensive characterization of biological outcomes. To our knowledge, the present study is the first to report on combined direct delivery of oncolytic viral therapy and systemic checkpoint inhibition for any brain tumor. We identified a safe dose of DNX-2401 combined with pembrolizumab, with objective and durable responses, including two complete responses, and a survival benefit for select patients across multiple institutions. These results are promising and particularly relevant in this population of patients, who did not receive repeat resection of tumor and for whom efficacious and nontoxic treatments are entirely lacking. As well, we demonstrate the value that translational analyses and endpoints can add in advancing our understanding of the molecular mechanisms and biomarkers of response and/or resistance to treatment in clinical trial settings.

Methods Patients Adult patients with histologically confirmed glioblastoma or gliosarcoma, presenting with documented failure of previous surgical resection, chemotherapy and/or radiation at first or second recurrence, and with a Karnofsky performance score of at least 70, were eligible. All patients were required to have a single contrast-enhancing tumor of at least 1 cm in two planes but no more than 4 cm in any single plane, as assessed by magnetic resonance imaging (MRI). Surgical resection must not have been possible or planned as part of the treatment for the presentation, and the tumor must have been accessible for stereotactic delivery of DNX-2401. Patients with multifocal or bilateral disease were excluded. The full inclusion and exclusion criteria are detailed in the Supplementary Methods. Design To evaluate the safety of combining DNX-2401 with pembrolizumab, we conducted an initial dose-escalation phase to determine a safe dose of DNX-2401 in combination with pembrolizumab, followed by a dose-expansion phase. All patients received a single dose of DNX-2401 by stereotactic injection at the time of a standard tumor biopsy, followed by pembrolizumab infused intravenously at a dose of 200 mg over 30 min every 3 weeks, starting 7 days after DNX-2401. Resection of tumors was not permitted.
Treatment with pembrolizumab continued for up to 2 years, or until one of the following occurred: disease progression, unacceptable toxic effects or withdrawal of consent. Dose escalation evaluated 5 × 10 8 , 5 × 10 9 and 5 × 10 10 v.p. DNX-2401 in combination with standard dosing of pembrolizumab in a 3 + 3 design. All patients underwent a stereotactic biopsy to document the presence of tumor tissue before delivery of DNX-2401. Immediately after biopsy, a stereotactic-compatible neuro-ventricular cannula (Alcyone MEMS; ClearPoint SmartFlow) was inserted into the tumor to deliver the precise targeted dose of DNX-2401 via a single micro-tip at a rate of 0.9 ml h −1 over approximately 1 h. The cannula was left in place for 10 min after administration of virus to allow v.p. to diffuse without backflow before removal. Assessments Patients were continuously monitored throughout the study for safety as outlined in the schedule of assessments in the study Protocol. AEs and serious AEs were graded according to the National Cancer Institute Common Terminology Criteria for Adverse Events, version 4.03, and their relationship to the treatment administered was assessed. For the dose-escalation phase, the dose-limiting toxicity (DLT) window of observation was the first 21 days after the initial pembrolizumab infusion. The occurrence of any of the following toxicities was considered a DLT, if judged by the Investigator to be possibly, probably or definitely related to administration of DNX-2401 and pembrolizumab (and not to the administration procedure):
1. Grade 4 nonhematologic toxicity (not laboratory)
2. Grade 4 hematologic toxicity lasting ≥7 days
3. Grade 3 nonhematologic toxicity (not laboratory) lasting >3 days despite optimal supportive care
4. Any Grade 3 or Grade 4 nonhematologic laboratory value if: medical intervention is required to treat the subject, or the abnormality leads to hospitalization, or the abnormality persists for >1 week
5. Febrile neutropenia Grade 3 or Grade 4: Grade 3 is defined as ANC <1,000 mm −3 with a single temperature of >38.3 °C (101 °F) or a sustained temperature of ≥38 °C (100.4 °F) for more than 1 h; Grade 4 is defined as ANC <1,000 mm −3 with a single temperature of >38.3 °C (101 °F) or a sustained temperature of ≥38 °C (100.4 °F) for more than 1 h, with life-threatening consequences and urgent intervention indicated
6. Thrombocytopenia <25,000 mm −3 if associated with: a bleeding event that does not result in hemodynamic instability but requires an elective platelet transfusion, or a life-threatening bleeding event that results in urgent intervention and admission to an Intensive Care Unit
7. Prolonged delay (>2 weeks) in initiating cycle 2 due to treatment-related toxicity
8. Missing >10% of pembrolizumab doses as a result of AE(s) during the first cycle
9. Grade 5 toxicity
Treatment response was determined by serial protocolized contrast-enhanced MRI every 4 weeks for 28 weeks, and thereafter every 8 weeks for the remainder of the treatment period. Patients who completed the treatment phase entered the long-term response and survival follow-up phase of the study for the rest of life, with MRI every 16 weeks. Objective responses were evaluated by the RANO criteria 28 , 29 and mRANO criteria 30 . Complete and partial responses required confirmation on a consecutive scan 4 weeks after the initial response was observed.
Patients with suspected radiological progression were permitted to remain on study until progression was confirmed by follow-up MRI separated by a minimum of 4 weeks. Endpoints and statistical analyses The analyses reported in this study were performed according to the statistical analysis plan. All enrolled patients were included in the safety analysis set, and patients were considered evaluable for efficacy if they received at least one dose, or part of one dose, of either study drug, had measurable tumor at baseline and completed the week 4 follow-up visit. Patients who discontinued study participation for any reason other than progressive disease or study treatment-related toxicity before the week 4 visit were not considered evaluable and were replaced; however, they continued to be monitored for safety. The primary safety objective was to evaluate the safety of escalating doses of DNX-2401 and the overall safety of the declared dose of intratumoral DNX-2401 when followed by sequential intravenous administration of pembrolizumab. AEs and serious AEs were summarized for all patients in the study and were considered treatment related if reported as possibly, probably or definitely related to study drug. The primary efficacy objective was to determine the objective response rate, defined as the percentage of patients who had complete or partial responses based on mRANO criteria 30 . The primary endpoint was tested in a single-arm design. The sample size estimation was based on a prespecified historical response rate of 5%; with α = 0.05, a total of 39 evaluable subjects in the declared dose phase would yield 80% power against an alternative objective response rate of 18%. The objective response rate was reported as the number and percentage of subjects with an objective response and the corresponding 95% CI based on the exact binomial method (Clopper–Pearson method); a minimal sketch of these calculations is given below. Because the type I error was set at 5% (one-sided), it was predetermined that the 90% CI would also be provided. Secondary efficacy objectives were to evaluate 12 month overall survival as well as the clinical benefit rate, defined as the proportion of patients treated with DNX-2401 and pembrolizumab who had stable disease, complete response or partial response. Overall survival was defined as the time from the start of treatment (DNX-2401 injection) until death (or last follow-up). Overall survival at 12 months was summarized using Kaplan–Meier methods, and outcomes were compared to a historical rate of 20% from an approved treatment approach, NovoTTF 24 . Overall survival of patients with objective responses was compared to that of patients without objective responses using the 6 month landmark Kaplan–Meier method to account for potential lead-time bias 31 . IDH1 mutation status and MGMT methylation status were assessed locally at each institution. Follow-up of survival for patients remaining alive after database lock was used for descriptive purposes only. Statistical and computational analyses were performed using SAS 9.4 and R 4.1.3. Study organization and oversight The study was conducted in compliance with the Protocol at 15 clinical trial sites in the United States and Canada, as well as with recognized international standards, including the Good Clinical Practice guidelines of the International Conference on Harmonisation and the principles of the Declaration of Helsinki. The Protocol and its amendments were approved by the institutional review board of each participating trial site.
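To make the statistical design above concrete, the following is a minimal, non-authoritative sketch in Python (SciPy) of the exact binomial power calculation and the Clopper–Pearson interval. The trial's actual computations were performed in SAS and R; only the design parameters (n = 39, historical rate 5%, alternative rate 18%, one-sided α = 0.05) come from the text, and the 5/17 example is the response count implied by the reported 29.4% (95% CI 10.3–55.6%) in TME medium tumors.

from scipy import stats

n, p0, p1, alpha = 39, 0.05, 0.18, 0.05   # design parameters from the text

# Smallest response count k at which an exact one-sided binomial test
# rejects the 5% historical rate: P(X >= k | n, p0) <= alpha
k_crit = next(k for k in range(n + 1) if stats.binom.sf(k - 1, n, p0) <= alpha)

# Exact power under the 18% alternative (~0.85 here, consistent with
# the stated 80% power)
power = stats.binom.sf(k_crit - 1, n, p1)

def clopper_pearson(k, n, alpha=0.05):
    # Exact two-sided binomial confidence interval via beta quantiles
    lo = stats.beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = stats.beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

print(k_crit, power)            # 5 responses, power ~0.85
print(clopper_pearson(5, 17))   # ~(0.103, 0.556), i.e. 10.3-55.6%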
Voluntary written informed consent was obtained from every patient before participation in this study. DNX-2401 preparation, handling and administration followed institutional standards for biosafety level 2 agents. Anti-adenovirus antibodies Anti-hexon IgG antibody levels were determined before and after treatment by ELISA from patient serum samples according to the manufacturer's instructions (Adenovirus IgG ELISA Kit; DEIA309; Creative Diagnostics). Absorbance at 450 nm was measured using a Synergy H4 plate reader (BioTek), and concentrations were calculated on the basis of a standard curve (Gen 5 software Version 3.0, BioTek). Anti-adenovirus IgG serum concentration increases of fourfold or greater were considered seroconversions. A more stringent threshold of tenfold or greater increases was also tested. Targeted mutational sequencing Targeted next-generation sequencing was performed on DNA extracted from formalin-fixed, paraffin-embedded (FFPE) pretreatment tumor biopsies available from 28 patients. Tumor samples from 18 subjects were sequenced by NeoGenomics using the NeoType Discovery Profile for Solid Tumor. Tumor samples from ten subjects were sequenced by Novogene using Novogene PM 2.0. Gene expression profiling and analyses RNA was extracted from FFPE pretreatment tumor biopsies available from 38 patients and analyzed retrospectively on the NanoString nCounter system. For ten patients, tumor biopsy specimens were also available at the time of disease progression, allowing for an examination of gene expression changes before and after treatment in matched patient samples. The geometric mean of canonical marker genes was used to compute the scores for immune cell types 32 , functional orientation markers and signature scores reported in this study, unless otherwise explicitly stated. Functional orientation markers and the chemokine and cytolytic signature scores were obtained from previous studies 11 , 33 , 34 . The remaining marker genes are provided in Supplementary Table 6. A T-cell-inflamed signature was computed as previously described using a weighted sum of normalized expression values of 18 inflammatory genes (CCL5, CD27, CD274 (PD-L1), CD276 (B7-H3), CD8A, CMKLR1, CXCL9, CXCR6, HLA.DQA1, HLA.DRB1, HLA.E, IDO1, LAG3, NKG7, PDCD1LG2 (PD-L2), PSMB10, STAT1 and TIGIT) related to antigen presentation, chemokine expression, cytolytic activity and adaptive immune resistance 13 . Glioblastoma microenvironment subtypes were obtained by partition-around-medoid clustering using immune cell type scores, as previously described 11 . Differentially expressed genes between groups were identified by comparing log2 fold changes (log2FC) and Welch's P values. Genes with absolute log2FC > 1 and P < 0.05 were considered differentially expressed, unless otherwise specified. Functional enrichment analysis was performed using gProfiler. Previously published datasets Zhao et al. previously published transcriptomic data from patients receiving anti-PD-1 therapy for high-grade gliomas 12 . A total of 16 patients had transcriptomic data available before initiation of anti-PD-1 therapy, and 9 patients also had transcriptomic data available at progression after initiating anti-PD-1 therapy. The transcriptomic data from these 25 patients were downloaded from the SRA (accession PRJNA482620), and clinical annotation was provided by the authors. Response was considered as stable disease or better in this study.
Associations with outcome were based on overall survival after initiating anti-PD-1 therapy. Edema volumetric analysis Digital Imaging and Communications in Medicine (DICOM) files for study MRIs were imported into Horos (version 3.3.6), and a blinded reviewer used non-motion-degraded axial FLAIR sequences to segment perilesional FLAIR hyperintense signal. The Horos volume generator function was used to determine the total FLAIR signal volume for each study MRI. The volume of edema at each study MRI was normalized relative to baseline levels. Grouped comparisons were made by calculating the mean normalized edema volume with 95% confidence intervals at the timepoints outlined in the protocol (every 4 weeks for 28 weeks and every 8 weeks thereafter). IHC We performed immunohistochemical analyses for myeloid cell markers (Iba-1, CD68 and CD163) and lymphoid cell markers (CD3, CD4 and CD8) in samples with available tissue before and after treatment in this trial. Staining and subsequent annotation and analyses were performed blinded to clinical status. Slides with 5 µm FFPE tissue sections were rehydrated, and a sodium citrate-dihydrate buffer or Tris–EDTA buffer was used for heat-mediated antigen retrieval. A 3% hydrogen peroxide in methanol solution was used to block endogenous peroxidase activity. Blocking solution (5% bovine serum albumin in phosphate-buffered saline plus 0.1% Triton X-100) was applied to slides for 1 h at room temperature. Subsequently, primary antibodies including anti-CD3 (Agilent, M725401-2, mouse monoclonal, 1:100), anti-IBA1 (Wako, 019-19741, rabbit polyclonal, 1:1,500), anti-CD68 (Agilent, M0514, mouse monoclonal, 1:200), anti-CD4 (Abcam, ab133616, rabbit monoclonal, 1:100) and anti-CD8 (Abcam, ab93278, rabbit monoclonal, 1:250) were applied overnight at 4 °C in blocking solution. A 1 h incubation with secondary antibody was performed, followed by processing with the DAKO polymer-HRP system and DAB peroxidase kit, counterstaining with hematoxylin, dehydration of the tissue and coverslipping. Whole-slide images were digitized, and for each slide tumor versus non-tumor content was annotated and representative images were selected. Proportions of stain-positive cells were quantified using HALO (version 3.0311, Indica Labs) software algorithms defined to identify cells with either nuclear or cytoplasmic staining as a fraction of all cells. This algorithm was applied to all annotated tissue sections in an unbiased, systematic manner, and the density of immunopositivity per square millimeter was recorded for each antibody. PD-L1 protein expression analysis was performed by NeoGenomics Laboratories (NeoGenomics) under the direction of Merck using FFPE tumor biopsy samples according to standard protocols (PD-L1 IHC 22C3 assay). Multiplex immunofluorescence staining, tissue imaging and cell phenotyping A validated and standardized multiplex immunofluorescence protocol was developed for simultaneous detection of CD3, CD8, CD11b, CD163, GFAP and DAPI in a single FFPE tissue section. The validation pipeline for the multiplex immunofluorescence protocol has been previously described by our group 8 . Briefly, whole-slide tissue sections were deparaffinized and subjected to sequential rounds of antibody staining. Antigen retrieval was performed using Dako PT-Link heat-induced antigen retrieval with low pH (pH 6) or high pH (pH 9) target retrieval solution (Dako).
The antibody panel included CD11b (rabbit monoclonal, clone EPR1344, 1:1,000, Abcam, product number ab133357), CD163 (mouse monoclonal, clone MRQ-26, ready-to-use, Cell Marque, product number 760-4437), CD3 (rabbit polyclonal, IgG, ready-to-use, Agilent, product number IR503), CD8 (mouse monoclonal, clone C8/144B, ready-to-use, Agilent, product number IR623), and GFAP (mouse monoclonal, clone 6F2, 1:500, Agilent, product number M0761). After all sequential rounds, nuclei were counterstained with spectral DAPI (Akoya Biosciences) and sections were mounted with Faramount Aqueous Mounting Medium (Dako). Multiplexed immunofluorescence slides were scanned on a Vectra-Polaris Automated Quantitative Pathology Imaging System (Akoya Biosciences). Spectral unmixing was performed using inForm software (version 2.4.8, Akoya Biosciences), as previously described. Image analysis was performed using QuPath and Fiji/ImageJ. Briefly, cells were segmented on the basis of nuclear detection using the StarDist 2D algorithm. A random-trees classifier was trained for each cell marker. Cells were then subclassified as CD3 + , CD8 + , CD11b + and CD163 + cells. CD4 + T cells were defined as CD3 + CD8 − . Cells negative for these markers were defined as 'other cell types'. Measurements were calculated as cell densities (cells mm −2 ). GFAP was used to identify tumor areas. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability Pseudonymized participant data, including outcomes and relevant reported patient characteristics, are shared as Supplementary Information. Processed gene expression data that can be linked to pseudonymized participant data are provided at GSE226976. Previously published data were accessed from the SRA (accession PRJNA482620), with clinical annotation provided by the authors. Custom algorithms or software were not used to generate the results reported in this manuscript.
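For readers who wish to reproduce the style of scoring described under 'Gene expression profiling and analyses' above, a hedged sketch in Python follows. The marker genes and values below are placeholders (the actual gene sets are given in Supplementary Table 6 and the cited references, and the 18-gene signature weights come from the cited publication), so this illustrates the computation only, not the published scores.

import numpy as np

def cell_type_score(expr, marker_genes):
    # Geometric mean of linear-scale expression over a cell type's markers
    vals = np.array([expr[g] for g in marker_genes], dtype=float)
    return float(np.exp(np.log(vals).mean()))

def weighted_signature_score(expr_norm, gene_weights):
    # Weighted sum of normalized expression values, as for the
    # 18-gene T-cell-inflamed signature (weights not reproduced here)
    return sum(w * expr_norm[g] for g, w in gene_weights.items())

# Illustrative placeholder data only
expr = {"CD8A": 220.0, "NKG7": 310.0, "GZMK": 150.0}
print(cell_type_score(expr, ["CD8A", "NKG7", "GZMK"]))

Cell-type scores computed this way can then be clustered (partition around medoids) to derive the TME high/medium/low subtypes referred to above.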
A new international study published in Nature Medicine and presented as a late-breaking abstract at the American Association of Neurological Surgeons (AANS) annual conference, shows great promise for patients with glioblastoma. Drs. Farshad Nassiri and Gelareh Zadeh, neurosurgeons at the University Health Network (UHN) in Toronto, published the results of a Phase 1/2 clinical trial investigating the safety and effectiveness of a novel therapy which combines the injection of an oncolytic virus—a virus that targets and kills cancer cells—directly into the tumor, with intravenous immunotherapy. The authors found that this novel combination therapy can eradicate the tumor in select patients, with evidence of prolonged survival. Investigative work by the authors also revealed a new genetic signature within tumor samples that has the potential to predict which patients with glioblastoma are most likely to respond to treatment. "The initial clinical trial results are promising," says Dr. Zadeh, who is also Co-Director of the Krembil Brain Institute and a Senior Scientist at the Princess Margaret Cancer Center. "We are cautiously optimistic about the long-term clinical benefits for patients." Glioblastoma is a notoriously difficult-to-treat primary brain cancer. Despite aggressive treatment, which typically involves surgical removal of the tumor and multiple chemotherapy drugs, the cancer often returns, at which point treatment options are limited. Immune checkpoint inhibitors are effective treatments for a variety of cancers, but they have had limited success in treating recurrent glioblastoma. This novel therapy involves the combination of an oncolytic virus and immune checkpoint inhibition, using an anti-PD-1 antibody as a targeted immunotherapy. First, the team delivered the virus by accurately localizing the tumor using stereotactic techniques and injecting the virus through a small hole and a purpose-built catheter. Then, patients received an anti-PD-1 antibody intravenously, every three weeks, starting one week after surgery. "These drugs work by preventing cancer's ability to evade the body's natural immune response, so they have little benefit when the tumor is immunologically inactive—as is the case in glioblastoma," explains Dr. Zadeh. "Oncolytic viruses can overcome this limitation by creating a more favorable tumor microenvironment, which then helps to boost anti-tumor immune responses." The combination of the oncolytic virus and immune-checkpoint inhibition results in a "double hit" to tumors; the virus directly causes cancer cell death, but also stimulates local immune activity causing inflammation, leaving the cancer cells more vulnerable to targeted immunotherapy. Dr. Zadeh and colleagues evaluated the innovative therapy in 49 patients with recurrent disease, from 15 hospital sites across North America. UHN, which is the largest research and teaching hospital in Canada and the only Canadian institution involved in the study, treated the majority of the patients enrolled in the trial. The results, published in Nature Medicine, show that this combination therapy is safe, well tolerated and prolongs patient survival. The therapy had no major unexpected adverse effects and yielded a median survival of 12.5 months—considerably longer than the six to eight months typically seen with existing therapies. "We're very encouraged by these results," says Dr. Farshad Nassiri, first author of the study and a senior neurosurgery resident at the University of Toronto. 
"Over half of our patients achieved a clinical benefit—stable disease or better—and we saw some remarkable responses with tumors shrinking, and some even disappearing completely. Three patients remain alive at 45, 48 and 60 months after starting the clinical trial." "The findings of the study are particularly meaningful as the patients in the trial did not have tumor resection at recurrence—only injection of the virus—which is a novel treatment approach for glioblastoma. So, it's really remarkable to see these responses," says Dr. Zadeh. "We believe the key to our success was delivering the virus directly into the tumor prior to using systemic immunotherapy. Our results clearly signal that this can be a safe and effective approach," adds Dr. Nassiri. The team also performed experiments to define mutations, gene expression, and immune features of each patient's tumor. They discovered key immune features which could eventually help clinicians predict treatment responses and understand the mechanisms of glioblastoma resistance. "In general, the drugs that are used in cancer treatment do not work for every patient, but we believe there is a sub-population of glioblastoma patients that will respond well to this treatment," says Dr. Zadeh. "I believe this translational work, combining basic bench science and clinical trials, is key to moving personalized treatments for glioblastoma forward." This is one of the few clinical trials with favorable results for glioblastoma over the last decade, and it was truly a team effort. "The trial would not have been possible without our incredible OR teams, research safety teams and researchers—including Dr. Warren Mason and his team at Princess Margaret Cancer Center—and our brave patients and their families. We're also grateful to the Wilkins Family for providing the funds to enable us to complete trials that advance care for our patients," says Dr. Zadeh. The next steps for the group are to test the effectiveness of the combination therapy against other treatments in a randomized clinical trial. "We are encouraged by these results, but there is still a lot of work ahead of us," says Dr. Nassiri. "Our goal, as always, is to help our patients. That's what motivates us to continue this research."
10.1038/s41591-023-02347-y
Medicine
Dressmakers found to have needle-sharp 3D vision
Adrien Chopin et al. Dressmakers show enhanced stereoscopic vision, Scientific Reports (2017). DOI: 10.1038/s41598-017-03425-1 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-017-03425-1
https://medicalxpress.com/news/2017-06-dressmakers-needle-sharp-3d-vision.html
Abstract The ability to estimate the distance of objects from one’s self and from each other is fundamental to a variety of behaviours from grasping objects to navigating. The main cue to distance, stereopsis, relies on the slight offsets between the images derived from our left and right eyes, also termed disparities. Here we ask whether the precision of stereopsis varies with professional experience with precise manual tasks. We measured stereo-acuities of dressmakers and non-dressmakers for both absolute and relative disparities. We used a stereoscope and a computerized test removing monocular cues. We also measured vergence noise and bias using the Nonius line technique. We demonstrate that dressmakers’ stereoscopic acuities are better than those of non-dressmakers, for both absolute and relative disparities. In contrast, vergence noise and bias were comparable in the two groups. Two non-exclusive mechanisms may be at the source of the group difference we document: (i) self-selection or the fact that stereo-vision is functionally important to become a dressmaker, and (ii) plasticity, or the fact that training on demanding stereovision tasks improves stereo-acuity. Introduction Depth perception is an important human visual ability allowing people to interact easily with their environment. It relies substantially on the stereoscopic depth information, which itself is based on image binocular disparities. These disparities are caused by the different viewpoints of the two eyes. Monocular cues to depth (e.g. motion parallax, shadows, occlusion) also contribute to depth perception 1 . The functional role of stereopsis has been the subject of much debate. It has been theorized to guide the fine movements of the hands in reaching and grasping 2 , 3 , 4 . Indeed, object placement 5 , 6 and grasping 7 , 8 , 9 , 10 are more precise with binocular viewing than monocular viewing (at least in the centre of the visual field 11 ). However, most of the evidence is based on comparing binocular and monocular viewing conditions, which differ not only in the absence of stereopsis, but also in an absence of binocular vergence and summation, and a decreased field of view. It is known that decreasing the field of view affects reaching 12 . Yet, there remains a binocular advantage in object prehension even when controlling for the field of view 13 . There is also a growing body of confirmatory evidence, including studies showing that binocular cues to depth are crucial to prehension 14 , that binocular cues are given more weight than monocular cues when placing objects 15 , and that the binocular advantage in object placement correlates with stereo-acuity 5 . Previous studies 6 , 16 have shown that binocular vision is more efficient than monocular vision in delicate manual tasks like threading a needle. Some have argued that stereopsis can only be useful for slow motions requiring extreme precision 17 . However, past studies have not shown better stereo-acuities for professions based on slow motions requiring extreme precision, like surgeons 18 or dentists 19 , 20 , although stereoblind surgeons performed a simulated surgical task significantly worse than the stereo-normal ones 21 . Furthermore, stereo-acuity when entering a school of dentistry was not linked with later student grades 22 . In the current study, we tested stereoscopic acuities of a sample of dressmakers, and compared these acuities with those of a non-dressmaker group. 
Given the likely advantage given by stereopsis in fine eye-hand tasks, we reasoned that dressmakers may display better stereo-acuities. This could result either through self-selection or through the development of expertise given that their daily work involves constantly estimating small changes in visual depth. Indeed, stereoscopic vision is known to undergo some training-dependent plasticity. For example, stereo-perception can be ameliorated by training on a depth task with random dot stereograms 23 , 24 , 25 , or a depth task with local stereograms, involving edges, squares, lines, dots, or Gabor patches 24 , 26 , 27 , 28 , 29 . In addition, persons with strabismus and amblyopia, who often suffer from stereo-blindness, have been trained to recover stereoscopic vision with various rates of success (for a review, see ref. 4 ), using techniques such as patching 30 , monocular 31 or dichoptic perceptual learning 32 , 33 , monocular 34 or dichoptic video gaming 30 , 35 , 36 , and stereo-training 37 , 38 , 39 . However, it is not known whether manual actions, in particular, the kind of fine actions involved in sewing can increase stereoscopic depth perception, or whether having poor (or no) stereopsis would deter individuals from professions such as dressmaking. Although we have discussed stereoscopic acuity as if it were a unitary concept, it is well known that there are two different types of disparity: absolute disparity and relative disparity. An object’s absolute disparity is the difference between the angle subtended by the target at the two entrance pupils of the eyes and the angle of convergence. Absolute disparity is important for judging the depth distance of an object from one’s self (Fig. 1 ). The difference between the absolute disparities of two objects is called relative disparity (Fig. 1 ). Relative disparity is important for judging the depth distance between two (or more) objects. It is well known that human observers are better at judging relative disparity than at judging absolute disparity 40 . We and others have argued that the source of this difference is an absence of conscious readout for absolute disparities. We refer to this as the absolute disparity anomaly 41 . Despite this anomaly, humans should have a high sensitivity for absolute disparities, given that both vergence eye movements 42 , 43 , 44 , 45 and relative disparities are based on absolute disparities 41 , 46 , 47 . The plasticity studies discussed above were all conducted with relative disparities. Therefore, it is not clear whether absolute disparity acuity (or readout) can be improved by learning. On the one hand, in a recent study 41 , we have found very little evidence for rapid learning of absolute disparity sensitivity (or readout), suggesting it may be difficult to change. However, our participant sample was small (n = 6) and we tested learning over only 1200 trials. On the other hand, given the assumed link between absolute disparities and relative disparities, at least under the absolute disparity anomaly view, changes in relative disparity could go hand in hand with changes in absolute disparities. Therefore, we tested both absolute and relative disparities, in order to learn whether expertise in sewing might be associated with better relative or absolute disparity acuity (or readout), or both. Figure 1 Schematic illustration of absolute and relative disparities. Left and right panels show the viewpoints from left and right eyes respectively. 
The observer fixates on the phone (fixation indicated in red crosshairs). The absolute disparity of the author's cap is the sum of the distances indicated in blue, while the absolute disparity of the tower (Berkeley's campanile) is the sum of the distances indicated in green. The relative disparity between the cap and the tower is the sum of the distances indicated in yellow, and also the difference of the absolute disparities of the cap and of the tower. A more formal definition can be found in ref. 41 . Full size image Absolute disparity is a cue for vergence. However, it is widely believed that relative disparity acuity is considerably better than absolute disparity acuity, because absolute disparities are corrupted by vergence noise 2 , 42 , 48 . In a recent article 41 , we argued against that idea by showing that vergence noise was too small to explain the difference between absolute and relative disparity acuities. Rather, we suggested that vergence noise is not the limiting factor for absolute disparity measurements. Given that debate, however, we felt it was important to measure vergence ability. Furthermore, we were interested to learn whether dressmakers (who need to converge accurately) would show less vergence noise than non-dressmakers. For that purpose, we measured vergence noise and bias (over-convergence or divergence during fixation) for each participant with the Nonius-line technique. Results We compared absolute and relative disparity thresholds of dressmakers and non-dressmakers. In addition, we measured vergence thresholds under nearly identical conditions and separated the results into two values: fixation noise and fixation bias. All stimuli were briefly presented to minimize eye movements. Stereo-thresholds: Dressmakers are better than non-dressmakers A mixed-model ANOVA on log-thresholds with group as a between-subject factor and disparity task (absolute/relative) as a within-subject factor established a main effect of task (F(1,32) = 67; p < 10 −5 ) and, importantly, of group, with the dressmakers outperforming the non-dressmakers (F(1,32) = 6.2; p = 0.018; the interaction "disparity task × group" was not significant, p = 0.99). As illustrated in Fig. 2 , the dressmakers displayed better (i.e., lower) absolute (1504 vs. 2714 arcsec; T(32) = 1.78; p = 0.05) and better (i.e., lower) relative disparity acuities (241 vs. 345 arcsec; T(32) = 2.16; p = 0.025) than the non-dressmakers (one-sided post-hoc t-tests with Holm-Bonferroni-corrected p-values for the between-group differences). The effect sizes were relatively small (for the absolute disparity task: Cohen's d = 0.68; for the relative disparity condition: Cohen's d = 0.34), mostly because of the large range and variance of performances: dressmakers' median acuity was 43% better in the relative disparity condition and 80% better in the absolute disparity condition, when compared to non-dressmakers' acuity. Figure 2 Boxplots of log-transformed thresholds for discrimination of depth from absolute disparities only (left side) and from additional relative disparities (right side), for non-dressmaker and dressmaker groups. The median for each group is in red and the blue box defines the Q1 and Q3 quantiles for each group. The whiskers encompass the entire distribution. Each pink dot is a data point for a female participant and each blue dot is a data point for a male participant. Full size image Vergence noise and bias do not differ between dressmakers and non-dressmakers Neither vergence noise (log-thresholds, Fig.
3 ; t-test T(32) = 1.13, p = 0.27) nor vergence bias (Fig. 3 ; t-test T(32) = 1.64; p = 0.11) differed significantly between the dressmaker and non-dressmaker groups. Figure 3 Boxplots of log-transformed vergence thresholds ( a , noise) and vergence biases ( b ) from the Nonius-line method, for non-dressmaker and dressmaker groups. Medians are in red and the blue box defines the Q1 and Q3 quantiles. The whiskers encompass the entire distribution. Each pink dot is a data point for a female participant and each blue dot is a data point for a male participant. Full size image Discussion Dressmakers demonstrate better overall disparity acuity than non-dressmakers, for both absolute and relative disparities. There are two plausible and non-exclusive explanations for the dressmakers' superior stereo-acuity: selection and experience. First, it is possible that having high stereo-acuity is highly advantageous for becoming a professional dressmaker. We know that observers differ substantially in the precision of their stereo-acuity 49 . Thus, it could be that dressmaking selects for individuals endowed with superior stereo-acuity because it makes the dressmakers' task easier, highlighting an example of the functional importance of stereo-vision. A second plausible explanation is that dressmakers, who spend significant time manually sewing, become accustomed to situations in which they deal with precise visual details and in which depth matters: sewing requires the dressmaker to put a needle behind or in front of a thread or a cloth. In addition, sewing likely provides immediate and direct feedback, as errors are accompanied by negative reinforcement (pain from being pricked by the needle), which may aid perceptual learning. In other words, this could be a form of stereo-plasticity arising from a manual task. For that reason, it is important to note that our dressmakers were selected because they sewed by hand rather than by machine. Interestingly, this interpretation, if confirmed, would also imply that absolute disparity acuity (or readout) can be improved by experience. With the present cross-sectional design, it is not possible to know whether dressmakers' acuities are better because of learning by experience, or because of an implicit selection for better stereoscopic vision by the profession. The two possible origins of the effect could also be cumulative. In the future, a training study could be carried out to address this issue. A recent study demonstrates a complementary idea: namely, that studying representative arts, a profession involving mostly 2D images, is associated with poorer stereopsis 50 . That study likewise cannot disentangle a training effect (learning to ignore stereo 3D information in order to represent it in 2D) from a selection bias (impaired stereo 3D vision helping with 2D representation). Interestingly, we found no difference between groups in vergence precision and accuracy (vergence noise and bias). This was not a given, as sewing, which requires high precision, would certainly benefit from better vergence, and may also provide a form of vergence training. Finally, the lack of interaction between the type of disparity (absolute or relative) and the type of observer (dressmaker or non-dressmaker) is consistent with the view that relative disparities are calculated from absolute disparities 41 , 46 , 47 .
In this view, improvements in threshold may have originated at the level of absolute disparity encoding and then percolated to relative disparities. Note that improved absolute stereo-acuities could be expected to result in improved vergence, as absolute disparities contribute to vergence noise 42 , 43 , 44 , 45 . Yet, similar vergence was measured across groups. This is probably because there are sources of vergence noise other than absolute disparity noise. Among such sources are motor noise and noise in the estimation of the vergence angle from eye muscle tension. Those sources of noise may constitute greater limiting factors for vergence fixation than the absolute disparity noise. We acknowledge that our two groups differed in gender balance, with the dressmaker group being predominantly female and the non-dressmaker group predominantly male. To further assess a potential gender difference in our sample, we indicate on each figure whether each participant is female or male (color-coded). No clear pattern in favour of a gender bias appears: around half of the women in the non-dressmaker group, and half of the men in the dressmaker group, fall on either side of the median line (Figs 2 and 3 ). If anything, male participants in the dressmaker group had slightly better stereo-acuity. In addition, and importantly, several large-scale studies have investigated gender differences in stereo-acuity and reported none, both for standardized clinical tests 51 and for psychophysical measures 52 . Therefore, gender is unlikely to explain the effect we document here. Although all of our participants successfully passed the Randot and Butterfly clinical stereo-tests with an acuity better than 70 arcsec, we were unable to measure relative-disparity stereo-acuity better than 3000 arcsec for four of them (out of 34) with our psychophysical method, which had a greater range and sensitivity than either clinical test. This suggests that clinical measures may still present monocular cues 4 . While clinical tests can be performed quickly, allowing large-scale screening, they are unsuitable for detecting group differences of the size we document here. There is a clear need for new, computerized clinical tests of stereopsis that contain no monocular cues, and we are encouraged that some, such as "Asteroid", are currently being developed (J. Read, personal communication). To conclude, we were interested in the role of expertise in the perception of stereoscopic depth. We have shown that dressmakers have better stereoscopic acuity than non-dressmakers for both absolute and relative disparities, and no difference in their vergence abilities. The findings are compatible with two non-exclusive possibilities: either stereopsis has a clear functional importance (here, for success in dressmaking), or experience with fine manual tasks can influence the precision of the stereoscopic system. Only a training study could disentangle the two options, one of which would open a door to new ways of training stereoscopic vision. Methods The stimuli, methods and data have been described in detail elsewhere 41 , 53 ; therefore, we provide only a brief overview below. Observers Thirteen professional dressmakers (11 female, 2 male; age range: 21–34 years, mean: 27.6) and twenty-one non-dressmakers (4 female, 17 male; age range: 19–35 years, mean: 24.1) participated in the study.
Only dressmakers with substantial experience of manual sewing (rather than machine sewing; minimum 2 h per week over the last 3 years) were included in the study. None of the observers had ever participated in visual studies. Crossed stereoacuity was better than or equal to 70 arcsec on two clinical stereo-tests (Randot circle test and Butterfly circle test) for all observers. All passed the random dot stereogram part of each test. Following recommendations in ref. 54 , we report exclusion of participants at the first stage: no participant was excluded at the clinical stereo-test stage. We collected informed consent from all participants, and all received monetary compensation for their participation. Both groups were fully naïve about the computerized tasks in our study. The study was carried out in accordance with the Declaration of Helsinki and was approved by UNIGE's Ethics Committee. Stereo-task stimuli and procedures The two stereo tasks that we used to collect thresholds for absolute and relative disparities used nearly identical stimuli (vertical white lines 20-arcmin long and 26-arcsec wide on a black background). We presented stereoscopic stimuli appearing in depth using a stereoscope in a darkened room. Distance to the screen was 2.1 m, and we used a subpixel presentation technique so that binocular disparities as small as 2.6 arcsec could be reliably presented on screen (for full details, see ref. 41 ). A trial started when the fixation point was presented. Observers had to fixate it and maintain precise vergence. Vergence feedback was achieved through the perceived horizontal alignment of Nonius lines around the fixation. After aligning the Nonius lines, the observer pressed a key, which initiated the disappearance of all items on the screen and replaced them with a 10-ms mask made of uniform uncorrelated white noise. The two vertical lines of the stereoacuity stimulus were then presented for 200 ms. Vergence eye movements were precluded by the short presentation time. To minimize the effect of monocular cues, a horizontal jitter in the position of both lines was added. A black screen then replaced the stimulus. Absolute disparity task Participants were shown the two vertical lines at the same depth. We measured absolute disparity thresholds using the method of single stimuli with an implicit reference 55 , 56 , 57 . For each trial, observers had to decide whether the depth between the (extinguished) fixation point and the lines was smaller or larger than the mean of the same depth over all previous trials seen in the block. The method allowed us to minimize differences in memory load inherent to the task. We estimated that the largest stereo-threshold that we could reliably measure was 3000 arcsec, using a Monte Carlo experiment simulating an ideal observer (2000 repetitions). Larger thresholds could be measured but were most likely underestimated. Relative disparity task Participants were shown two vertical lines identical to those in the absolute disparity task. However, when measuring relative disparity thresholds, each line was presented at a different depth, and observers responded about the depth distance between the lines. Observers had to decide whether the depth difference between the two lines was smaller or larger than the mean of the same difference over all trials seen in the block. Vergence measures We measured vergence using the Nonius-line method described in ref. 41 .
In short, participants were presented with stimuli as close to identical to the disparity-task stimuli as possible. After the initial fixation Nonius lines, whose goal was to ensure the best vergence fixation, another set of Nonius lines was dichoptically flashed with some horizontal jitter. The lines were also shifted horizontally from each other, and the shift was varied with a staircase procedure. The task was to judge whether the line above (extinguished) fixation was flashed to the left or to the right of the line below. Vergence data were separated into two components: the noise (which reflects the variability of vergence) and the bias (which reflects the accuracy of vergence). Statistical Analyses All statistical tests were conducted at criterion α = 0.05, with n = 34. In the absolute disparity task, participants ran one block with the reference at a 5-arcmin disparity and another with the reference at 10 arcmin. The blocks did not differ significantly (mixed ANOVA model with absolute disparity condition as a within-subject factor, and group as another factor: F(1,32) = 0.47; p = 0.50; and on log scale: F(1,32) = 0.63; p = 0.43). Therefore, we merged them for the rest of the analyses. When studying acuities in the absolute and relative disparity tasks, Lilliefors tests showed that the samples were not normally distributed (absolute disparity condition: p = 0.012 for the control group and p = 0.0021 for the dressmaker group; relative disparity condition: p = 0.0015 for the control group and p = 0.001 for the dressmaker group). However, the log-transformed distributions could not be shown to diverge from normality, using Kolmogorov-Smirnov or Lilliefors tests (all p > 0.22). A Cochran test on the log-transformed thresholds demonstrated that the assumption of homoscedasticity was met for all samples (C = 0.31; p = 0.75); therefore, we used log-transformed stereo-thresholds. For the stereo-threshold analysis, we used a modified Thompson tau procedure (median as central value, alpha = 0.01), which identified two observers as outliers (one in the non-dressmaker group/absolute disparity condition and one in the dressmaker group/relative disparity condition). Their values were replaced with the group median in each condition. Vergence-noise estimates for the non-dressmaker group were not normally distributed (Lilliefors test, p = 0.0011). Log-transformed thresholds were not different from Gaussian distributions (using both Kolmogorov-Smirnov and Lilliefors tests, all p > 0.11); therefore, we used log-transformed data for the vergence noise analysis. Outlier detection (modified Thompson tau) also detected two outliers (one in each group). Their values were replaced with the group median in each condition. Distributions of vergence biases were not different from Gaussian distributions (using both Kolmogorov-Smirnov and Lilliefors tests, all p > 0.50), and therefore we used raw data for the analysis. Outlier detection (modified Thompson tau) also detected four outliers (two in each group). Their values were replaced with the group median in each condition. Data Availability The dataset is available online on the Figshare public repository.
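To make the absolute and relative disparity definitions used throughout the paper concrete, here is a small worked sketch in Python (not part of the original methods) using the standard small-angle approximation; the 63 mm interocular distance is an assumed typical value, while 2.1 m is the study's viewing distance.

import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0   # ~206,265 arcsec per radian

def absolute_disparity(d, f, a=0.063):
    # Small-angle absolute disparity (arcsec) of an object at distance d (m)
    # while fixating at distance f (m), for interocular separation a (m)
    return a * (1.0 / d - 1.0 / f) * ARCSEC_PER_RAD

def relative_disparity(d1, d2, a=0.063):
    # Relative disparity = difference of the two absolute disparities
    return a * (1.0 / d1 - 1.0 / d2) * ARCSEC_PER_RAD

# A 1 mm depth step at the study's 2.1 m viewing distance:
print(relative_disparity(2.1, 2.101))   # ~2.9 arcsec, close to the 2.6 arcsec
                                        # resolution of the display method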
Haute couture can be credited for enhancing more than catwalks and red carpets. New research from UC Berkeley suggests that the 3-D or "stereoscopic" vision of dressmakers is as sharp as their needles. Stereoscopic vision is the brain's ability to decode the flat 2-D optical information received by both eyes to give us the depth of perception needed to thread a needle, catch a ball, park a car and generally navigate a 3-D world. Using computerized perceptual tasks, researchers from UC Berkeley and the University of Geneva, Switzerland, tested the stereoscopic vision of dressmakers and other professionals, and found dressmakers to be the most eagle-eyed. The results, published in the June 13 issue of the journal Scientific Reports, show dressmakers to be 80 percent more accurate than non-dressmakers at calculating the distance between themselves and the objects they were looking at, and 43 percent better at estimating the distance between objects. "We found dressmakers have superior stereovision, perhaps because of the direct feedback involved with fine needlework," said study lead author Adrien Chopin, a postdoctoral researcher in visual neuroscience at UC Berkeley. What researchers are still determining is whether dressmaking sharpens stereoscopic vision, or whether dressmakers are drawn to the trade because of their visual stereo-acuity, Chopin said. To experience what it means to have stereoscopic vision, focus on a visual target. Now blink one eye while still staring at your target. Then blink the other eye. The background should appear to shift position. With stereoscopic vision, the brain's visual cortex merges the 2-D viewpoints of each eye into one 3-D image. It has generally been assumed that surgeons, dentists and other medical professionals who perform precise manual procedures would have superior stereovision. But previous studies have shown this not to be the case. That spurred Chopin to investigate which professions would produce or attract people with superior stereovision, and led him to dressmakers. A better understanding of dressmakers' stereoscopic superpowers will inform ongoing efforts to train people with visual impairments such as amblyopia or "lazy eye" to strengthen their stereoscopic vision, Chopin said. In addition to helping people with sight disorders, improved stereoscopic vision may be key to the success of military fighters, athletes and other occupations that require keen hand-eye coordination. An estimated 10 percent of people suffer from some form of stereoscopic impairment, and 5 percent suffer from full stereo blindness, Chopin said. For example, the 17th-century Dutch painter Rembrandt, whose self-portraits occasionally showed him with one lazy eye, is thought to have suffered from stereo blindness, rendering him with flat vision. Some vision scientists have posited that painters tend to have poorer stereovision, which gives them an advantage working in 2-D. For the study, participants viewed objects on a computer screen through a stereoscope and judged the distances between objects, and between themselves and the objects. Researchers recorded their visual precision and found that, overall, dressmakers performed markedly better than their non-dressmaker counterparts in visual acuity.
10.1038/s41598-017-03425-1
Physics
Record-breaking laser link could provide test of Einstein's theory
Benjamin P. Dix-Matthews et al. Point-to-point stabilized optical frequency transfer with active optics, Nature Communications (2021). DOI: 10.1038/s41467-020-20591-5 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-020-20591-5
https://phys.org/news/2021-01-record-breaking-laser-link-einstein-theory.html
Abstract Timescale comparison between optical atomic clocks over ground-to-space and terrestrial free-space laser links will have enormous benefits for fundamental and applied sciences. However, atmospheric turbulence creates phase noise and beam wander that degrade the measurement precision. Here we report on phase-stabilized optical frequency transfer over a 265 m horizontal point-to-point free-space link between optical terminals with active tip-tilt mirrors to suppress beam wander, in a compact, human-portable set-up. A phase-stabilized 715 m underground optical fiber link between the two terminals is used to measure the performance of the free-space link. The active optical terminals enable continuous, cycle-slip free, coherent transmission over periods longer than an hour. In this work, we achieve residual instabilities of 2.7 × 10 −6 rad 2 Hz −1 at 1 Hz in phase, and 1.6 × 10 −19 at 40 s of integration in fractional frequency; this performance surpasses the best optical atomic clocks, ensuring clock-limited frequency comparison over turbulent free-space links. Introduction Modern optical atomic clocks have the potential to revolutionize high-precision measurements in fundamental and applied sciences 1 , 2 , 3 , 4 , 5 , 6 , 7 . The ability to realize remote timescale comparison in situations where fiber links are impractical or impossible, specifically, between ground- and space-based optical atomic clocks 8 , 9 , 10 , 11 , 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , will enable significant advances in fundamental physics and practical applications including tests of the variability of fundamental constants 23 , 24 , general relativity 25 , 26 , searches for dark matter 27 , geodesy 28 , 29 , 30 , 31 , 32 , 33 , 34 , and global navigation satellite systems 35 among others 36 , 37 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 , 46 . These efforts build on optical timing links developed for timescale comparison between microwave atomic clocks 47 , 48 , 49 , and efforts are underway to develop optical clocks that can be deployed on the International Space Station 50 and on dedicated spacecraft 51 . Similarly, timescale comparisons between mobile terrestrial optical clocks 1 , 52 , 53 , 54 , 55 , where one or more mobile clocks are able to be deployed and moved over an area of interest, enable ground tests of general relativity and local geopotential measurements for research in geophysics, environmental monitoring, surveying, and resource exploration. Comparison of both ground- and space-based clocks, and mobile terrestrial clocks, requires frequency transfer over free-space optical links. Just as with timescale comparison over optical fiber links, free-space frequency transfer should have residual instabilities better than those of the optical clocks. However, atmospheric turbulence induces much greater phase noise than a comparable length of fiber 12 , 19 , 56 , 57 . In addition, free-space links through the turbulent atmosphere must also overcome periodic deep fades of the signal amplitude due to beam wander and scintillation. When the size of the optical beam is smaller than the Fried scale of the atmospheric turbulence, the centroid of the beam can wander off the detector, while in the case where the beam is larger than the Fried scale, destructive interference within the beam (speckle) can result in loss of signal (scintillation) and so loss of timescale synchronization 19 , 58 , 59 . 
These deep fades can occur 10s to 100s of times per second for vertical links between the ground and space, and also on horizontal links on the order of 10 km 12 , 17 . One method to overcome deep fades of the signal is to transmit a series of optical pulses from an optical frequency comb and compare them with another optical frequency comb at the remote site 21 . While deep fades will result in the loss of some pulses, the time and phase information can be reconstructed from the remaining pulses. Another method to overcome deep fades is to stabilize the spatial noise caused by atmospheric turbulence by active correction of the emitted and received wave front. In general, tip-tilt correction is sufficient when using apertures that are small compared to the Fried scale as beam wander will dominate the deep fades. For large apertures, the effects of speckle scintillation increase and higher-order corrections using adaptive optics may be necessary. Tip-tilt stabilization of beam wander for comparison of atomic clocks has previously been demonstrated over 12 km with 50 mm scale optics 17 and 18 km with larger 250 mm telescopes 8 . A further practical concern for the deployment of free-space links is the ability of the system to acquire and track a moving object 10 , 60 . In that case, tip-tilt capability is mandatory, and additionally such a system must be robust while also having as low a size, weight, and power as possible for ease of deployment in spacecraft, airborne relay terminals, or mobile ground segments. In this work, we describe phase-stabilized optical frequency transfer via a 265 m point-to-point free-space link between two portable optical terminals. Both terminals have 50 mm apertures and utilize tip-tilt active optics to enable link acquisition and continuous atmospheric spatial noise suppression. The terminals are human-portable and ruggedized for daily field deployment to demonstrate the suitability for remote optical timescale comparison. The performance of the phase stabilization system was determined using a separate 715 m, phase-stabilized optical fiber link between the two terminals. The phase-stabilized free-space optical transfer exhibits an 80 dB improvement in phase noise at 1 Hz, down to 2.7 × 10 −6 rad 2 Hz −1 , compared to the unstabilized optical transmission. The active spatial stabilization used at each terminal is effective at suppressing beam wander caused by the atmospheric turbulence, allowing continuous, cycle-slip and deep-fade free, coherent transmission over periods longer than an hour. The resulting fractional-frequency stability of the phase-stabilized optical transfer reaches 1.6 × 10 −19 with 40 s of integration. At timescales beyond 100 s, the fractional-frequency stability flattens, which we determine to be caused by unstabilized temperature fluctuations in the uncompensated short fibers in the phase stabilization system. Results Coherent optical stabilization system Figure 1 shows the architecture of the phase stabilization systems, as well as the free-space and fiber links used to compare the phase noise performance. Fig. 1: Point-to-point phase-stabilized optical frequency transfer between buildings. a Block diagram of the experimental link. Two identical phase stabilization systems are implemented across the CNES campus. Both systems have their transmitter located in the Auger building (local site), and both receivers are located in the Lagrange building (remote site). 
One system transmits the optical signal over a 265 m free-space path between the buildings using tip-tilt active optics terminals while the other transmits via 715 m of optical fiber. The relative stability of the two optical signals is then measured at the remote site. QPD, quad-photodetector; Pol, polarization controller; PD, photodetector; PLL, phase-locked loop; AOM, acousto-optic modulator; FM, Faraday mirror; EDFA, erbium-doped fiber amplifier; Mix, radio frequency electronic mixer. Satellite image adapted from Google (Map data: Google, Maxar Technologies). b Active optical terminal located at the local site. c Transmitter portion of the phase stabilization system located at the local site. d Receiver portion of the phase stabilization system located at the remote site. Full size image A 15 dBm optical signal from a 1550 nm NKT Photonics X15 Laser was split and passed into two independent phase stabilization systems, detailed in “Methods”. One of these phase-stabilized systems operated over the free-space link, and was used to suppress the phase noise resulting from atmospheric turbulence. The second phase stabilization system operated over an optical fiber that ran underground between the local and remote sites, and was used to measure the performance of the free-space transmission. Each side of the free-space link also incorporated tip-tilt active optical terminals (detailed in “Methods”) that were used to suppress the received optical intensity fluctuations and deep fades caused by beam wander due to atmospheric turbulence. The remote terminal additionally had a bi-directional optical amplifier that amplified the incoming optical signal (typically by ~13 dB) before passing it to the phase stabilization system, and amplified the reflected portion of the signal for transmission back over the link. The free-space link spanned 265 m between two buildings at the Centre National d’Études Spatiales (CNES) campus in Toulouse, as shown in Fig. 1 . The link passed over grass, sparse trees, and roads, and was operated during late winter over the course of 2 weeks. The most favorable conditions were when the sky was overcast and wind speed was low. Fully coherent transfer over a true point-to-point link Figure 2 shows the measurements for the fiber noise floor (gray) and phase stabilization off (red) cases made with a Microsemi 3120A Phase Noise Test Probe. Phase noise measurements for the phase-stabilized cases with (orange) and without (blue) tip-tilt were obtained using an Ettus X300 Software Defined Radio operating as a continuous IQ demodulator, and are also shown in Fig. 2 . Fig. 2: Phase and frequency stability of the optical transmission measured at the remote site. Red trace, free-space link phase stabilization off, tip-tilt active optics off (data from Microsemi); blue trace, free-space link phase stabilization on, tip-tilt active optics off (data from Ettus); orange trace, free-space link phase stabilization on, tip-tilt active optics on (data from Ettus); and gray trace, system noise floor with both phase stabilization systems transmitting over parallel optical fiber (data from Microsemi). a Power spectral density of the phase noise ( S ϕ ( f )) after transmission. b Fractional frequency stability presented as modified Allan deviation ( σ y ( τ )). 
The dashed traces are calculated from raw data; the solid traces are calculated from data with quadratic drift removed; and the error bars represent a standard fractional frequency measurement confidence interval set at \(\pm \sigma_{y}(\tau)/\sqrt{N}\), where N is the number of phase measurements. On both plots, the black dashed lines show key gradients of interest. Further discussion of the measurement equipment architecture and choice may be found in Supplementary Note 1 . The phase noise power spectral densities (PSDs) found using the Ettus X300 show good agreement with the Microsemi 3120A within overlapping frequency ranges, as shown in Supplementary Fig. 1 . When the phase stabilization and tip-tilt systems are off, the measured noise is expected to be dominated by atmospheric turbulence. In theory 61 , the corresponding PSD is expected to decrease as f −8/3 for low frequencies, before dropping sharply as f −17/3 due to the averaging effect of the optical aperture. The slopes of our measured PSD are compatible with that model. The transition frequency between the two regimes is given in ref. 61 by f c = 0.3 V / D , where V is the transverse wind speed and D the aperture diameter. This is not confirmed in our data, as wind speeds were no more than a few tens of m/s and our beam diameter was about 34 mm. The corresponding theoretical transition frequency is significantly lower than the ≈400 Hz visible in Fig. 2 . We attribute that discrepancy mainly to the fact that the theoretical calculations in ref. 61 were done for a plane wave impinging on a circular aperture, while our beam is Gaussian and smaller than the receiving aperture, and we note that discrepancies between the theoretical model and experimental measurements have been reported previously (see e.g. Tab. I in ref. 57 ). When the stabilization system is turned on, we see around eight orders of magnitude reduction in the phase noise PSD at 1 Hz, down to 2.7 × 10 −6 rad 2 Hz −1 . Having the active tip-tilt terminal engaged appears to offer a slight improvement in phase stability. At frequencies above roughly 2 kHz, the phase noise performance is limited by the residual phase noise of the laser (this is discussed in Supplementary Note 2 ), which also affects the unstabilized measurement above ≈10 kHz. At lower frequencies (roughly 200 Hz to 2000 Hz), we are most likely limited by the noise floor resulting from the operation of our compensation system when applied to the atmospheric phase noise, as shown in detail in Supplementary Note 2 . The long-term fractional frequency stability of the stabilized signals is shown in Fig. 2 b in terms of the modified Allan deviation (MDEV). This provides an alternative tool for assessing the performance of the stabilized optical transfer, with a particular focus on stability at longer time scales. The MDEV, calculated using the same Ettus X300 data, is shown in Fig. 2 b both in its raw form (dashed traces) and after removal of a quadratic fit in phase (solid traces). The linear and quadratic coefficients were 0.15 rad s −1 and 6.1 × 10 −7 rad s −2 for the tip-tilt-on data (and 0.14 rad s −1 and −1.3 × 10 −7 rad s −2 for tip-tilt off). We attribute the linear drift to a known offset (measured as 0.141 rad s −1 ) produced in the Ettus. The residual linear drift after accounting for the Ettus is <9 mrad s −1 and results in a systematic offset of <1.5 mHz (or a fractional offset of <7.5 × 10 −18 ).
This systematic offset does not impact the transfer stability; however, it would need to be taken into account when calibrating a true optical clock comparison. We further conclude that any residual drift is due to thermally induced variations in the differential optical length of the uncompensated short (~60 cm) fibers between the laser and the first splitters on the transmitter side, and between the last splitters and the photodiode on the receiver side (refer to Fig. 1 ). We expect that modest temperature control can decrease the quadratic effect by about an order of magnitude, hence the drift-removed stability (solid lines) is likely to reflect the ultimate potential of our method. The MDEV averages down as a combination of τ −3/2 and τ −1 power laws until an integration time of around 20 s, indicating that the dominant noise at short timescales is white phase noise and flicker phase noise, in agreement with the phase noise PSD. The optimum stability reached when the active tip-tilt control system was turned off is 3.0 × 10 −19 at 40 s of integration time. When the active tip-tilt terminal is engaged, a slight improvement in stability is seen for integration times longer than 0.02 s (consistent with the phase noise PSD), and the transfer is made more robust. This results in a fractional frequency stability of less than 7 × 10 −19 for integration times longer than 10 s, with an optimum stability of 1.6 × 10 −19 achieved at 40 s of integration. This is a factor-of-two improvement over the case without active tip-tilt control. At longer timescales, the stability does not integrate down further. This is likely due to long-term residual temperature fluctuations at the local and remote sites affecting the uncompensated parts of the two links, as discussed above, and as observed in ref. 12 . With better thermal regulation, the fractional frequency stability is expected to continue averaging down to a lower limit. The minimum absolute fiber-to-fiber power loss achieved for the one-way transmission was ~12 dB, though this would quickly degrade with poor alignment. The two free-space beam splitters in the optical terminal account for 6–7 dB of the loss, and the remainder is attributed to coupling losses, imperfect alignment, and atmospheric effects. During operation, the relative power of the optical signal received by the remote site was recorded in order to measure the atmospherically induced fluctuations encountered during a one-way pass of the free-space link. Immediately after the active terminal, a fiber splitter was used to send a small portion of the received signal to a fiber-coupled photodetector with a linear response to the received optical power. The response of this detector was then digitized at 4 kHz. Figure 3 shows the frequency-domain power of the received power fluctuations. Without the active tip-tilt terminal engaged, the power fluctuations drop as roughly f −2 at low frequency and f −3/2 beyond a few Hz. The tip-tilt active optical terminal improves the stability at frequencies below 4 Hz, with over two orders of magnitude reduction in power fluctuations at 0.1 Hz. The tip-tilt servo bump at ~7 Hz is clearly visible. Beyond that bump, there is not a significant difference between having the tip-tilt compensation on or off, as expected. It is interesting to note that the ~4 Hz crossing point roughly matches the frequency at which the phase noise PSD in Fig. 2 starts improving for tip-tilt on (with respect to off), confirming that the phase noise reduction is related to the reduction in power fluctuations.
This also implies that better performance of the active optics system (and hence lower power fluctuations) is likely to lead to lower phase noise. The tip-tilt system was based on a commercially available unit, and the low bandwidth of the system is due to the low gain setting necessary to mitigate some artifacts in the control system firmware (discussed further in “Methods”). Fig. 3: Normalized power ( P /〈 P 〉) of the free-space optical signal received at the remote site over 3 min. Blue trace, tip-tilt active optics off; and orange trace, tip-tilt active optics on. a Power spectral density of the received power. b Time series of received power with tip-tilt active optics off. c Time series of received power with tip-tilt active optics on. d Histogram of the normalized received power values. The time-domain plots and the histogram provide additional representations of the effect of the active terminal. Without tip-tilt, the optical power fluctuates significantly, and at around 100 s there is a step change in the received power. This was likely due to mechanical movement of the optical terminal, such as mechanisms in the telescope mount suddenly slipping. This step in power can also be seen in the bi-modal distribution of the histogram. When the tip-tilt actuation was activated, step behavior like this was not observed. Taking the bi-modal feature due to movement of the optical terminal into account, the histograms for both the tip-tilt on and off cases exhibit a log-normal distribution, as is expected of power fluctuations caused by turbulence-induced beam wander. The case with the tip-tilt system engaged shows a much narrower distribution in received power, indicating more constant optical power levels delivered to the phase stabilization system. This indicates that the tip-tilt active optics are effective at suppressing power fluctuations caused by atmospheric turbulence or movement of the terminals. For clarity, the optical power time series traces shown in Fig. 3 are normalized to their own average power level. With the tip-tilt system on, the average optical power received at the remote site was 2.4 times higher than the average power level when the tip-tilt system was off. Critically, with the tip-tilt system on, the optical power does not make significant excursions into lower power values, greatly reducing the chance of a cycle slip in the phase stabilization system. Discussion The transfer of stable optical frequency reference signals over free space is of particular interest to applications involving ad-hoc transmissions between mobile sites. A specific example of interest is chronometric geodesy 28 , 29 , 30 , 31 , 32 , 33 , 34 , where frequency comparisons with a mobile optical atomic clock at different positions over the region of interest provide a direct measurement of the gravitational red-shift caused by changes in gravitational field and height. The requirements are that the transfer system provide sufficiently stable optical transmission so that the uncertainty of the frequency comparison is limited by the uncertainty of the optical atomic clocks themselves, be physically robust and portable, and be light and small enough to allow for easy and rapid set-up of the terminals in different locations. The stabilities of the best lab-based optical atomic clocks are approaching 10 −18 for averaging times on the order of 10 3 s 2 , 28 , 62 , 63 , 64 , 65 . Bothwell et al.
63 achieve a stability of \(4.8\times 10^{-17}/\sqrt{\tau}\) (represented by the τ −1/2 gradient line in Fig. 2 ), averaging down to a final systematic-uncertainty-dominated stability of 2 × 10 −18 within 10 min. The stability demonstrated using the system described in this paper surpasses this by more than an order of magnitude, ensuring that frequency comparison between optical clocks over a turbulent free-space channel such as this will not be limited by the performance of the phase-stabilized link. Our system is also designed to be physically robust and portable (as shown in Fig. 1 ). The optical terminals are securely built within a steel enclosure that provides protection during transport and while the link is operational. Each terminal has a mass of 14.5 kg, and is 49 cm wide, 24 cm deep, and 18 cm high. The optical fiber-based phase stabilization systems are built within 19" rack-mount steel and aluminum enclosures. The transmitter module is 2U high and 34 cm deep, with a mass of 11.6 kg, while the receiver module is 1U high and 25 cm deep, with a mass of 5.9 kg. It should be noted that there is scope for significant reduction in size and weight through the use of custom-engineered components. The robustness of the terminals was demonstrated by the fact that they were successfully shipped, via conventional couriers, from Perth, Australia to Toulouse, France without damage or misalignment of the optics. One of the terminals was installed in a telescope dome for the duration of the 2-week trial period, while the other terminal was set up on an open rooftop and was removed and reset every day. A co-aligned visible guide laser and a simple mount-scanning algorithm were used to set the link alignment each morning, and initial alignment could be completed within ~15 min. Throughout the day, the link alignment would occasionally need to be re-optimized. We have since commenced the development of a co-aligned camera with a machine vision imaging system to automate link acquisition to under a minute. The long-term operation of the system was limited by the performance of the tip-tilt active optics due to the relatively low sensitivity of the quadrant photodetector (QPD), as discussed further in “Methods”. The lower limit of the QPD’s operational detected power range is −10 dBm (0.1 mW), whereas the phase stabilization system is capable of operating with ~−54 dBm (4.0 nW) of light returning to the transmitter unit 66 . Thus, large drops in link power would first affect the QPD, causing the tip-tilt system to lose the link alignment, and resulting in a loss of signal and cycle slips in the phase stabilization system. Additionally, the tip-tilt system lacked the ability to consistently recover from losses of link alignment. This resulted in the link being able to consistently achieve cycle-slip- and deep-fade-free operation for time periods on the order of 3 × 10 3 s, before the tip-tilt system lost link alignment. For the system to be able to operate over longer periods of time, the tip-tilt system has to be improved to operate with less stringent power requirements and to be able to effectively re-acquire the link. There are additional challenges associated with extending the link beyond 265 m, including more severe atmospheric effects and increased power losses. The more severe atmospheric effects will require higher tip-tilt suppression bandwidth and steering range.
This will involve improving the feedback transfer function to deal more effectively with the frequency resonances of the tip-tilt mirror, or replacing the mirror and actuators with alternatives that have higher resonances. The decreased optical power associated with longer links will exacerbate the issues caused by the low sensitivity of the QPD. Our plan to overcome this is to replace the QPD with a more sensitive equivalent and to increase the power of the transmitted beam with a high-power amplifier. This should allow operation over longer links without having to significantly increase the complexity of the active optical terminals. An alternative method of dealing with the greater power losses associated with longer links would be to increase the size of the apertures. This, however, introduces other difficulties. If the size of the apertures becomes much larger than the Fried parameter, then higher-order spatial effects of the atmosphere will start to become significant and lead to speckle and scintillation 19 . Complex and expensive adaptive optics would be required to suppress these higher-order effects. Simulations, similar to those published by Robert et al. 19 , indicate that tip-tilt correction is sufficient to keep power fluctuations low for links to a stratospheric platform at a 50 km distance, provided the apertures remain below around 10 cm. This reduces the required complexity of the optics, but at the cost of higher absolute link loss. While the focus of our research has been terrestrial links between mobile optical atomic clocks, it is worth noting that the compact nature of the demonstrated system may prove useful for future satellite-to-satellite timing links. The significant size, weight, and power constraints associated with satellite instrumentation lend themselves to the simple system demonstrated in this paper. Additionally, the reduction in atmospheric effects associated with satellite-to-satellite transmission may reduce the challenges associated with the active optics. There are, however, many other challenges associated with creating a coherent satellite-to-satellite link that have not been captured within the experiment described in this paper. For example, the phase stabilization technique will work only with reduced bandwidth due to the longer transmission time, and will be affected by large Doppler shifts. While solutions exist (e.g. corrections in post-analysis), a significant amount of system development and experimentation would be required before translating the system to space-based links. The long-term goal of our collaboration is to work toward a practical system for performing high-precision clock comparisons between mobile atomic clocks for the purposes of chronometric geodesy. This application requires the use of ad-hoc free-space links between mobile optical atomic clocks separated by up to 100 km, without necessarily having line of sight. For this extreme application, beyond having to overcome the power and atmospheric challenges mentioned above, an active relay off an airborne platform would be required. The results of this paper represent the first steps toward this ambitious long-term goal. Methods Phase stabilization system Two phase stabilization systems, with very similar architectures, are used to stabilize the free-space and fiber paths. For simplicity, we assume negligible propagation delay and consider only link noise in this section; these assumptions are revisited in Supplementary Note 2 .
Equivalent variables relating to the free-space and fiber stabilization systems are identified by superscripts of fs and fb, respectively. The stabilization systems are based on the imbalanced Michelson interferometer design developed by Ma et al. 67 , 68 , where the long arm of the interferometer is sent over the link and the short arm is reflected by a Faraday mirror to provide an optical frequency reference. The frequency of the outgoing optical signal is shifted by a transmission acousto-optic modulator (AOM) with a nominal frequency \(\nu_{\rm tr}\) that may be varied by \(\Delta\nu_{\rm tr}\). The shifted optical signal is then sent over the link. In the free-space system, the signal is passed through the active terminal described below and launched over the free-space link. The signal then reaches the remote site after picking up link phase noise caused mainly by atmospheric turbulence ( δ ν fs ). This optical signal is received by a second active optical terminal and passed through a bi-directional optical amplifier to offset the signal power lost during transmission. In the fiber system, the signal is passed through an underground fiber running between the two sites. The transmitted signal picks up link noise due to mechanical and thermal fluctuations along this fiber ( δ ν fb ). At the remote site, each stabilization system passes its received signal through an anti-reflection AOM ( ν ar ), before outputting half the signal to the end user ( ν out ). The output at the remote site of the free-space stabilization system is given by $$\nu_{\rm out}^{\rm fs}=\nu_{\rm L}+\nu_{\rm tr}^{\rm fs}+\Delta\nu_{\rm tr}^{\rm fs}+\nu_{\rm ar}^{\rm fs}+\delta\nu^{\rm fs}\ ,$$ (1) where ν L is the laser frequency, while the output of the fiber stabilization system is given by $$\nu_{\rm out}^{\rm fb}=\nu_{\rm L}+\nu_{\rm tr}^{\rm fb}+\Delta\nu_{\rm tr}^{\rm fb}+\nu_{\rm ar}^{\rm fb}+\delta\nu^{\rm fb}\ .$$ (2) The two signals are optically beat together at a photodetector and low-pass filtered to produce a down-converted signal, $$\nu_{\rm meas}=\nu_{\rm out}^{\rm fs}-\nu_{\rm out}^{\rm fb},$$ (3) used to measure the relative stability of the optical signals reaching the remote site through free space and fiber. The residual phase noise from the free-space transmission dominates the residual phase noise from the fiber transmission over most of the Fourier frequency range. The AOM frequencies were chosen so that the measured beat signal ( ν meas ) was at a nominal frequency of 1 MHz. An external 10 MHz signal from a hydrogen maser was shared between the two sites via radio frequency (RF) over fiber and provided a common reference for the transmitter oscillators and the remote site measurement equipment. As the frequency of the RF reference is seven orders of magnitude lower than that of the optical signal, the frequency stability of the RF reference will not significantly degrade the phase measurement taken by the remote site measurement equipment. The other half of each signal reaching the remote site is reflected by a Faraday mirror back through the anti-reflection AOM and back over the free-space or fiber link. For the free-space link, the return signal also passes back through the bi-directional optical amplifier. At the local site, the signals returning from the fiber and free-space links pass back through their respective transmission AOMs.
Each system then performs a self-heterodyne measurement by beating the returned signal against the short arm of the Michelson interferometer on a photodetector. The final electrical beat signal, $$\nu_{\rm beat}=2\nu_{\rm tr}+2\Delta\nu_{\rm tr}+2\nu_{\rm ar}+2\delta\nu,$$ (4) now contains information about the phase noise picked up during transmission over the link. This signal is then mixed with a local oscillator of frequency \(2\nu_{\rm tr}+2\nu_{\rm ar}\) and low-pass filtered in order to extract a DC error signal, $$\nu_{\rm dc}=2\Delta\nu_{\rm tr}+2\delta\nu,$$ (5) for the phase-locked loop (PLL) that stabilizes the transmission frequency. The PLL then controls the frequency of the transmission AOM in order to drive this error signal to zero, such that \(\Delta\nu_{\rm tr}=-\delta\nu\). This has the effect of suppressing the link phase noise from the free-space (Eq. 1 ) and fiber (Eq. 2 ) output signals. Active optical terminals The active terminals (Fig. 1 ) used at each end of the free-space link were reciprocal and identical. The optical signal is passed through a fiber-to-free-space collimator with a 1/ e 2 radius of 1.12 mm. This is then passed through a 50–50 beam splitter (BS). Half the optical signal is sent to a beam dump, and the other half is sent to a 15:1 Galilean beam expander (GBE) with a 48 mm clear aperture. The signal from the GBE is reflected off a 50 mm flat mirror with active piezo-electric actuators and launched over the free-space link with a 1/ e 2 radius and divergence of approximately 16.8 mm and 29 μrad, respectively. The incoming beam is reflected by the active mirror into the GBE. The BS then sends half the incoming light to the free-space-to-fiber collimator, and the other half to a QPD. This QPD is used to detect first-order spatial fluctuations in the incoming beam. The measured fluctuations are passed through a proportional-integral (PI) controller and used to drive the piezo-electric actuators on the active mirror in order to suppress these fluctuations and keep the incoming beam centered on the QPD. The QPD is positioned so that the optical signal coupled by the collimator into the fiber is maximized when the beam is centered on the QPD. The QPD and active mirror control system is a commercial off-the-shelf system. The achievable turbulence suppression bandwidth of the system during these tests was limited by the low PI controller gain settings, which were necessary to reduce the sensitivity of the tip-tilt system to noise in the QPD when the link optical power dropped below the threshold for effective operation of the QPD. When the optical power dropped below this threshold, the tip-tilt system would attempt to steer to the false beam centroid caused by the detector noise, losing the real beam in the process. The low gain settings prevented the tip-tilt system from steering too far off target before sufficient optical power was restored. Data availability The data that support the findings of this study are available from the corresponding author, B.P.D.-M., upon reasonable request.
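To make the stability analysis above concrete, the following sketch shows how a modified Allan deviation like that in Fig. 2 b can be computed from a sampled beat-phase record. This is a minimal illustration under our own assumptions, not the authors' analysis code: the function name, sampling parameters, and the simulated white-phase-noise input are hypothetical.

```python
import numpy as np

def mod_adev(phase_rad, rate_hz, nu0_hz, m_list):
    """Modified Allan deviation of an optical carrier from a sampled phase record.

    phase_rad: beat phase in radians; rate_hz: sample rate in Hz;
    nu0_hz: nominal optical carrier frequency (~193.4 THz at 1550 nm);
    m_list: averaging factors, giving tau = m / rate_hz.
    """
    # Convert optical phase to time error x(t) = phi / (2 * pi * nu0).
    x = np.asarray(phase_rad, dtype=float) / (2.0 * np.pi * nu0_hz)
    tau0 = 1.0 / rate_hz
    out = []
    for m in m_list:
        if x.size < 3 * m + 1:
            break
        tau = m * tau0
        # Second differences of the time-error record at lag m ...
        d = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
        # ... then moving sums over m samples, per the standard MDEV estimator.
        s = np.convolve(d, np.ones(m), mode="valid")
        mvar = np.sum(s ** 2) / (2.0 * m ** 2 * tau ** 2 * s.size)
        out.append((tau, np.sqrt(mvar)))
    return out

# Example: pure white phase noise on a 1550 nm carrier, sampled at 1 kHz.
rng = np.random.default_rng(0)
phase = rng.normal(scale=1e-3, size=1_000_000)  # radians
for tau, mdev in mod_adev(phase, 1e3, 193.4e12, [1, 10, 100, 1000]):
    print(f"tau = {tau:8.3f} s   MDEV = {mdev:.2e}")
```

For white phase noise the printed values fall as roughly τ −3/2 , matching the short-timescale slope discussed above; removing the quadratic drift fitted in phase, as done for the solid traces in Fig. 2 b, would be a preprocessing step before calling such a function.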
Scientists from the International Centre for Radio Astronomy Research (ICRAR) and the University of Western Australia (UWA) have set a world record for the most stable transmission of a laser signal through the atmosphere. In a study published today in the journal Nature Communications, Australian researchers teamed up with researchers from the French National Centre for Space Studies (CNES) and the French metrology lab Systèmes de Référence Temps-Espace (SYRTE) at Paris Observatory. The team set the world record for the most stable laser transmission by combining the Aussies' phase stabilization technology with advanced self-guiding optical terminals. Together, these technologies allowed laser signals to be sent from one point to another without interference from the atmosphere. Lead author Benjamin Dix-Matthews, a Ph.D. student at ICRAR and UWA, said the technique effectively eliminates atmospheric turbulence. "We can correct for atmospheric turbulence in 3-D, that is, left-right, up-down and, critically, along the line of flight," he said. "It's as if the moving atmosphere has been removed and doesn't exist. It allows us to send highly stable laser signals through the atmosphere while retaining the quality of the original signal." The result is the world's most precise method for comparing the flow of time between two separate locations using a laser system transmitted through the atmosphere. One of the self-guiding optical terminals on its telescope mount on the roof of a building at the CNES campus in Toulouse. Credit: ICRAR/UWA ICRAR-UWA senior researcher Dr. Sascha Schediwy said the research has exciting applications. "If you have one of these optical terminals on the ground and another on a satellite in space, then you can start to explore fundamental physics," he said. "Everything from testing Einstein's theory of general relativity more precisely than ever before, to discovering if fundamental physical constants change over time." The technology's precise measurements also have practical uses in earth science and geophysics. "For instance, this technology could improve satellite-based studies of how the water table changes over time, or to look for ore deposits underground," Dr. Schediwy said. There are further potential benefits for optical communications, an emerging field that uses light to carry information. Optical communications can securely transmit data between satellites and Earth with much higher data rates than current radio communications. "Our technology could help us increase the data rate from satellites to ground by orders of magnitude," Dr. Schediwy said. "The next generation of big data-gathering satellites would be able to get critical information to the ground faster." The phase stabilization technology behind the record-breaking link was originally developed to synchronize incoming signals for the Square Kilometer Array telescope. The multi-billion-dollar telescope is set to be built in Western Australia and South Africa from 2021.
10.1038/s41467-020-20591-5
Other
Ink from ancient Egyptian papyri contains copper
Thomas Christiansen et al, The nature of ancient Egyptian copper-containing carbon inks is revealed by synchrotron radiation based X-ray microscopy, Scientific Reports (2017). DOI: 10.1038/s41598-017-15652-7 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-017-15652-7
https://phys.org/news/2017-11-ink-ancient-egyptian-papyri-copper.html
Abstract For the first time it is shown that carbon black inks on ancient Egyptian papyri from different time periods and geographical regions contain copper. The inks have been investigated using synchrotron-based micro X-ray fluorescence (XRF) and micro X-ray absorption near-edge structure spectroscopy (XANES) at the European Synchrotron Radiation Facility (ESRF). The composition of the copper-containing carbon inks showed no significant differences that could be related to time periods or geographical locations. This renders it probable that the same technology for ink production was used throughout Egypt for a period spanning at least 300 years. It is argued that the black pigment material (soot) for these inks was obtained as a by-product of technical metallurgy. The copper (Cu) can be correlated with the following three main components: cuprite (Cu 2 O), azurite (Cu 3 [CO 3 ] 2 [OH] 2 ) and malachite (Cu 2 CO 3 [OH] 2 ). Introduction Two of the most profound technological advances in human intellectual history were the twin inventions of ink and papyrus, the ancient precursor of modern paper, by the Egyptians about 5,000 years ago. The advent of writing allowed information to be expanded beyond the mental capacity of any single individual and to be shared across time and space. The two inventions spread throughout the ancient Mediterranean to Greece, Rome and beyond. The chemistry of the black inks used in the ancient world has been only scantily studied so far, leaving gaps in our knowledge of one of the fundamental inventions in the history of civilization 1 . Thus, until recently, it was assumed that the ink used for writing was primarily carbon-based at least until the 4 th to the 5 th century CE. However, micro XRF analyses of two papyrus fragments from Herculaneum have shown that lead compounds were added to black ink already in the 1 st century CE, thereby modifying our knowledge of ink manufacture in Antiquity 2 , 3 . Here, we report on the chemical composition of black ink inscribed on papyrus fragments from ancient Egypt using micro XRF and XANES. The fragments form parts of larger manuscripts belonging to the Papyrus Carlsberg Collection, University of Copenhagen, and can be divided into two groups: the first group comes from southern Egypt and consists of the private papers of an Egyptian soldier, Horus, who was stationed at the military camp of Pathyris, located at modern Gebelein some 30 km south of Luxor. Pathyris was destroyed in 88 BCE during a civil war; thousands of papyri have been preserved in the ruins until modern times and are now conserved in papyrus collections around the world, including Berlin, Cairo, Heidelberg and Turin, as well as Copenhagen. Our archive consists of 50 Greek and Egyptian papyri that date to the late 2 nd and early 1 st century BCE. They were bought on the antiquities market in 1924 by the manuscript collector Elkan Nathan Adler (1861–1941), according to whom they had been found inside a sealed jar at the ancient settlement 4 . This is the only archive from Pathyris that has come down to posterity substantially intact 5 . The second group derives from the only large-scale institutional library to survive from ancient Egypt, the Tebtunis temple library. The assemblage includes an estimated 400–500 papyrus manuscripts which span the 1 st through the early 3 rd century CE, with the bulk dating to the late 1 st and 2 nd centuries.
It was discovered within two small cellars inside the main temple precinct at Tebtunis, modern Umm el-Breigât, which is located in the south of the Fayum depression, some 100 km south-west of Cairo. The dry and brittle manuscripts are all poorly preserved and the material as a whole now consists of many thousands of smaller fragments, which are kept in papyrus collections around the world, including Copenhagen, Florence, Berlin, Berkeley, Oxford and Yale. Whole columns or pages are only rarely preserved, and the difficult and time-consuming process of sorting and identifying fragments of specific manuscripts is still ongoing. Published texts indicate that on average less than 10% of a manuscript is likely to have been preserved. The papyri selected for analysis were acquired for the Papyrus Carlsberg Collection between 1931 and 1938 on the antiquities market in Cairo 6 . Recently, the chemical composition of papyri and ink from the two localities was studied using a combination of laboratory XRF point analysis, Raman spectroscopy and scanning electron microscopy-energy dispersive X-ray spectroscopy (SEM-EDXS). Despite their distance in time, space, and social context, the study concluded that the black inks of Pathyris and Tebtunis revealed similar traits and that – besides carbon ink – two other distinct types of black ink were used for at least a period of 300 years: lead-containing carbon ink and copper-containing carbon ink. However, this preliminary characterization was limited to conventional XRF (a few points), Raman and SEM-EDXS (small area maps) techniques, and the chemical nature of the lead (Pb) and copper (Cu) compounds detected in the black inks could not be ascertained through the experimental setup 7 . Experimental Samples In total, the research was conducted on a corpus of 12 fragments. The papyri are of a light brown color and the inks range from deep black to light grey or brown (cf. the visible light pictures shown in the figures). The papyrus medium itself is approximately 0.3 mm thick and made of two layers of papyrus strips – in one instance, where two sheets overlap, of four layers (sample 1). The macro XRF elemental maps, discussed below, showed either no contrast between the inked areas and the papyrus, indicating soot or finely powdered charcoal as the origin of the black color, or the presence of Cu or Pb compounds in the pigments. In Fig. 1 , an example of an XRF fit is shown, which demonstrates that the main elements can be identified with certainty. Figure 1 Example of an XRF fit (sample 1). Out of the 12 samples, five showed no contrast, six contained Cu and a single fragment Pb. Here, we report results obtained from a study of four samples with Cu-containing black inks, two from Pathyris and two from Tebtunis respectively. The four samples were chosen because they showed an intense Cu signal in the inked areas. Further, as an example (sample 5), a carbon-based ink from Pathyris is included in the supporting information (Fig. S1 ). Samples 1 and 2 are Greek contracts from Pathyris that date to 134 BCE (Fig. 2A ) and 101 BCE (Fig. 3A ) respectively. Sample 5 belongs to the same archive and is written in Demotic, a cursive ancient Egyptian script; it dates to c. 100 BCE. Samples 3 and 4 were found at Tebtunis; they are written in Demotic and can be dated to the 1 st /2 nd century CE (Figs 4A and 5A ) on the basis of paleography.
For synchrotron-based analyses, the papyri were analyzed without any sample preparation: the fragments were maintained between two 4 µm thick Ultralene foils (Spec, Certiprep) and mounted vertically in the X-ray microscope. Figure 2 ( A ) Visible light picture of sample 1 (P. Carlsberg 828) ( B ) macro and micro XRF maps of Cu (fitted and normalized by the intensity of the incident beam). The areas where XANES spectra were collected are highlighted ( C ) Average XANES spectra from area 2, and their decomposition by LCF. Figure 3 ( A ) Visible light picture of sample 2 (P. Carlsberg 839) ( B ) macro and micro XRF maps of Cu (fitted and normalized by the intensity of the incident beam). The areas where XANES spectra were collected are highlighted. The red-blue maps are the superimposition of Cu and Fe maps, from area 2. The red-green maps are the superimposition of Cu micro XRF maps excited at two specific energies shown in ( C ) (after realignment) ( C ) Average XANES spectra from area 2, from the “red region” and “green region” in the red-green dual-energy map, and from area 5, and their decomposition by LCF. Figure 4 ( A ) Visible light picture of sample 3 (P. Carlsberg 79) ( B ) macro and micro XRF maps of Cu (fitted and normalized by the intensity of the incident beam). The areas where XANES spectra were collected are highlighted ( C ) Average XANES spectra from area 2 and their decomposition by LCF. Figure 5 ( A ) Visible light picture of sample 4 (P. Carlsberg 649) ( B ) macro and micro XRF maps of Cu (fitted and normalized by the intensity of the incident beam). The areas where XANES spectra were collected are highlighted. The red-blue maps are the superimposition of Cu and Fe maps, from the detailed map ( C ) Average XANES spectra from area 1, and their decomposition by LCF. The black inks on the analyzed fragments appear black under IR illumination at 970 nm and show no signs of transparency, as observed for other black pigment materials such as iron-gall ink 6 , 7 , 8 . This suggests that the inks used are based on amorphous carbon obtained through the pyrolysis or maceration of botanicals, which is confirmed by Raman spectroscopy carried out on the same papyri, where the spectra are characterized by two broad bands at ca. 1322 and 1588 cm −1 , known as the D and G bands of carbon materials 7 , 9 , 10 . Macro XRF and micro XRF XRF measurements were performed at the X-ray microscopy beamline ID21 at the ESRF (Grenoble, France) 11 . By the use of a Si (111) monochromator, the primary beam energy was tuned to the Cu K-edge (8979 eV). For general overview mapping over entire fragments (macro XRF), the beam spot size was defined using a pinhole of 100 or 50 µm diameter. The average beam flux was ~10 9 –10 10 ph/s during the measurements of the samples. An incident beam flux monitoring pin diode was used continuously to monitor and correct for intensity variations (i 0 ). XRF maps were acquired by scanning the sample through the X-ray beam at a single energy of 9.05 keV, recording an XRF spectrum at each pixel with an acquisition time of 100 ms. High-resolution micro XRF maps were acquired the same way, with a beam focused down to ~0.4 × 0.7 µm 2 using a Kirkpatrick-Baez mirror system. The microscope was operated in vacuum and samples were mounted vertically at an angle of 62° with respect to the primary X-ray beam. No visible modification of any aspect of the samples was observed after analysis.
The XRF (and scattered) radiation was detected using a Bruker (Germany) XFlash 5100 silicon drift detector (SDD), equipped with a Moxtek AP3.3 polymer window 12 , and mounted at 69° with respect to the primary X-ray beam. An additional Ultralene foil (4 µm) covered the detector. XRF spectra were processed using the PyMCA software package 13 . The elemental maps shown in the figures below are the batch-fitted XRF intensity maps, divided by the i 0 map. Micro XRF maps were acquired for sample 2 at three different energies to map the different Cu species. The small beam shift between these different maps was determined using the Fe maps, and the Cu maps were realigned accordingly, using the “Spectrocrunch” Python software library 11 . Micro XANES The measurements were performed at ID21, at the Cu K-edge (calibrated with a Cu foil, setting the maximum of the derivative spectrum at 8.979 keV). Micro XANES spectra were recorded in XRF mode (using the same set-up described above) with a micro beam of 0.4 × 0.7 µm 2 . The micro XANES spectra were obtained by scanning the primary energy from 8.9 to 9.15 keV in 260 steps of 0.3 eV. To reduce risks of radiation damage, XANES spectra were acquired as single acquisitions (30 s per point) over many points, instead of accumulating many spectra at a few points. Normalized data were employed for linear combination fitting (LCF), using the ATHENA software, to identify and to estimate the amount of copper compounds on the analyzed papyrus (cf. the list of references in Table 1 ) 14 . The reference compounds were prepared as powders and measured in transmission mode. LCFs were accomplished within a range of −20 to +30 eV around E 0 , first using all the references (azurite, malachite, chalcanthite, tenorite, cuprite, chalcopyrite and Cu acetate), and then reducing this set to the main 3 or 4 components; all amounts were allowed to vary between 0 and 1, but were not forced to sum to 1 for better alignment (the amounts were subsequently recalculated); all spectra shared the same E 0 value. The micro XANES spectra were compared to selected reference compounds (Table 1 ), chosen as the most probable compounds according to the micro XRF maps and the available literature 1 , 15 , 16 , 17 , 18 , 19 . In general, the LCFs have good R-factors (0.002–0.017). However, it has to be kept in mind that all results below could be biased by this set of references, and we cannot exclude that other Cu compounds may be present. Table 1 Results of the LCF analysis of XANES spectra, calculated as averages over n points per area. Areas are located in the different XRF maps (cf. Figs 2B , 3B , 4B and 5B ). Results Macro XRF maps The papyrus fragments were scanned using X-ray beams of different sizes, from a sub-millimeter to a micrometric scale. Macro XRF maps of the full fragments were used to identify and localize elements both outside and inside of the ink (cf. the supporting information, where all the XRF elemental maps of the five samples are provided and some specific results are commented on). Although the papyri derive from different time periods and geographical areas, the elemental composition detected in the fragments is similar and showed the following distributions: potassium (K) and chlorine (Cl) maps reveal the fibrous structure of the papyri. Silicon (Si) shows a complementary distribution, as if it fills the holes left by the K-Cl based fiber structure.
Sodium (Na), magnesium (Mg), aluminum (Al), phosphorus (P), sulfur (S), calcium (Ca) and manganese (Mn) are present in a rather homogeneous way on the surface of the papyrus fragments, independently of the fibrous structure. Iron (Fe) is present as spots all over the papyri, independently of the ink, except for sample 4, where Fe-Al-K containing spots are more concentrated in the inked regions (Fig. S 11 ). The Cu elemental distribution (fitted XRF intensity divided by the intensity of the incoming beam) for samples 1, 2, 3 and 4 is depicted in Figs 2B , 3B , 4B and 5B . The color scales are identical for the large maps of samples 1, 2 and 3 (0–0.5, a.u.). Because sample 4 shows lower amounts of Cu in the ink, the scale has been adjusted to 0–0.15 a.u. in Fig. 5 . From the maps, it is clear that the copper is concentrated in the letters and signs, from where it diffuses out into the papyri and runs along the fibrous structure. As seen in the supporting information, some Cr maps show a peculiar circular structure that is due to the sample holder. This demonstrates that the X-rays penetrated the full depth of the samples (Figs S 5 , S 6 , S 8 ). Micro XRF maps Additional XRF maps were acquired on selected areas with Cu-containing black ink, in order to assess at the micrometer scale the possible co-localization of certain elements that were detected at the macro scale. Sample 1: as observed in the macro XRF maps, the Cl, K and Cu maps show some correlations, but this may be due to the strong diffusion of Cu in the K-Cl fibrous structure. The other elements do not show a particular co-localization with Cu (Fig. S 4 ). Sample 2: a high-resolution micro XRF map revealed significant variation in the distribution of Cu both within and outside the ink area. To further investigate these differences, five areas were examined more closely, encompassing the ink, the surrounding fibers and Cu-rich spots (Figs 3B , S 6 and S 7 ), with a pixel size of 1 or 2 µm. None of the other detected elements were co-localized with Cu. As an example, an Fe map is shown in Fig. 3B . Sample 3: micro XRF maps confirmed the results obtained from the macro XRF maps (cf. Fig. S 8 ), which showed slightly higher counts of Mg, Al, S, P, Ca, Mn and Pb in the ink than in the papyrus (Fig. S 9 ). The detailed maps show a similar co-localization of Cu with P, S and Pb at the micron scale. However, considering the diffuse distribution of these elements and the absence of micrometric Cu-based spots, it cannot be concluded that these elements originate from the copper source; they are rather associated with the soot and the binder (Fig. S 10 ). Sample 4: spots containing Mg, Al, Si, P, K, Cr, Mn and Fe are found, but they are not co-localized with Cu (cf. Figs 5B , S 11 and S 12 ). Other spots contain Ca and S, Ca and P, or Ca alone. S, Cl and Ca were more concentrated in ~50 µm Cu-rich regions (Fig. S 11 ), but without co-localization with Cu at the micron scale (Fig. S 12 ). Micro XANES In order to examine the Cu speciation, and possibly its origin, in the Cu-rich inks, Cu K-edge micro XANES spectra were acquired at different points of the four samples. The fact that none of the other detected elements were co-localized with Cu led us to the assumption that Cu was most probably bound to elements from the first and second periods of the periodic table, e.g. as oxides, hydroxides, carbonates or organic salts. For sample 1, we recorded micro XANES spectra at 17 points located in the three different regions highlighted in Fig. 2B .
Since the spectra were very similar, they were subsequently averaged. The LCF of the averaged Cu K-edge XANES spectra was done using the above-mentioned reference compounds, excluding chalcopyrite, since Cu is not co-localized with Fe. In the three areas, the micro XANES spectra show features characteristic of a mixture of Cu 1+ (fitted as cuprite) and Cu 2+ species (fitted as azurite and malachite) (Table 1 and Fig. 2C ). A total of 120 spectra were collected for sample 2 in the five areas shown in Fig. 3B . These spectra showed clear differences from one area to another, but also within a single Cu spot. As an example, some of the spectra acquired over the spots in areas 1, 2 and 3 showed a pronounced shoulder at ~8.987 keV, while this shoulder was mostly absent in spots at other locations. To map the distribution of these different species, speciation maps were acquired by collecting micro XRF maps at three different energies: at the shoulder energy (E 1 = 8.987 keV), at the maximum absorption energy (E 2 = 8.997 keV) and above the edge, in order to map the Cu distribution independently of its speciation (E 3 = 9.075 keV). The superimposed Cu maps obtained at E 1 and E 2 are shown in Fig. 3B and reveal a tight interlacing of the two Cu-based ingredients. Neither of these maps is correlated with the Fe map (Fig. 3B ). The LCF of the average spectra exhibiting a strong shoulder at E 1 gives azurite as the main component (the azurite reference contains a pronounced shoulder at E 1 ), together with some malachite and cuprite (Table 1 and Fig. 3C ). The spectra with a less intense shoulder could be fitted with a strong contribution of copper acetate, with smaller amounts of azurite (Table 1 and Fig. 3C ). Spectra acquired in area 4 – i.e. in the ink, but not in the ‘spotty’ Cu regions – were fitted mainly as azurite, together with cuprite and copper acetate. Finally, the spectra acquired in regions where Cu has diffused within the papyrus fibers, but outside areas of actual writing, show a lower signal but with a more pronounced shoulder at 9.075 keV, together with a clear shift of the edge energy. These features could be fitted by a high contribution of cuprite (>50%), mixed with malachite and azurite (Table 1 and Fig. 3C ). For sample 3, a total of 21 spectra were acquired: six in the ink in the region with high Cu content, eight in the ink with lower Cu content and seven in a region where the Cu distribution follows the fibrous structure. The LCF analysis of the average spectra gives cuprite as the main component, together with azurite and malachite (Table 1 and Fig. 4C ). A total of 41 spectra were acquired for sample 4 over four areas (Fig. 5 ): in spots with a high content of Cu both with and without Fe, in ink regions with lower Cu content and in fibrous Cu-rich regions far removed from the ink. In all these areas, cuprite is the main component of the LCFs. Malachite is the second component, and azurite is present to a lesser extent (Table 1 and Fig. 5C ). Discussion The synchrotron-based macro and micro XRF maps confirmed the presence of Cu in the black ink on the four ancient Egyptian papyri studied here. In samples 2, 3 and 4, the ink contains Cu and other lighter elements – Al, Si, K, Mn, Fe – and Pb. However, the study of Cu spots at the micron scale did not reveal any clear local co-localization of these elements with Cu. Micro XANES revealed that Cu in the inked areas is present principally as the copper minerals cuprite, azurite and malachite.
In Egypt these minerals are present along almost the entire length of the eastern desert and in the Sinai, and their use in the production of green and blue pigments has been amply documented 18 , 20 . In the areas where Cu has diffused into the fibrous structure of the papyri, and in the complex Cu-rich spots in sample 2, malachite occurs as one of the components. It may be present as part of the original pigment or may have formed as a result of the degradation of azurite 18 . The copper acetate present in sample 2 could also be the result of a reaction of the copper minerals with chemical compounds in the surroundings, or of reactions caused by the conservation procedures. Ancient copper-containing pigments are well known as a source of the catalyzed degradation of cellulose-based materials such as gum arabic and papyrus; 16 for instance, Egyptian blue and green can ‘burn’ holes in illustrated papyrus manuscripts 20 . A degraded binder (gum arabic) could explain why the inks visually appear to be ‘cracking’, as well as the diffuse presence of Cu outside the letters and signs in the fibrous structure of the papyrus. The migration of the Cu along the fibers was likely enhanced by the conservation procedures applied to the manuscripts. There is no detailed documentation on the method of conservation applied to the fragments, but it usually consists of a simple process, where the papyri were moistened with water in order to relax the fibers and unfold or unroll them; thereafter, they were mechanically cleaned with a sharp instrument and a sable brush 21 , 22 . With respect to cuprite (Cu 2 O), it could be a result of a reduction of copper carbonate pigments like azurite and malachite, the two other principal Cu compounds detected in the ink and along the fibrous structure of the four fragments 17 . However, there is also evidence that the cuprite present in the ink could have undergone an oxidation reaction in the presence of water and atmospheric CO 2 , which conversely would lead to the formation of azurite and malachite 16 . These observations suggest that the Cu compounds found in the black inks and along the fibrous structure derive from by-products of metallurgy, glaze and glass production, which provided the raw material (soot) for “refined” carbon inks in the ancient Mediterranean. This is supported by the few preserved written formulae from the Hellenistic Period pertaining to the manufacture of black ink 1 . Conclusion Looking at the results, it is likely that the soot/charcoal of the copper-containing carbon inks was obtained during manufacturing processes related to the extraction of copper from sulfurous ores like chalcopyrite. This hypothesis finds confirmation in the particle size (sub-micron) and in the fact that another copper-bearing pigment in Egypt, the so-called Egyptian blue (CaSi 2 O 5 ·CuSi 2 O 5 ), was manufactured from scrap or by-product copper obtained at temple workshops that either melted copper or produced glass and faience 23 . It was made by mixing cupric oxides with sand, soda and lime, which thereafter were roasted at about 850–900 °C into sintered crystalline aggregates rather than glass 18 , 24 , 25 , 26 . Similarly, Egyptian kohl or black eye-paint, which is closely related to the manufacture of lead-containing carbon inks, was produced in workshops where vitreous materials were manipulated 27 . Since the papyri in question were written over a period of 300 years, the findings cannot represent an accidental event.
Moreover, sample 1 is the oldest dated document from the ancient Mediterranean in which the addition of metals to a black ink has been detected. Though the fabrication of ink is likely to have evolved during this time-span, none of the four inks studied here is completely identical to another, and Cu micro XANES showed variations even within a single fragment. This demonstrates a variable local ink composition, and by extension production, which precludes obtaining a unique signature of the ink based on Cu speciation. This observation complicates the mapping of inks, but might facilitate the identification of fragments belonging to specific manuscripts or sections thereof 7 . Moreover, it should be taken into account that the Cu speciation may have evolved since the original preparation and use of the ink. In particular, conservation treatments may have modified the Cu chemistry. Finally, the results will facilitate future strategies of conservation, since knowledge of the material composition assists the decisions that remain to be made regarding the proper conservation and storage of the papyri, thereby ensuring their preservation and longevity. Why and when copper-containing carbon inks were introduced in ancient Egypt remains to be explained, but perhaps it is related to the type of pen used for writing the manuscripts, since the four papyri appear to have been written with a Greek reed pen ( kalamos ) rather than an Egyptian reed brush 28 .
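To illustrate the linear combination fitting used in the XANES analysis above, the sketch below fits a normalized spectrum as a non-negative mixture of reference spectra and reports an ATHENA-style R-factor. It is a simplified stand-in under our own assumptions: the function name and inputs are hypothetical, and the actual ATHENA procedure additionally handles the shared E 0 , energy alignment, and the component selection described in the Methods.

```python
import numpy as np
from scipy.optimize import nnls

def lcf(sample, references):
    """Non-negative linear combination fit of a normalized XANES spectrum.

    sample:     (n_energies,) normalized absorption of the unknown
    references: dict mapping names (e.g. 'cuprite', 'azurite', 'malachite')
                to (n_energies,) normalized spectra on the same energy grid
    Returns fractional amounts (renormalized to sum to 1) and the R-factor.
    """
    y = np.asarray(sample, dtype=float)
    names = list(references)
    A = np.column_stack([np.asarray(references[n], dtype=float) for n in names])
    w, _ = nnls(A, y)                                   # amounts constrained >= 0
    fit = A @ w
    r_factor = np.sum((y - fit) ** 2) / np.sum(y ** 2)  # goodness of fit
    amounts = dict(zip(names, w / w.sum()))             # recalculated amounts
    return amounts, r_factor
```

Restricting the fit to the −20 to +30 eV window around E 0 used in the Methods amounts to slicing the input arrays to that energy range before calling the function.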
Until recently, it was assumed that the ink used for writing was primarily carbon-based at least until the fourth and fifth centuries AD. But in a new University of Copenhagen study, analyses of 2,000-year-old papyri fragments with X-ray microscopy show that black ink used by Egyptian scribes also contained copper - an element previously not identified in ancient ink. In a study published today in Scientific Reports, a cross-disciplinary team of researchers show that Egyptians used carbon inks that contained copper, which has not been identified in ancient ink before. Although the analysed papyri fragments were written over a period of 300 years and from different geographical regions, the results did not vary significantly: The papyri fragments were investigated with advanced synchrotron radiation based X-ray microscopy equipment at the European Synchrotron Radiation Facility in Grenoble as part of the cross-disciplinary CoNext project, and the particles found in the inks indicate that they were by-products of the extraction of copper from sulphurous ores. "The composition of the copper-containing carbon inks showed no significant differences that could be related to time periods or geographical locations, which suggests that the ancient Egyptians used the same technology for ink production throughout Egypt from roughly 200 BC to 100 AD," says Egyptologist and first author of the study Thomas Christiansen from the University of Copenhagen. No unique ink signature The studied papyri fragments all form part of larger manuscripts belonging to the Papyrus Carlsberg Collection at the University of Copenhagen, more specifically from two primary sources: the private papers of an Egyptian soldier named Horus, who was stationed at a military camp in Pathyris, and from the Tebtunis temple library, which is the only surviving large-scale institutional library from ancient Egypt. "None of the four inks studied here was completely identical, and there can even be variations within a single papyrus fragment, suggesting that the composition of ink produced at the same location could vary a great deal. This makes it impossible to produce maps of ink signatures that otherwise could have been used to date and place papyri fragments of uncertain provenance," explains Thomas Christiansen but adds: "However, as many papyri have been handed down to us as fragments, the observation that ink used on individual manuscripts can differ from other manuscripts from the same source is good news insofar as it might facilitate the identification of fragments belonging to specific manuscripts or sections thereof." According to the researchers, their results will also be useful for conservation purposes as detailed knowledge of the material's composition could help museums and collections make the right decisions regarding conservation and storage of papyri, thus ensuring their preservation and longevity.
10.1038/s41598-017-15652-7
Nano
'Weighing' atoms with electrons
Toma Susi et al, Isotope analysis in the transmission electron microscope, Nature Communications (2016). DOI: 10.1038/ncomms13040 Open data: Atomic resolution electron irradiation time series of isotopically labeled monolayer graphene: Toma Susi, Christoph Hofer, Giacomo Argentero, Gregor T. Leuthner, Timothy J. Pennycook, Clemens Mangler, Jannik C. Meyer & Jani Kotakoski. figshare (2016). DOI: 10.6084/m9.figshare.c.3311946.v1 Journal information: Nature Communications
http://dx.doi.org/10.1038/ncomms13040
https://phys.org/news/2016-10-atoms-electrons.html
Abstract The Ångström-sized probe of the scanning transmission electron microscope can visualize and collect spectra from single atoms. This can unambiguously resolve the chemical structure of materials, but not their isotopic composition. Here we differentiate between two isotopes of the same element by quantifying how likely the energetic imaging electrons are to eject atoms. First, we measure the displacement probability in graphene grown from either 12 C or 13 C and describe the process using a quantum mechanical model of lattice vibrations coupled with density functional theory simulations. We then test our spatial resolution in a mixed sample by ejecting individual atoms from nanoscale areas spanning an interface region that is far from atomically sharp, mapping the isotope concentration with a precision better than 20%. Although we use a scanning instrument, our method may be applicable to any atomic resolution transmission electron microscope and to other low-dimensional materials. Introduction Spectroscopy and microscopy are two fundamental pillars of materials science. By overcoming the diffraction limit of light, electron microscopy has emerged as a particularly powerful tool for studying low-dimensional materials such as graphene 1 , in which each atom can be distinguished. Through advances in aberration-corrected scanning transmission electron microscopy 2 , 3 (STEM) and electron energy loss spectroscopy 4 , 5 , the vision of a ‘synchrotron in a microscope’ 6 has now been realized. Spectroscopy of single atoms, including their spin state 7 , has together with Z-contrast imaging 3 allowed the identity and bonding of individual atoms to be unambiguously determined 4 , 8 , 9 , 10 . However, discerning the isotopes of a particular element has not been possible—a technique that might be called a ‘mass spectrometer in a microscope’. Here we show how the quantum mechanical description of lattice vibrations lets us accurately model the stochastic ejection of single atoms 11 , 12 from graphene consisting of either of the two stable carbon isotopes. Our technique rests on a crucial difference between electrons and photons when used as a microscopy probe: due to their finite mass, electrons can transfer significant amounts of momentum. When a highly energetic electron is scattered by the electrostatic potential of an atomic nucleus, a maximal amount of kinetic energy (inversely proportional to the mass of the nucleus, ∝ 1/ M ) can be transferred when the electron backscatters. When this energy is comparable to the energy required to eject an atom from the material, defined as the displacement threshold energy T d —for instance, when probing pristine 11 or doped 13 single-layer graphene with 60–100 keV electrons—atomic vibrations become important in activating otherwise energetically prohibited processes due to the motion of the nucleus in the direction of the electron beam. The intrinsic capability of STEM for imaging further allows us to map the isotope concentration in selected nanoscale areas of a mixed sample, demonstrating the spatial resolution of our technique. The ability to do mass analysis in the transmission electron microscope thus expands the possibilities for studying materials on the atomic scale. Results Quantum description of vibrations The velocities of atoms in a solid are distributed according to a temperature-dependent velocity distribution, defined by the vibrational modes of the material.
Due to the geometry of a typical transmission electron microscopy (TEM) study of a two-dimensional material, the out-of-plane velocity v_z, whose distribution is characterized by the mean square velocity ⟨v_z²⟩, is here of particular interest. In an earlier study 11 this was estimated using a Debye approximation for the out-of-plane phonon density of states 14 (DOS) g_z(ω), where ω is the phonon frequency. A better justified estimate can be achieved by calculating the kinetic energy of the atoms via the thermodynamic internal energy, evaluated using the full phonon DOS. As a starting point, we calculate the partition function Z = Tr{exp(−H/(kT))}, where Tr denotes the trace operation, k is the Boltzmann constant and T the absolute temperature. We evaluate this trace for the second-quantized Hamiltonian H describing harmonic lattice vibrations 15 , H = Σ_{k,j} ħω_j(k) (n_j(k) + 1/2), where ħ is the reduced Planck constant, k the phonon wave vector, j the phonon branch index running to 3r (r being the number of atoms in the unit cell), ω_j(k) the eigenvalue of the jth mode at k, and n_j(k) the number of phonons with frequency ω_j(k). After computing the internal energy from the partition function via the Helmholtz free energy F = −kT ln Z, we obtain the Planck distribution function describing the occupation of the phonon bands (Methods). We must then explicitly separate the energy into the in-plane U_p and out-of-plane U_z components, and take into account that half the thermal energy equals the kinetic energy of the atoms. This gives the out-of-plane mean square velocity of a single atom in a two-atom unit cell as ⟨v_z²⟩ = (1/(2M)) ∫_0^{ω_z} ħω [1/(e^{ħω/(kT)} − 1) + 1/2] g_z(ω) dω (equation 2), where M is the mass of the vibrating atom, ω_z is the highest out-of-plane mode frequency, and the correct normalization of the number of modes is included in the DOS. Phonon dispersion To estimate the phonon DOS, we calculated through density functional theory (DFT; GPAW package 16 , 17 ) the graphene phonon band structure 18 , 19 via the dynamical matrix using the 'frozen phonon method' (Methods; Supplementary Fig. 1 ). Taking the density of the components corresponding to the out-of-plane acoustic (ZA) and optical (ZO) phonon modes ( Supplementary Data 1 ) and solving equation 2 numerically, we obtain a mean square velocity of … m² s⁻² for a 12C atom in normal graphene. This description can be extended to 'heavy graphene' (consisting of 13C instead of a natural isotope mixture). A heavier atomic mass affects the velocity through two effects: the phonon band structure is scaled by the square root of the mass ratio (from the mass prefactor of the dynamical matrix), and the squared velocity is scaled by the mass ratio itself (equation 2). At room temperature, the first correction reduces the velocity by 3% in fully 13C graphene compared with normal graphene, and the second one reduces it by an additional 10%, resulting in … m² s⁻². Electron microscopy In our experiments, we recorded time series at room temperature using the Nion UltraSTEM100 microscope, where each atom, or its loss, was visible in every frame. We chose small fields of view ( ∼ 1 × 1 nm² ) and short dwell times (8 μs) to avoid missing the refilling of vacancies (an example is shown in Fig. 1 ; likely this vacancy only appears to be unreconstructed due to the scanning probe).
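A minimal numerical sketch of that integral follows. It is not the paper's code: the flat stand-in DOS, with an assumed ~26 THz out-of-plane cutoff, merely replaces the DFT-computed g_z(ω) of Supplementary Data 1, so the printed numbers are order-of-magnitude only.

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J/K
AMU = 1.66053906660e-27  # kg

def mean_square_vz(omega, g_z, mass, temp):
    """Equation (2): <v_z^2> = (1/(2M)) Int hbar*w*(n(w) + 1/2)*g_z(w) dw,
    with g_z(w) normalized to the 2 out-of-plane modes (ZA + ZO)."""
    n = 1.0 / np.expm1(HBAR * omega / (KB * temp))
    return np.trapz(HBAR * omega * (n + 0.5) * g_z, omega) / (2.0 * mass)

# Flat stand-in DOS up to an assumed ~26 THz cutoff, in place of the
# DFT-computed g_z of Supplementary Data 1.
w = np.linspace(1e10, 2.0 * np.pi * 26e12, 5000)
g = np.full_like(w, 2.0 / (w[-1] - w[0]))

v2_12 = mean_square_vz(w, g, 12.0 * AMU, 300.0)
# Crude 13C value keeping only the 1/M prefactor; the band-softening
# correction discussed in the text is ignored in this sketch.
print(f"<v_z^2>: 12C ~ {v2_12:.3g}, 13C ~ {v2_12 * 12.0 / 13.003355:.3g} m^2/s^2")
```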
In addition to commercial monolayer graphene samples (Quantifoil R 2/4, Graphenea), we used samples of 13 C graphene synthesized by chemical vapour deposition (CVD) on Cu foils using 13 C-substituted CH 4 as carbon precursor, subsequently transferred onto Quantifoil TEM grids. An additional sample consisted of grains of 12 C and 13 C graphene on the same grid, synthesized by switching the precursor during growth (Methods). Figure 1: Example of the STEM displacement measurements. The micrographs are medium angle annular dark field detector images recorded at 95 kV. ( a ) A spot on the graphene membrane, containing clean monolayer graphene areas (dark) and overlying contamination (bright). Scale bar, 2 nm. ( b ) A closer view of the area marked by the red rectangle in ( a ), with the irradiated area of the following panels similarly denoted. Scale bar, 2 Å. ( c – g ) Five consecutive STEM frames ( ∼ 1 × 1 nm 2 , 512 × 512 pixels (px), 2.2 s per frame) recorded at a clean monolayer area of graphene. A single carbon atom has been ejected in the fourth frame ( f , white circle), but the vacancy is filled already in the next frame ( g ). The top row of ( c – g ) contains the unprocessed images, the middle row has been treated by a Gaussian blur with a radius of 2 px, and the coloured bottom row has been filtered with a double Gaussian procedure 3 ( σ 1 =5 px, σ 2 =2 px, weight=0.16). Full size image From each experimental dataset (full STEM data available 20 ) within which a clear displacement was observed, we calculated the accumulated electron dose until the frame where the defect appeared (or a fraction of the frame if it appeared in the first one). The distribution of doses corresponds to a Poisson process 12 whose expected value was found by log-likelihood minimization (Methods; Supplementary Fig. 2 ), directly yielding the probability of creating a vacancy (the dose data and statistical analyses are included in Supplementary Data 2 ). Figure 2 displays the corresponding displacement cross sections measured at voltages between 80 and 100 kV for normal (1.109% 13 C) and heavy graphene ( ∼ 99% 13 C), alongside values measured earlier 11 using high-resolution TEM (HRTEM). For low-probability processes, the cross section is highly sensitive to both the atomic velocities and the displacement threshold energy. Since heavier atoms do not vibrate with as great a velocity, they receive less of a boost to the momentum transfer from an impinging electron. Thus, fewer ejections are observed for 13 C graphene. Figure 2: Displacement cross sections of 12 C and 13 C measured at different acceleration voltages. The STEM data is marked with squares, and earlier HRTEM data 11 with circles. The error bars correspond to the 95% confidence intervals of the Poisson means (STEM data) or to previously reported estimates of statistical variation (HRTEM data 11 ). The solid curves are derived from our theoretical model with an error-weighted least-squares best-fit displacement threshold energy of 21.14 eV. The shaded areas correspond to the same model using the lowest DFT threshold T d ∈ [21.25, 21.375] eV. The inset is a closer view of the low cross section region. Full size image Comparing theory with experiment The theoretical total cross sections σ d ( T , E e ) are plotted in Fig. 2 for each voltage (Methods; Supplementary Table 1 , Supplementary Data 2 ). 
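The dose statistics described above can be made concrete with a small sketch. The maximum-likelihood step is generic (for exponentially distributed waiting doses the estimate is the sample mean); the final dose-to-cross-section conversion via the number of atoms in the scanned field is our own illustrative assumption, not the paper's stated formula.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-ins for the accumulated doses (e-/A^2) at which the first
# ejection was seen; the real values live in Supplementary Data 2.
doses = rng.exponential(scale=2.0e8, size=30)

# For exponentially distributed waiting doses the maximum-likelihood mean is
# the sample mean; the explicit grid scan mirrors the paper's procedure.
grid = np.linspace(0.5, 1.5, 1001) * doses.mean()
loglik = -doses.size * np.log(grid) - doses.sum() / grid
lam = grid[np.argmax(loglik)]

# Illustrative dose -> cross-section conversion (our assumption, not the
# paper's stated formula): every atom in the ~1 nm^2 field sees the dose.
n_atoms = 100.0 * 2.0 / 5.24          # 2 atoms per 5.24 A^2 unit cell
sigma = 1.0 / (n_atoms * lam)         # A^2 per atom
print(f"lambda = {lam:.3g} e-/A^2, sigma = {sigma * 1e8:.3g} barn")
```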
The motion of the nuclei was included via a Gaussian distribution of atomic out-of-plane velocities P(v_z, T) characterized by the DFT-calculated ⟨v_z²⟩, otherwise similar to the approach of ref. 11 . A common displacement threshold energy was fitted to the data set by minimizing the variance-weighted mean square error (the 100 kV HRTEM point was omitted from the fitting, since it was underestimated, probably due to the undetected refilling of vacancies, also seen in Fig. 1 ). The optimal T_d value was found to be 21.14 eV, resulting in a good description of all the measured cross sections. Notably, this is 0.8 eV lower than the earlier value calculated by DFT, and 2.29 eV lower than the earlier fit to HRTEM data 11 . Different exchange correlation functionals we tested all overestimate the experimental value (by <1 eV), with the estimate T_d ∈ [21.25, 21.375] eV closest to experiment resulting from the C09 van der Waals functional 21 (Methods). Despite DFT overestimating the displacement threshold energy, we see from the good fit to the normal and heavy graphene data sets that our theory accurately describes the contribution of vibrations. Further, the HRTEM data and the STEM data are equally well described by the theory despite irradiation dose rates that differ by several orders of magnitude. This can be understood in terms of the very short lifetimes of electronic and phononic excitations in a metallic system 22 compared with the average time between impacts. Even a very high dose rate of 10⁸ e⁻ Å⁻² s⁻¹ corresponds to a single electron passing through a 1 nm² area every 10⁻¹⁰ s, whereas valence band holes are filled 23 in <10⁻¹⁵ s and core holes 24 in <10⁻¹⁴ s, while plasmons are damped 25 within ∼10⁻¹³ s and phonons 26 in ∼10⁻¹² s. Our results thus show that multiple excitations do not contribute to the knock-on damage in graphene, warranting another explanation (such as chemical etching 11 ) for the evidence linking a highly focused HRTEM beam to defect creation 27 . Each impact is, effectively, an individual perturbation of the equilibrium state.
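The sketch below assembles a rough version of the theoretical cross section described above. The McKinley-Feshbach factor and relativistic Rutherford prefactor are standard textbook expressions; the velocity boost to E_max is a common non-relativistic approximation rather than the paper's exact relativistic equation (18), and the mean-square-velocity inputs are placeholders, so treat the output as qualitative.

```python
import numpy as np

HBARC = 1.9732698e-7     # eV m
ALPHA = 1.0 / 137.036
ME_C2 = 510998.95        # eV
EV = 1.602176634e-19     # J
AMU = 1.66053906660e-27  # kg

def sigma_static(e_ev, t_d, e_max, z=6):
    """McKinley-Feshbach cross section (m^2) for transfers above t_d, given
    the maximum transferable energy e_max (all energies in eV)."""
    if e_max <= t_d:
        return 0.0
    pc = np.sqrt(e_ev * (e_ev + 2.0 * ME_C2))   # electron momentum * c, eV
    beta = pc / (e_ev + ME_C2)
    a = z * ALPHA * HBARC / (2.0 * pc * beta)   # Rutherford length, m
    t = np.linspace(t_d / e_max, 1.0, 2000)     # t = sin^2(theta/2)
    mf = 1.0 - beta**2 * t + np.pi * z * ALPHA * beta * (np.sqrt(t) - t)
    return 4.0 * np.pi * a**2 * np.trapz(mf / t**2, t)

def sigma_vibrating(e_ev, t_d, mass_amu, msv, z=6):
    """Average sigma_static over a Gaussian out-of-plane velocity spread with
    mean square velocity msv (m^2/s^2); E_max is boosted by the common
    non-relativistic approximation E_max(v) = E_max0 + v*sqrt(2*M*E_max0)."""
    mass = mass_amu * AMU
    e_max0 = 2.0 * e_ev * (e_ev + 2.0 * ME_C2) / (mass_amu * 931.494102e6)
    s = np.sqrt(msv)
    v = np.linspace(-8.0 * s, 8.0 * s, 801)
    p = np.exp(-v**2 / (2.0 * msv)) / np.sqrt(2.0 * np.pi * msv)
    boost = v * np.sqrt(2.0 * mass * e_max0 * EV) / EV   # eV
    sig = np.array([sigma_static(e_ev, t_d, e_max0 + b, z) for b in boost])
    return np.trapz(p * sig, v)

# Example at 95 kV with the fitted T_d = 21.14 eV; the <v_z^2> inputs are
# placeholders, so treat the printed barn values as qualitative only.
for amu, msv in ((12.0, 3.2e5), (13.003355, 2.8e5)):
    print(amu, sigma_vibrating(95e3, 21.14, amu, msv) / 1e-28, "barn")
```

Even with placeholder inputs, the sketch reproduces the key qualitative point: the static cross section vanishes below threshold, and only the high-velocity tail of the lighter isotope's vibration distribution activates displacements.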
The calculated relative errors scale as 1/√n and correspond to the precision of our measurement, which is better than 20% for as few as five measured doses in the ideal case. Although our accuracy is difficult to gauge precisely, by comparing the errors of the cross sections measured for isotopically pure samples to the fitted curve ( Fig. 2 ), an estimate of roughly 5% can be inferred. Figure 3: Local isotope analysis. ( a ) A STEM micrograph of a hole in the carbon support film (1.3 μm in diameter), covered by a monolayer of graphene. In each of the overlaid spots, 4–15 fields of view were irradiated. The dimensions of the overlaid grid correspond to the pixels of a Raman map recorded over this area. ( b ) Isotope concentration map where the colours of the grid squares denote 12C concentration based on the fitting of the Raman 2D band response (Methods; Supplementary Fig. 3 ). The overlaid spots correspond to ( a ), with colours denoting the concentration of 12C estimated from the mean of the measured doses. ( c ) Locally measured mean doses and their standard errors plotted on a log scale for each grid square. The horizontal coloured areas show the means±s.e. of doses simulated for the theoretical 12C and 13C cross sections. Note that a greater variation in the experimental doses is expected for areas containing a mix of both carbon isotopes. Full size image Working at 100 kV, we selected spots containing areas of clean graphene (43 in total), each only a few tens of nanometers in size ( Fig. 1a ), irradiating 4–15 (mean 7.8) fields of view 1 × 1 nm² in size until the first displacement occurred ( Fig. 1f ). Comparing the mean of the measured doses to the generated data, we can estimate the isotope concentration responsible for such a dose. This assignment was corroborated by Raman mapping over the same area, allowing the two isotopes to be distinguished by their differing Raman shift. A general trend from 12C-rich to 13C-rich regions is captured by both methods ( Fig. 3b ), but a significant local variation in the measured doses is detectable ( Fig. 3c ). This variation indicates that the interfaces formed in a sequential CVD growth process may be far from atomically sharp 30 , instead spanning a region of hundreds of nanometers within which the carbon isotopes from the two precursors are mixed together. Discussion It is interesting to compare our method to established mass analysis techniques. In isotope ratio mass spectrometry, precisions of 0.01% and accuracies of 1% have been reported 31 . However, these measurements are not spatially resolved. For spatially resolved techniques, one of the most widely used is time-of-flight secondary ion mass spectrometry (ToF-SIMS). It has a lateral resolution typically of several micrometers, which can be reduced to around 100 nm by finely focusing the ion beam 32 . In the case of ToF-SIMS, separation of the 13C signal from 12C1H is problematic, resulting in a reported 33 precision of 20% and an accuracy of ∼11%. The state-of-the-art performance in local mass analysis can be achieved with atom-probe tomography 34 (APT), which can record images with sub-nanometer spatial resolution in all three dimensions. A recent APT study of the 13C/12C ratio in detonation nanodiamonds reported a precision of 5%, but biases in the detection of differently charged ions limited accuracy to ∼25% compared to the natural isotope abundances 35 .
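The 1/√n precision claim for the local mapping is easy to verify by simulation, as the sketch below shows; the pure-isotope dose ratio used here is illustrative, not the paper's, and the final rate-mixture readout is one simple interpretation of the linear-combination fit rather than the exact published procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
lam12, lam13 = 1.0, 2.2   # expected doses, arbitrary units; 13C needs a
                          # higher dose since its cross section is smaller
                          # (the ratio here is illustrative, not the paper's)

for n in (3, 5, 10, 20):
    means = rng.exponential(lam12, size=(20000, n)).mean(axis=1)
    print(f"n = {n:2d}: relative error {means.std() / means.mean():.2f} "
          f"(1/sqrt(n) = {1.0 / np.sqrt(n):.2f})")

# Reading a mixed-area mean dose as a 12C fraction by mixing the ejection
# *rates* (1/dose) of the pure isotopes; a simple interpretation of the
# paper's linear-combination fit, not its exact procedure.
mean_dose = 1.6
c12 = (1.0 / mean_dose - 1.0 / lam13) / (1.0 / lam12 - 1.0 / lam13)
print(f"estimated 12C fraction: {c12:.2f}")
```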
A limitation of ToF-SIMS is its inability to discriminate between the analyte and contaminants, and it requires uniform isotope concentrations over the beam area for accurate results. APT requires the preparation of specialized needle-like sample geometries, a laborious reconstruction process to analyse its results 36 , and its detection efficiency is rather limited 37 . In our case, we are only able to resolve relative mass differences between isotopes of the same element in the same chemical environment. While we do not need to resolve mass differences between different elements, since these differ in their scattering contrast, we do need to detect the ejection of single atoms, limiting the technique to atomically thin materials. However, our method captures the isotope information concurrently with atomic resolution imaging in a general-purpose electron microscope, without the need for additional detectors. We have shown how the Ångström-sized electron probe of a scanning transmission electron microscope can be used to estimate isotope concentrations via the displacement of single atoms. Although these results were achieved with graphene, our technique should work for any low-dimensional material, including hexagonal boron nitride and transition metal dichalcogenides such as MoS2. This could potentially extend to van der Waals heterostructures 38 of a few layers or other thin crystalline materials, provided a difference in the displacement probability of an atomic species can be uniquely determined. Neither is the technique limited to STEM: a parallel illumination TEM with atomic resolution would also work, although scanning has the advantage of not averaging the image contrast over the field of view. The areas we sampled were in total less than 340 nm² in size, containing ∼6,600 carbon atoms of which 337 were ejected. Thus, while the nominal mass required for our complete analysis was already extremely small (131 zg), the displacement of only five atoms is required to distinguish a concentration difference of less than twenty per cent. Future developments in instrumentation may allow the mass-dependent energy transfer to be directly measured from high-angle scattering 39 , 40 , further enhancing the capabilities of STEM for isotope analysis. Methods Quantum model of vibrations The out-of-plane mean square velocity can be estimated by calculating the kinetic energy via the thermodynamic internal energy using the out-of-plane phonon DOS g_z(ω), where ω is the phonon frequency. In the second quantization formalism, the Hamiltonian for harmonic lattice vibrations is 15 H = Σ_{k,j} ħω_j(k) (b†_{kj} b_{kj} + 1/2), where k is the phonon wave vector, j is the phonon branch index running to 3r (r being the number of atoms in the unit cell), ω_j(k) the eigenvalue of the jth mode at k, and b†_{kj} and b_{kj} are the phonon creation and annihilation operators, respectively. Using the partition function Z = Tr{exp(−H/(kT))}, where Tr denotes the trace operation, k is the Boltzmann constant and T the absolute temperature, and evaluating the trace using this Hamiltonian, we have Z = Π_{k,j} e^{−ħω_j(k)/(2kT)} / (1 − e^{−ħω_j(k)/(kT)}), where n_j(k) = 1/(e^{ħω_j(k)/(kT)} − 1) is the number of phonons with frequency ω_j(k). The Helmholtz free energy is thus F = −kT ln Z = Σ_{k,j} [ħω_j(k)/2 + kT ln(1 − e^{−ħω_j(k)/(kT)})], and the internal energy of a single unit cell, therefore, becomes 15 U = F − T(∂F/∂T) = Σ_{k,j} ħω_j(k) (n_j(k) + 1/2) = ∫_0^{ω_d} ħω [n(ω) + 1/2] g(ω) dω, where in the last step the sum is expressed as an average over the phonon DOS.
Using the identity n(ω) + 1/2 = (1/2) coth(ħω/(2kT)) yields the Planck distribution function describing the occupation of the phonon bands, and explicitly dividing the energy into the in-plane U_p and out-of-plane U_z components, we can rewrite this as U = U_p + U_z = ∫_0^{ω_d} (ħω/2) coth(ħω/(2kT)) g_p(ω) dω + ∫_0^{ω_z} (ħω/2) coth(ħω/(2kT)) g_z(ω) dω, where the number of modes is included in the normalization of the DOSes, that is, ∫_0^{ω_z} g_z(ω) dω = 2, corresponding to the out-of-plane acoustic (ZA) and optical (ZO) modes (the in-plane DOS g_p(ω) being correspondingly normalized to 4), and ω_d is the highest frequency of the highest phonon mode. Since half of the thermal energy equals the average kinetic energy of the atoms, and the graphene unit cell has two atoms, the out-of-plane kinetic energy of a single atom is ⟨E_kin,z⟩ = U_z/4. Thus, the out-of-plane mean square velocity of an atom becomes ⟨v_z²⟩ = 2⟨E_kin,z⟩/M = (1/(2M)) ∫_0^{ω_z} (ħω/2) coth(ħω/(2kT)) g_z(ω) dω, where ω_z is now the highest out-of-plane mode frequency. This can be solved numerically for a known g_z(ω). For the in-plane vibrations, we would equivalently get ⟨v_p²⟩ = (1/(2M)) ∫_0^{ω_d} (ħω/2) coth(ħω/(2kT)) g_p(ω) dω. Frozen phonon calculation To estimate the phonon DOS, we calculated the graphene phonon band structure via the dynamical matrix, which was computed by displacing each of the two primitive cell atoms by a small displacement (0.06 Å) and calculating the forces on all other atoms in a 7 × 7 supercell ('frozen phonon method'; the cell size is large enough so that the forces on the atoms at its edges are negligible) using DFT as implemented in the grid-based projector-augmented wave code (GPAW) package 17 . Exchange and correlation were described by the local density approximation 41 , and a Γ-centered Monkhorst-Pack k-point mesh of 42 × 42 × 1 was used to sample the Brillouin zone. A fine computational grid spacing of 0.14 Å was used alongside strict convergence criteria for the structural relaxation (forces <10⁻⁵ eV Å⁻¹ per atom) and the self-consistency cycle (change in eigenstates <10⁻¹³ eV² per electron). The resulting phonon dispersion ( Supplementary Fig. 1 ) describes well the quadratic dispersion of the ZA mode near Γ, and is in excellent agreement with earlier studies 18 , 19 . Supplementary Data 1 contains the out-of-plane phonon DOS. Graphene synthesis and transfer In addition to commercial monolayer graphene (Graphenea QUANTIFOIL R 2/4), our graphene samples were synthesized by CVD in a furnace equipped with two separate gas inlets that allow for independent control over the two isotope precursors 29 (that is, either ∼99% 12CH4 or ∼99% 13CH4 methane). The as-received 25 μm thick 99.999% pure Cu foil was annealed for ∼1 h at 960 °C in a 1:20 hydrogen/argon mixture at a pressure of ∼10 mbar. The growth of graphene was achieved by flowing 50 cm³ min⁻¹ of CH4 over the annealed substrate while keeping the Ar/H2 flow, temperature and pressure constant. For the isotopically mixed sample with separated domains, the annealing and growth temperature was increased to 1,045 °C and the flow rate decreased to 2 cm³ min⁻¹. After introducing 12CH4 for 2 min, the carbon precursor flow was stopped for 10 s, and the other isotope precursor subsequently introduced into the chamber for another 2 min. This procedure was repeated with a flow time of 1 min. After the growth, the CH4 flow was interrupted and the heating turned off, while the Ar/H2 flow was kept unchanged until the substrate reached room temperature. The graphene was subsequently transferred onto a holey amorphous carbon film supported by a TEM grid using a direct, polymer-free transfer method 42 .
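The mass scaling invoked above (every band scaling with the square root of the mass ratio) can be illustrated with a toy frozen-phonon calculation on a one-dimensional diatomic chain; the assumed spring constant stands in for the DFT forces, so only the ratio between the isotopes is meaningful.

```python
import numpy as np

# Toy frozen-phonon demo on a 1D diatomic chain: an assumed spring constant
# stands in for DFT forces. Diagonalizing the mass-weighted dynamical matrix
# gives the phonon branches, and every frequency scales as 1/sqrt(M).
K = 40.0                      # N/m, placeholder force constant
AMU = 1.66053906660e-27

def branches(m1, m2, q):
    d = np.array([[2.0 * K / m1, -K * (1 + np.exp(-1j * q)) / np.sqrt(m1 * m2)],
                  [-K * (1 + np.exp(1j * q)) / np.sqrt(m1 * m2), 2.0 * K / m2]])
    return np.sqrt(np.abs(np.linalg.eigvalsh(d)))   # rad/s

q = np.pi / 2.0
print(branches(13.003355 * AMU, 13.003355 * AMU, q) /
      branches(12.0 * AMU, 12.0 * AMU, q))          # ~sqrt(12/13) ~ 0.961
```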
Scanning transmission electron microscopy Electron microscopy experiments were conducted using a Nion UltraSTEM100 scanning transmission electron microscope, operated between 80 and 100 kV in near-ultrahigh vacuum (2 × 10⁻⁷ Pa). The instrument was aligned for each voltage so that atomic resolution was achieved in all of the experiments. The beam current during the experiments varied between 8 and 80 pA depending on the voltage, corresponding to dose rates of ∼5–50 × 10⁷ e⁻ Å⁻² s⁻¹. The beam convergence semiangle was 30 mrad and the semi-angular range of the medium-angle annular-dark-field detector was 60–200 mrad. Poisson analysis Assuming the displacement data are stochastic, the waiting times (or, equivalently, the doses) should arise from a Poisson process with mean λ. Thus the probability to find k events in a given time interval follows the Poisson distribution P(k) = λ^k e^{−λ}/k!. To estimate the Poisson expectation value for each sample and voltage, the cumulative doses of each data set were divided into bins of width w (using one-level recursive approximate Wand binning 43 ), and the number of bins with 0, 1, 2... occurrences was counted. The goodness of the fits was estimated by calculating the Cash C-statistic 44 (in the asymptotically-χ² formulation 45 ) between a fitted Poisson distribution and the data, C = 2 Σ_i [e_i − n_i + n_i ln(n_i/e_i)], where N is the number of occurrence bins, n_i is the number of events in bin i, and e_i is the expected number of events in bin i from a Poisson process with mean λ. An error estimate for the mean was calculated using the approximate confidence interval proposed for Poisson processes with small means and small sample sizes by Khamkong 46 , in which λ̂ is the estimated mean and Z_2.5 = 1.96 is the normal distribution single-tail value corresponding to a confidence level of (100−α) = 95%. The statistical analyses were conducted using the Wolfram Mathematica software (version 10.5), and the Mathematica notebook is included as Supplementary Data 2 . Outputs of the Poisson analyses for the main data sets of normal and heavy graphene as a function of voltage are additionally shown as Supplementary Fig. 2 . Displacement cross section The energy transferred to an atomic nucleus from a fast electron as a function of the electron scattering angle θ is E(θ) = E_max sin²(θ/2) (ref. 47 ), which is valid also for a moving target nucleus for electron energies >10 keV, as noted by Meyer and co-workers 11 . For purely elastic collisions (where the total kinetic energy is conserved), the maximum transferred energy E_max corresponds to electron backscattering, that is, θ = π. However, when the impacted atom is moving, E_max will also depend on its speed. To calculate the cross section, we use the approximation of McKinley and Feshbach 48 of the original series solution of Mott to the Dirac equation, which is very accurate for low-Z elements and sub-MeV beams.
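A sketch of the occupancy-histogram goodness-of-fit step described above follows: the Cash statistic in its asymptotically-χ² form, with synthetic event bins standing in for the binned displacement doses.

```python
import numpy as np
from scipy.stats import poisson

def cash_c(n_obs, lam, n_bins):
    """Cash C-statistic (asymptotically chi^2 form) between an occupancy
    histogram n_obs[i] = number of dose bins holding i events and a Poisson
    model with per-bin mean lam."""
    i = np.arange(n_obs.size)
    e = n_bins * poisson.pmf(i, lam)
    safe = np.where(n_obs > 0, n_obs, 1)          # avoid log(0) for empty rows
    term = np.where(n_obs > 0, n_obs * np.log(safe / e), 0.0)
    return 2.0 * np.sum(e - n_obs + term)

# Synthetic example: 40 displacement events spread over 25 equal dose bins.
rng = np.random.default_rng(2)
per_bin = np.bincount(rng.integers(0, 25, size=40), minlength=25)
n_obs = np.bincount(per_bin)                      # bins with 0, 1, 2... events
print(cash_c(n_obs, lam=40 / 25, n_bins=25))
```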
This gives the cross section as a function of the electron scattering angle as σ(θ) = σ_R(θ) [1 − β² sin²(θ/2) + π Z α β sin(θ/2) (1 − sin(θ/2))], where β = v/c is the ratio of electron speed to the speed of light (0.446225 for 60 keV electrons), α is the fine-structure constant, and σ_R is the classical Rutherford scattering cross section σ_R(θ) = [Z e²/(2pv)]² / sin⁴(θ/2), with p the electron momentum. Using equation 14 this can be rewritten as a function of the transferred energy 49 as σ(E) = σ_R(E) [1 − β² E/E_max + π Z α β (√(E/E_max) − E/E_max)]. Distribution of atomic vibrations The maximum energy (in eV) that an electron with mass m_e and energy E_e = eU (corresponding to acceleration voltage U) can transfer to a nucleus of mass M that is moving with velocity v is given by equation 18, where Ẽ_e and Ẽ_n are the relativistic energies of the electron and the nucleus, and E_n = Mv²/2 the initial kinetic energy of the nucleus in the direction of the electron beam. The probability distribution of velocities of the target atoms in the direction parallel to the electron beam follows the normal distribution P(v_z, T) = (2π⟨v_z²⟩)^{−1/2} exp[−v_z²/(2⟨v_z²⟩)], with a variance equal to the temperature-dependent mean square velocity ⟨v_z²⟩. Total cross section with vibrations The cross section is calculated by numerically integrating equation 17 multiplied by the Gaussian velocity distribution (equation 19) over all velocities v where the maximum transferred energy (equation 18) exceeds the displacement threshold energy T_d: σ_d(T, E_e) = ∫ σ[E_max(v, E_e)] Θ[E_max(v, E_e) − T_d] P(v, T) dv, where E_max(v, E_e) is given by equation 18, the term Θ[E_max(v, E_e) − T_d] is the Heaviside step function, T is the temperature and E_e is the electron kinetic energy. The upper limit for the numerical integration, v_max = 8√⟨v_z²⟩, was chosen so that the velocity distribution is fully sampled. Displacement threshold simulation For estimating the displacement threshold energy, we used DFT molecular dynamics as established in our previous studies 12 , 13 , 50 , 51 . The threshold was obtained by increasing the initial kinetic energy of a target atom until it escaped the structure during the molecular dynamics run. The calculations were performed using the grid-based projector-augmented wave code ( GPAW ), with the computational grid spacing set to 0.18 Å. The molecular dynamics calculations employed a double zeta linear combination of atomic orbitals basis 52 for a 8 × 6 unit cell of 96 atoms, with a 5 × 5 × 1 Monkhorst-Pack k-point grid 53 used to sample the Brillouin zone. A timestep of 0.1 fs was used for the Velocity-Verlet dynamics 54 , and the velocities of the atoms were initialized by a Maxwell–Boltzmann distribution at 50 K and equilibrated for 20 timesteps before the simulated impact. To describe exchange and correlation, we used the local density approximation 41 and the Perdew-Burke-Ernzerhof (PBE) 55 , Perdew-Wang 1991 (PW91, ref. 41 ), RPBE 56 and revPBE 57 functionals, yielding displacement threshold energies of 23.13, 21.88, 21.87, 21.63 and 21.44 eV, respectively (each value is the mean of the highest simulated kinetic energy that did not lead to an ejection and the lowest that did). Additionally, we tested the C09 (ref. 21 ) functional to see whether inclusion of the van der Waals interaction would affect the results. This does bring the calculated threshold energy down to [21.25, 21.375] eV, in better agreement with the experimental fit. However, a more precise algorithm for the numerical integration of the equations of motion, more advanced theoretical models for the interaction, or time-dependent DFT may be required to improve the accuracy of the simulations further.
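The threshold search itself reduces to bracketing and bisection. The sketch below captures that logic with a placeholder ejected() function standing in for a full DFT-MD impact run (GPAW in the paper); the 0.125 eV tolerance mirrors the spacing of the quoted [21.25, 21.375] eV bracket, and the returned midpoint matches how the quoted values were defined.

```python
# Bracket-and-bisect sketch of the displacement threshold search; ejected()
# is a placeholder for a full DFT-MD impact simulation.
def find_threshold(ejected, lo=15.0, hi=30.0, tol=0.125):
    assert not ejected(lo) and ejected(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ejected(mid):
            hi = mid      # atom escaped: threshold is at or below mid
        else:
            lo = mid      # atom stayed: threshold is above mid
    return 0.5 * (lo + hi)

# Toy stand-in with a sharp 21.3 eV threshold:
print(find_threshold(lambda e_kin: e_kin > 21.3))
```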
Varying mean square velocity with concentration Since the phonon dispersion of isotopically mixed graphene gives a slightly different out-of-plane mean square velocity for the atomic vibrations, for calculating the cross section for each concentration, we assumed the velocity of mixed concentration areas to be linearly proportional to the concentration, v(c) = c v_12 + (1 − c) v_13, where c is the concentration of 12C and v_12 and v_13 are the atomic velocities for normal and heavy graphene, respectively. Raman spectroscopy A Raman spectrometer (NT MDT Ntegra Spectra) equipped with a 532 nm excitation laser was used for Raman measurements. A computer-controlled stage allowed recording a Raman spectrum map over the precise hole on which the electron microscopy measurements were conducted, which was clearly identifiable from neighboring spot contamination and broken film holes. The frequencies ω of the optical phonon modes vary with the atomic mass M as ω ∝ M^{−1/2} due to the mass prefactor of the dynamical matrix. This makes the Raman shifts of 13C graphene (12/13)^{−1/2} times smaller, allowing the mapping and localization of 12C and 13C domains 28 with a spatial resolution limited by the size of the laser spot (nominally ∼400 nm). The shifts of the G and 2D bands compared with a corresponding normal graphene sample are given by Δω = ω_12 [1 − √((12 + c_0^13)/(13 − c))], where ω_12 is the G (2D) line frequency of the normal sample, c_0^13 = 0.01109 is the natural abundance of 13C, and c is the unknown concentration of 12C in the measured spot. Due to background signal arising from the carbon support film of the TEM grid, we analyzed the shift of the 2D band, where two peaks were present in the spectrum at most locations. However, in many spectra these did not correspond to either fully 12C or 13C graphene 58 , indicating isotope mixing within the Raman coherence length. To assign a single value to the 12C concentration for the overlay of Fig. 3 , we took into account both the shifts of the peaks (to estimate the nominal concentration for each signal) and their areas (to estimate their relative abundances) as follows: c = (A c_A + B c_B)/(A + B), where c_A and c_B are the nominal concentrations of 12C determined from the measured higher and lower 2D Raman shift peak positions, ω_A/B are the measured peak centers of the higher and lower 2D signals, and A and B are their integrated intensities. The peak positions of fully 12C and 13C graphene were taken from the highest and lowest peak positions in the entire mapped area (covering several dozen Quantifoil holes), giving ω_12 = 2,690 cm⁻¹ and ω_13 = 2,600 cm⁻¹. The fitted 2D spectra, arranged in the same 6 × 6 grid as the overlay, can be found as Supplementary Fig. 3 . Data availability The full STEM time series data on which the determination of the 12C and 13C displacement cross sections ( Fig. 2 ) are based are available on figshare with the identifier given in ref. 20 . The STEM data of Fig. 3 are available upon request. All other data are contained within the article and its Supplementary Information files. Additional information How to cite this article: Susi, T. et al . Isotope analysis in the transmission electron microscope. Nat. Commun. 7, 13040 doi: 10.1038/ncomms13040 (2016). Change history 30 August 2017: A correction has been published and is appended to both the HTML and PDF versions of this paper. The error has not been fixed in the paper.
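To illustrate the two-peak weighting used for the Raman overlay, the sketch below linearly interpolates the 2D-band position between the pure-isotope endpoints (2,690 and 2,600 cm⁻¹). The paper's own assignment uses the square-root mass relation; the linear form is only a convenient approximation over this narrow window.

```python
import numpy as np

W12, W13 = 2690.0, 2600.0   # 2D-band endpoints of pure 12C / 13C (cm^-1)

def nominal_c12(omega):
    """12C fraction from a 2D peak position by linear interpolation between
    the pure-isotope endpoints (an approximation of the sqrt-mass relation,
    adequate over this 90 cm^-1 window)."""
    return float(np.clip((omega - W13) / (W12 - W13), 0.0, 1.0))

def area_weighted_c12(omega_a, area_a, omega_b, area_b):
    """Blend the two fitted 2D peaks by their integrated intensities."""
    ca, cb = nominal_c12(omega_a), nominal_c12(omega_b)
    return (area_a * ca + area_b * cb) / (area_a + area_b)

print(area_weighted_c12(2672.0, 3.0, 2612.0, 1.0))   # mostly-12C spot
```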
The different elements found in nature each have their distinct isotopes. For carbon, there are 99 atoms of the lighter stable carbon isotope 12C for each 13C atom, which has one more neutron in its nucleus. Apart from this natural variation, materials can be grown from isotope-enriched chemicals. This allows scientists to study how the atoms arrange into solids, for example to improve their synthesis. Yet most traditional techniques for measuring the isotope ratio require the decomposition of the material or are limited to a resolution of hundreds of nanometers, obscuring important details. In the new study, led by Jani Kotakoski, the University of Vienna researchers used the advanced scanning transmission electron microscope Nion UltraSTEM100 to measure isotopes in nanometer-sized areas of a graphene sample. The same energetic electrons that form an image of the graphene structure can also eject one atom at a time by scattering at a carbon nucleus. Because 12C is lighter than the 13C isotope, an electron kick transfers slightly more energy to it, knocking it out more easily. The number of electrons required on average to eject an atom thus gives an estimate of the local isotope concentration. "The key to making this work was combining accurate experiments with an improved theoretical model of the process", says Toma Susi, the lead author of the study. Publishing in Nature Communications allowed the team to fully embrace open science. In addition to releasing the peer review reports alongside the article, a comprehensive description of the methods and analyses is included. However, the researchers went one step further and uploaded their microscopy data onto the open repository figshare. Anyone with an Internet connection can thus freely access, use and cite the gigabytes of high-quality images. Toma Susi continues: "To our knowledge, this is the first time electron microscopy data have been openly shared at this scale." The results show that atomic-resolution electron microscopes can distinguish between different isotopes of carbon. Although the method was demonstrated only for graphene, it can in principle be extended to other two-dimensional materials, and the researchers have a patent pending on this invention. "Modern microscopes already allow us to resolve all atomic distances in solids and to see which chemical elements compose them. Now we can add isotopes to the list", Jani Kotakoski concludes. The lighter the atom, the fewer electrons are on average needed to eject it. Credit: Koponen+Hildén, Creative Commons BY 4.0
10.1038/ncomms13040
Other
Researchers unearth largest Silurian vertebrate to date—meter long Megamastax
Scientific Reports 4, Article number: 5242 DOI: 10.1038/srep05242 Press release: phys.org/wire-news/164107588/s … ent-fossil-fish.html Journal information: Scientific Reports
http://dx.doi.org/10.1038/srep05242
https://phys.org/news/2014-06-unearth-largest-silurian-vertebrate-datemeter.html
Abstract An apparent absence of Silurian fishes more than half-a-metre in length has been viewed as evidence that gnathostomes were restricted in size and diversity prior to the Devonian. Here we describe the largest pre-Devonian vertebrate ( Megamastax amblyodus gen. et sp. nov.), a predatory marine osteichthyan from the Silurian Kuanti Formation (late Ludlow, ~423 million years ago) of Yunnan, China, with an estimated length of about 1 meter. The unusual dentition of the new form suggests a durophagous diet which, combined with its large size, indicates a considerable degree of trophic specialisation among early osteichthyans. The lack of large Silurian vertebrates has recently been used as a constraint in palaeoatmospheric modelling, with purported lower oxygen levels imposing a physiological size limit. Regardless of the exact causal relationship between oxygen availability and evolutionary success, this finding refutes the assumption that pre-Emsian vertebrates were restricted to small body sizes. Introduction The Devonian Period has been considered to mark a major transition in the size and diversity of early gnathostomes (jawed vertebrates), including the earliest appearance of large vertebrate predators 1 . In contrast to the rich Devonian fossil record, gnathostomes from earlier strata have long been represented by scarce and highly fragmentary remains 2 . Traditional depictions of Silurian marine faunas typically either lack fish altogether 3 or are dominated by diminutive jawless forms 4 . In addition to this apparent low diversity, the maximum size of pre-Devonian gnathostomes, and vertebrates in general, has been noted as being considerably smaller than in later periods 1 . Until recently, the largest known Silurian gnathostomes were the osteichthyan Guiyu 5 and the antiarch placoderm Silurolepis 6 from the Ludlow Kuanti Formation of Yunnan, both with total body lengths of roughly 35 cm. Beyond the Silurian, the Ordovician agnathan Sacabambaspis from Bolivia is of comparable size 7 . The absence of pre-Devonian gnathostomes more than a few tens of centimeters in length, coupled with an apparent increase in size and diversity in the Early Devonian, has led to suggestions that jawed vertebrates were minor components of aquatic faunas prior to the Emsian 1 , 8 . Such an extended period of time with no apparent increase in body size is striking, given that the gnathostome fossil record may extend as far back as the Ordovician 9 , 10 . Recent discoveries reveal that Silurian gnathostomes were far more diverse and widely distributed than previously recognized 10 , 11 . Of particular importance is the Xiaoxiang fauna of Yunnan Province, southwestern China, based on fossils from a series of marine sediments of which the Kuanti Formation is by far the most productive 12 , 13 . This unit has produced a diverse assemblage of early fishes, including the only articulated specimens of pre-Devonian gnathostomes. Here we present a bony fish from the Kuanti Formation ( Fig. 1 ) with an estimated length of about 1 meter, revealing that pre-Devonian gnathostomes could attain comparatively large sizes. The likely specialized predatory feeding habits of this form and its anatomical disparity relative to other early osteichthyans reinforce earlier indications of a significant degree of morphological and ecological diversity among gnathostomes well before the Devonian 10 , 14 . Figure 1 Silurian sequence in Qujing (Yunnan, China) with stratigraphic position of Megamastax amblyodus gen. et sp. nov.
and other vertebrate taxa (modified from ref. 5 , using Adobe Illustrator 10). Full size image The apparent small size and limited diversity of Silurian gnathostomes has recently been employed as a constraint in paleoatmospheric reconstruction 1 , 8 . Models of atmospheric history based on geochemical data indicate a mid-Palaeozoic episode of global oceanic oxygenation, likely linked to the formation of a global terrestrial vascular flora and the concurrent widespread burial of organic matter 15 , 16 and roughly coinciding with the appearance of large gnathostomes in the fossil record. Our new finding refutes suggestions that there were significant environmental constraints to vertebrate body size prior to the Emsian (~400 Ma). Results Systematic palaeontology Gnathostomata, Gegenbaur, 1874 Osteichthyes, Huxley, 1880 Sarcopterygii, Romer, 1955 Megamastax amblyodus gen. et sp. nov. Etymology Genus named from megalos and mastax (Greek), meaning “big mouth”. The specific epithet is derived from amblys and odous (Greek) meaning “blunt tooth”. Holotype Institute of Vertebrate Paleontology and Paleoanthropology (IVPP) V18499.1, complete left mandible. Referred material IVPP V18499.2, partial left mandible; IVPP V18499.3, right maxilla. Type locality and horizon The Kuanti Formation, at a hill close to the Xiaoxiang Reservoir, Qujing, Yunnan, southwestern China ( Fig. 1 ), dating to the late Ludlow (Ludfordian Stage) 11 , 12 , 13 , with a youngest age of ~423 million years ago 17 . The fossils were collected from a horizon immediately below the first appearance of the conodont Ozarkodina crispa . Other fishes from this horizon include the galeaspid Dunyu 18 , the remarkable placoderm Entelognathus 19 and the osteichthyan Guiyu 5 , 20 . Diagnosis Osteichthyan with multiple rows of closely packed conical teeth on the marginal jaw bones and widely spaced pairs of blunt teeth fused to each of the four coronoids. Coronoids fused to the lingual face of the mandible with the posterior three flanked by an elongate anterior ramus of the prearticular. Outer surfaces of the mandible and maxilla covered in cosmine with numerous embedded pores. Description The external faces of the mandible ( Fig. 2A, F ) and maxilla ( Fig. 2I ) have a cosmine surface with numerous pores, as in Achoania and Psarolepis 21 . The mandible is long and low in overall shape, tapering anteriorly as in some Devonian limbed tetrapods 22 . It is gently convex in longitudinal and vertical axes, with slight medial curvature in dorsal view suggesting a narrow tapering snout. The sutured margins of the dermal bones are not clearly visible, although a small notch on the anteroventral jaw margin likely marks the posteromedial boundary of the splenial as in Achoania and Psarolepis 21 . There is a shallow semi-lunate overlap area for the maxilla and quadratojugal, while a horizontal pit-line runs almost end to end in the upper portion of the mandible. Internally, a narrow flange runs along the dorsal margin of the dentary, bearing at least two longitudinal rows of conical, slender teeth ( Fig. 2E ). All marginal teeth on the holotype are of roughly uniform height, but those on the inner-most row are broader and more sparsely arranged. The teeth extend almost to the tip of the jaw, well past the level of the parasymphysial articulation. On V18499.2 the marginal dentition is reduced to weathered stumps and empty tooth sockets. It is unclear if this feature is pre- or post-mortem. Figure 2 Fossils of Megamastax amblyodus gen. et sp. nov. 
(A–E) Holotype mandible (IVPP V18499.1) in (A) lateral, (B) lingual and (C) dorsal views; close-up of prearticular bone, showing surface ridges (D) and close-up of the marginal dentition in lingual view (E). (F–H) Partial mandible (V18499.2) in (F) lateral, (G) lingual and (H) dorsal views. (I) Right maxilla (V18499.3) in lateral view. (J) Reconstruction of (J1) Guiyu oneiros (ref. 13 ) alongside hypothetical silhouettes of (J2–3) Megamastax with superimposed fossil outlines (drawn by B.C.). The (J2) smaller fish is based on V18499.1 and V18499.3, the (J3) larger on V18499.2. ar.psym, knob-like parasymphysial structure; Co 1–4, coronoids 1–4; coT 1–8, coronoid teeth 1–8; De, dentary; fo.add, adductor fossa; fo.gl, glenoid fossa; fo.Mk, Meckelian foramen; Id, infradentary; mpl, mandibular pit line; maT, marginal teeth; oaMx, overlap area for maxilla and quadratojugal; Pat, prearticular; sym, area for parasymphysial plate; tr, indented track bordering splenial. Full size image Antero-medially there is a knob-like articular structure and symphysial overlap area for a small parasymphysial dental plate. The knob is not as strongly developed as in Psarolepis , Achoania 21 or Guiyu 5 and is concealed by the dentary in lateral view. The large prearticular is devoid of denticles, but is covered in numerous parallel ridges ( Fig. 2D ) as in Styloichthys 21 . The broad posterior section covers the dorsal and medial face of the Meckelian ossification near the adductor fossa, terminating posteriorly just behind the level of the glenoid fossa. Anteriorly, it narrows to an elongate ramus, mesially flanking the coronoid series to terminate against the posteromedial margin of the 1 st coronoid. The Meckelian cartilage is ossified for most of its length, although a large oval cavity anteroventral of the adductor fossa may indicate a region of incomplete ossification. The Meckelian bone extends ventrally beyond the prearticular with a series of small fenestrae piercing the posteroventral margin. Posteriorly, it contributes to the rim of the adductor fossa and a small bipartite glenoid fossa. It anteriorly tapers to a narrow shelf that is fused to the knob-like parasymphysial area and the anterior tip of the prearticular. The four coronoids are smooth save for a row of widely-spaced blunt, semi-circular teeth, with two on each coronoid ( Fig. 2B, C, G, H and Fig. 3F, G ). The dentition is ankylosed to a continuous median ridge, with no sockets. Tooth surfaces are smooth and lack infolding, with weathered sections on V18499.2 exposing the pulp cavity. Figure 3 Lingual views of mandibles from selected pre-Emsian osteichthyans. Except for Megamastax , all are from the Lochkovian Xitun Formation, Qujing, eastern Yunnan, China. (A) Psarolepis romeri , IVPP V8138 (reversed). (B) Achoania jarviki , IVPP V12492.1 (reversed). (C) Jaw tentatively assigned to Meemannia eos , IVPP V14536.5. (D) Styloichthys changae , IVPP V8143.1. (E) Partial dentary of an indeterminable osteichthyan, IVPP V12493 (reversed). (F) Megamastax amblyodus , IVPP V18499.1 (holotype), dark grey = matrix-filled areas. (G) Megamastax amblyodus , IVPP V18499.2 with restored silhouette in black. (A–G) drawn by B.C. ar.psym, knob-like parasymphysial articular structure; ar.Co1–4, articulation for coronoid 1–4; Co1–5, coronoid 1–5; coT1–8, 1 st –8 th coronoid tooth; fo.add, adductor fossa; fo.gl, glenoid fossa; Pat, Prearticular; sym, area for parasymphysial tooth plate. Scale bars = 5 mm. Full size image The 9.5 cm long maxilla (V18499.3, Fig.
2I ) represents an individual of similar size to the holotype. It has identical ornamentation and corresponding contours of the occlusal margins. The biting margin is straight with no posteroventral flexion. In overall shape, the maxilla is most suggestive of porolepiforms 23 in lacking a posterior expansion that is known in actinopterygians, onychodonts and stem sarcopterygians 5 . Multiple rows of closely packed conical teeth are arranged over the entire ventral margin. Comparisons In possessing true marginal teeth, cosmine, coronoids, prearticular and a biconcave glenoid, Megamastax is unambiguously an osteichthyan. The presence of cosmine, the shape of the maxilla and the configuration of the prearticular relative to the coronoids indicate sarcopterygian affinities. Porous cosmine is found in many crown sarcopterygians 23 , 24 as well as Psarolepis and Achoania , taxa that are usually resolved as stem sarcopterygians 25 , 26 , 27 , although a stem-osteichthyan position is also suggested 20 , 27 , 28 . While not universally distributed among sarcopterygians 24 , cosmine is unknown in actinopterygians. The maxilla lacks the pronounced posteroventral curvature and posterior expansion of Guiyu 5 , Psarolepis , onychodonts 29 , 30 and early actinopterygians 31 , 32 , 33 , 34 and in this respect is more similar to porolepiforms 35 . As in early sarcopterygians, the prearticular extends anteriorly to mesially flank the coronoids 5 , 21 , 36 , differing from the condition in primitive actinopterygians where the prearticular sutures against the posterior margin of the most posterior coronoid 31 , 37 . The dentition is highly unusual. As in crown osteichthyans, the marginal teeth are discrete structures unlike the enlarged denticles of Lophosteus and Andreolepis 38 . However the marginal dentition of most early tooth-bearing osteichthyans is segregated into a single inner row of large conical teeth bordered laterally by sharpened denticles 5 , 21 , 29 , 31 , 32 , 33 , 34 . The dentary and the maxilla of Megamastax exhibit at least two parallel rows of sharp conical teeth of roughly uniform length. The 4-bone coronoid series of Megamastax , with large blunt teeth fused to the dermal surface, is unlike that of other osteichthyans where the teeth, if present, are discrete structures demarcated at the base from the adjacent bone. Psarolepis 21 and Guiyu 5 have five coronoids per jaw, each with sharp fangs housed in semi-lunate sockets. Those of early actinopterygians 31 , actinistians and onychodonts 29 possess numerous minute teeth or denticles. Porolepiforms and early tetrapodomorphs have a 3-coronoid series, bearing sharp tusks with infolded surfaces and an additional row of small denticles 35 . Dipnoans lack discrete coronoids. The coronoids of Megamastax share a striking similarity to the dentigerous jaw bones of some acanthodians, notably the Ischnacanthiformes and Acanthodopsis 39 , 40 and to a lesser extent, the infragnathals of certain arthrodires with purported teeth 41 . As the coronoids of unambiguous stem-osteichthyans are unknown, it is unclear if this is a convergence with non-osteichthyans, or is instead a plesiomorphic relict. Examining purported ischnacanthiform jaw fragments in museum collections may yield additional early osteichthyan coronoids. A previously described 6 cm long section of a dentary (V12493, Fig. 3E ) from the Lochkovian Xitun Formation, Yunnan, is superficially similar to Megamastax in its large size, ornamentation and prominent marginal tooth-bearing flange 21 . 
It differs in the greater degree of anterodorsal curvature and in bearing only a single row of conical marginal teeth. The unpreserved coronoids were evidently not fused to the dentary. Regardless of its relationships, the specimen provides additional evidence of large osteichthyans well before the Emsian. Discussion The size of Megamastax To determine the maximum size of Megamastax ( Fig. 2J ), the total length of the large but incomplete V18499.2 was extrapolated based on the complete holotype jaw, using the distance between the 2 nd and 8 th coronoid teeth as landmarks. Fusion of the dermal bones suggests that both represent adult or near-adult specimens despite the roughly 35% difference in size. V18499.2 has a preserved length of 109 mm, missing most of the posterior section, including the adductor fossa and the front of the jaw anterior to the second coronoid tooth. The apices of the 2 nd and 8 th coronoid teeth are 70 mm apart. V18499.1 has a total mandibular length of 129 mm with a 52 mm distance between the 2 nd –8 th coronoid teeth. V18499.2 is thus calculated to be 1.346 times longer than the holotype, with a restored total length of 173.65 mm ( Fig. 3G ). While errors in scaling due to ontogenetic or individual variation cannot be ruled out, mandibles of Achoania and Psarolepis from the Lower Devonian Xitun Formation exhibit an even greater degree of relative size differences; the jaws of Achoania range from 32.5 to 72 mm in length 21 . While larger specimens exhibit a proportionally greater depth, due primarily to a deepening of the infradentaries, they do not exhibit consistent differences in the relative anteroposterior proportions of the glenoid fossa, adductor fossa and coronoid series, regardless of the size of the mandible 21 . To provide an estimate for the total body length of Megamastax , comparisons were made with more completely known Siluro-Devonian osteichthyans ( Fig. 2J ). Calculations based on isolated jaws must be tentative as relative mandible-to-body size is subject to individual and ontogenetic variation. Guiyu is currently the only Silurian osteichthyan known from reasonably complete remains, with the holotype (V15541) measuring about 260 mm from snout-to-anal fin for a likely total length of roughly 350 mm ( Fig. 2J1 ); the lower jaw accounts for about 1/7 th of that length 20 . Excluding tetrapods, Devonian osteichthyans, both sarcopterygians and actinopterygians, share a conservative fusiform anatomy with no unusually elongate or truncated body configurations. This includes Dialipina , an Early Devonian taxon usually resolved as a stem-osteichthyan in recent analyses 19 , 25 and thus likely a more basal taxon than Megamastax , suggesting a fusiform body via phylogenetic bracketing. Mandibular lengths in Devonian bony fishes generally account for between 1/5 th of body length in forms like Strunius 30 and Miguashaia 42 and 1/7 th in more elongate taxa like Howqualepis 34 and Gogosardina 32 . Extrapolating from this provides estimates of between 645 and 903 mm for V18499.1 with a 129 mm jaw and between 868 and 1215 mm for V18499.2 with a 173.65 mm jaw ( Fig. 2J2–3 ); this arithmetic is reproduced in the short sketch below. The earliest durophagous predatory osteichthyan? The coronoid teeth ( Fig. 2B, C, G, H ) differ from the sharp tusks of other Silurian bony fishes from the South China block, notably Guiyu 5 and Psarolepis 21 , 27 .
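The scaling arithmetic above (landmark ratio, restored jaw length, and the 1/5 to 1/7 mandible-to-body brackets) is simple enough to verify directly; the snippet below just reproduces those numbers.

```python
# Reproducing the scaling arithmetic from the text: landmark-based jaw
# restoration for V18499.2 and body lengths from 1/5 to 1/7 jaw-to-body
# ratios of Devonian osteichthyans (values rounded at print time).
holotype_jaw = 129.0    # mm, complete mandible V18499.1
span_holotype = 52.0    # mm between the 2nd and 8th coronoid teeth
span_v2 = 70.0          # mm, same landmarks on the partial V18499.2

scale = span_v2 / span_holotype            # ~1.346
restored_jaw = holotype_jaw * scale        # ~173.65 mm
for name, jaw in (("V18499.1", holotype_jaw), ("V18499.2", restored_jaw)):
    print(f"{name}: jaw {jaw:.2f} mm, body {5 * jaw:.0f}-{7 * jaw:.0f} mm")
```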
When coupled with the much larger size of Megamastax , this suggests widely divergent feeding strategies and alludes to a considerable degree of trophic specialisation well before the Devonian. Nothing is currently known of the palate, but the rounded coronoid dentition is suggestive of some sort of crushing role, perhaps against a complementary row on the dermopalatine. Among extant fishes, dentition combining grasping and crushing morphologies is common in durophagous predators. These target hard-shelled prey, which require processing prior to ingestion 43 . Such forms usually employ anterior conical teeth for initial prey capture before food is passed posteriorly to flattened or rounded molariform teeth. The shell-crushing dentition may be located on the marginal jaws as in hornsharks 44 and wolf-eels 43 , or set within pharyngeal batteries as in many wrasses 45 . Megamastax differs from extant forms in that the processing dentition is on the coronoids, medial to rather than posterior to the conical teeth, which are distributed throughout the jaw margins rather than anteriorly restricted. However, the contrasting tooth-form suggests a separation of activity (capture versus processing) that is broadly analogous to extant piscine durophages, possibly making it the earliest osteichthyan with specific adaptations for such a diet. The sub-tidal marine invertebrate fauna of the Ludlow of Yunnan included a rich variety of potential prey, including brachiopods, molluscs and trilobites 12 , 13 , 46 . Megamastax may have also consumed the heavily armoured fishes whose fossils are well represented in the Kuanti Formation ( Fig. 4 ), including placoderms 19 and galeaspids 18 . Given its great size, Megamastax could have potentially eaten any other animal in the assemblage and may thus represent the earliest vertebrate apex-predator. As an apparently specialised predator that differs substantially from contemporary osteichthyans, Megamastax correlates well with a documented initial increase in the functional disparity of the earliest gnathostomes which had stabilized by the Early Devonian 14 . Figure 4 Life reconstruction of Megamastax amblyodus consuming the galeaspid Dunyu longiforus (drawn by B.C.). Full size image Implications for palaeoatmospheric modelling The role of oxygen availability as a significant factor in the appearance of large animals in the mid-late Palaeozoic has been the subject of considerable scrutiny, although the exact causal relationships are ambiguous and controversial due to the likely influence of other variables such as trophic tiering and cascades, temperature and biotic interactions 47 , 48 , 49 . Recent advances in geochemistry 1 , 10 , 15 , 50 , 51 , 52 have provided a wealth of data on early Phanerozoic climate and atmospheric conditions, allowing for correlation with key biological events. Earlier attempts at palaeoatmospheric modelling suggest consistently low Silurian O2 concentrations, substantially below the current atmospheric level of 21% 53 , 54 , 55 . Of the two most recent models, GEOCARBSULF 51 , 52 is based on new isotopic data for carbon and sulphur. It indicates a gradual increase of atmospheric O2 from the end of the Ordovician with a peak exceeding modern levels towards the end of the Silurian, followed by a decrease in the Early-Middle Devonian with a low point during the Frasnian ( Fig. 5A ). This correlates with the relative abundance of charcoal during the Silurian to Permian 12 . Figure 5 Competing models of mid-Palaeozoic oxygenation from 500 Ma to 300 Ma.
Vertical blue line indicates minimum age of the Kuanti Formation and Megamastax . (A) From figure 2 in ref. 51 . Estimates of atmospheric O2 over time based on calculations from the GEOCARBSULF model (solid line = modern O2%). (B) From figure 3B in ref. 8 . Mo sediment samples with seawater (SW) values inferred from highly euxinic (red) and mildly euxinic sediments (pink). δ98Mo is a measure of the relative proportions of heavy and light Mo isotopes, with higher values implying more oxygenated oceans (δ98Mo modern SW = 2.3). Solid lines represent 90% percentiles while values above the dashed line require a substantial oxic Mo sink. Full size image An alternative model based on molybdenum (Mo) isotopes 1 , 8 ( Fig. 5B ), with initial results spanning a broad possible time range of ~430–390 Ma, suggests a peak in the later part of the Early Devonian (~400 Ma, during the Emsian Stage) based in part on calibration with the vertebrate fossil record 1 . Body size in extant predatory marine fishes has been claimed to scale positively with both oxygen demand and uptake, with vulnerability to hypoxic mortality in large predatory forms being considerably greater than in their smaller kin 1 , while fishes in general have been recorded as less tolerant of hypoxia than many marine invertebrates 56 . These observations have served as a proxy for the Emsian oxygenation scenario, with earlier limitations to oxygen availability, estimated to have been 15–50% of present atmospheric levels (PAL), imposing physiological constraints on maximal body size 8 . A date of ~400 Ma for O2 concentration attaining 40% PAL (the minimum estimated requirement for predatory fishes above 1 m) was favoured when correlated against the low maximum length of Silurian gnathostomes (no taxa more than a few tens of centimeters) and the apparent rise of large predatory fishes, with presumably greater metabolic requirements, during the Devonian 1 . Although a simple causal relationship between size and hypoxia tolerance has been challenged 49 , 57 , 58 , extant marine fishes in general are also known to be less tolerant of hypoxic conditions than many marine invertebrates 1 , 56 , 59 . This suggests that low oxygen levels would have imposed some degree of extrinsic constraint on the maximum body size and available niche opportunities of the earliest gnathostomes. Bambach 60 proposed that the emergence of large predatory fish in the Devonian was linked to the rise of a global terrestrial flora, with an expanded trophic pyramid fuelled by phosphate-laden runoff from plant-covered continental zones. However, recent palaeobotanical discoveries have brought the timing of the evolution of vascular plants into question 61 and indicate a well-established terrestrial flora by the latest Silurian 62 . Cryptospore records suggest a floristic invasion of the land as far back as the latest Ordovician 63 . As such, the benefits of terrestrial vegetation to aquatic biotas may have been active for considerably longer than initially thought, accounting for the large size of Megamastax and the rich diversity of the Xiaoxiang fauna.
While it might be argued that Megamastax , presumably a foraging predator of slow-moving or sessile shelled prey, likely had lower oxygen requirements than a fast midwater piscivore, it has been demonstrated that even relatively sedate benthic fishes in modern coastal communities exhibit high vulnerability to hypoxia 64 , 65 , whereas some modern foraging reef omnivores, such as the picasso triggerfish 66 , employ highly energetic forms of locomotion. A recent time-calibrated phylogenetic analysis of a broad sample of living actinopterygians presented a striking correlation between speciation and increases in body size 67 . Based on this result, it could be argued that the large size of Megamastax is a simple corollary of early gnathostome diversification, rather than an indicator of extrinsic environmental factors such as oxygen level. Regardless, the existence of a metre-long predatory fish in the Ludlow raises doubts about the use of restricted vertebrate body size as a proxy for low Silurian O2 levels. This discovery does not necessarily dispute the use of Mo isotopes in palaeoatmospheric reconstruction, as the ~423 Ma Kuanti Formation falls within the lower extreme of the estimated time interval of the mid-Phanerozoic peak, although it suggests that the present model requires recalibration in light of this new datum. The size of Megamastax and the emerging diversity of late Silurian gnathostomes revealed by ongoing fossil discoveries are not indicative of any significant restrictions on pre-Devonian gnathostome size and diversity. While not in themselves reliable indicators of ancient atmospheric conditions, these fossils are at least consistent with the high Silurian oxygen levels predicted by GEOCARBSULF. Given the presence of large osteichthyans in the Kuanti and Xitun formations, the purported absence of large pre-Emsian jawed fishes is best seen as a sampling artefact, at least partially due to preservational and environmental biases 68 . Methods All fossils are housed at the Institute of Vertebrate Paleontology and Paleoanthropology (IVPP), Chinese Academy of Sciences, Beijing. The blocks were collected from the Kuanti Formation (late Ludlow) in Qujing, Yunnan, China and prepared mechanically at IVPP using pneumatic air scribes and needles under microscopes. Nomenclatural acts This published work and the nomenclatural acts it contains have been registered in ZooBank, the proposed online registration system for the International Code of Zoological Nomenclature (ICZN). The ZooBank LSIDs (Life Science Identifiers) can be resolved and the associated information viewed through any standard web browser by appending the LSID to the prefix ‘ ’. The LSIDs for this publication are: urn:lsid:zoobank.org:pub:4FA91224-FF35-4DD1-9618-BA950DF073FE, urn:lsid:zoobank.org:act:6A4DA6A3-B675-4B4D-8F79-5CA5F9366327 and urn:lsid:zoobank.org:act:89729F30-1562-4FD9-9801-6604003C514B.
A team of researchers working at China's Kuanti Formation has unearthed the largest known example of a jawed vertebrate from the Silurian period, the geological period preceding the Devonian. In their paper published in Scientific Reports, the team describes the predatory fish as being approximately 1 meter long with two types of teeth, one for catching prey, the other for crushing hard shells. The discovery adds new evidence to the theory that animals with backbones and jaws first developed in what is now China, and it also disrupts current theories regarding atmospheric oxygen levels during early Earth history. The researchers believe the fish, named Megamastax amblyodus ("big mouth, blunt tooth"), lived approximately 423 million years ago—a time period that, until this new discovery, was thought to be characterized by low atmospheric oxygen levels. But a large fish such as Megamastax could not have survived under such conditions; thus, levels must have been higher. The find actually consisted of three fossils from three different fish: a complete lower jaw, a partial lower jaw, and an upper jaw fragment, all found at the Yunnan province dig site. The size of the jaws and teeth allowed the researchers to estimate that the entire fish, when alive, would have been approximately 1 meter long. The teeth in front were sharp, for grabbing, while those at the back were clearly meant for grinding, likely of hard-shelled prey. The jaw was approximately 16 cm in length. Megamastax lower jaw: Holotype mandible (IVPP V18499.1) of Megamastax amblyodus gen. et sp. nov. in lateral, lingual, and dorsal views. Credit: Min Zhu Fossils of Megamastax amblyodus gen. et sp. nov. (A–E) Holotype mandible (IVPP V18499.1) in (A) lateral, (B) lingual, and (C) dorsal views; (D) close-up of the prearticular bone, showing surface ridges, and (E) close-up of the marginal dentition in lingual view. (F–H) Partial mandible (V18499.2) in (F) lateral, (G) lingual, and (H) dorsal views. (I) Right maxilla (V18499.3) in lateral view. (J) Reconstruction of (J1) Guiyu oneiros alongside hypothetical silhouettes of (J2–J3) Megamastax with superimposed fossil outlines. The smaller fish (J2) is based on V18499.1 and V18499.3, the larger (J3) on V18499.2. Credit: Min Zhu The researchers believe the fish was likely the largest predator in its environment—about triple the size of any other known fish from that time period—making it the dominant fish in the sea. During the Silurian period, the part of China where the fish was unearthed was part of the South China Sea. Fossil finds from the region predate jawed vertebrates found anywhere else thus far, suggesting the area was the birthplace of such creatures. The researchers also believe that the reason Megamastax grew so large was intense competition between the many types of fish that existed at the time. But what made this possible was the amount of oxygen available; prior to the Silurian period, levels would have been too low. Interestingly, the most recent climate models used to depict early Earth conditions during the same period have also indicated higher atmospheric oxygen levels—this latest fossil find now backs that up.
10.1038/srep05242
Medicine
Over 20.5 million years of life may have been lost due to COVID-19
Years of life lost to COVID-19 in 81 countries, Scientific Reports (2021). DOI: 10.1038/s41598-021-83040-3 , www.nature.com/articles/s41598-021-83040-3 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-021-83040-3
https://medicalxpress.com/news/2021-02-million-years-life-lost-due.html
Abstract Understanding the mortality impact of COVID-19 requires not only counting the dead, but analyzing how premature the deaths are. We calculate years of life lost (YLL) across 81 countries due to COVID-19 attributable deaths, and also conduct an analysis based on estimated excess deaths. We find that over 20.5 million years of life have been lost to COVID-19 globally. As of January 6, 2021, YLL in heavily affected countries are 2–9 times the average seasonal influenza; three quarters of the YLL result from deaths in ages below 75 and almost a third from deaths below 55; and men have lost 45% more life years than women. The results confirm the large mortality impact of COVID-19 among the elderly. They also call for heightened awareness in devising policies that protect vulnerable demographics losing the largest number of life-years. Introduction The large direct and indirect effects of the COVID-19 pandemic have necessitated policy responses that, when well designed, strike a balance between minimizing the immediate health impact of the pandemic and containing the long-term damage to society that may arise from protective policies. A key input in determining how restrictive a policy response can be justified is the mortality impact of COVID-19. Attempts to evaluate the total mortality impact of COVID-19 are proceeding on several fronts. Progress is being made in estimating the infection fatality rate of COVID-19 and how this might vary across sub-populations 1 . Large, coordinated international collaborations have been set up to collect data that record COVID-19 attributable deaths. Attempts to estimate total excess mortality related to COVID-19 are underway, and this has been emphasized as an important measure 2 , 3 . Each of these research avenues and their associated health measures (infection rate, deaths and excess deaths) is important in informing the public and policymakers about the mortality impact of COVID-19. However, each comes with its own limitations. Infection fatality rates apply only to the relatively small sub-population that has been confirmed to have the disease, and without knowledge about the true number of infected, these rates are inherently difficult to estimate. COVID-19 attributable deaths may over- or underestimate the true number of deaths that are due to the disease, as both policies and practices for coding the deaths are still being developed and standardized. Excess death approaches that compare mortality rates during the COVID-19 outbreak to a baseline depend on correctly estimating the baseline. The most important limitation of COVID-19 attributable death or excess death approaches, however, is that they do not provide information on how many life years have been lost. Deaths at very old ages can be considered to result in fewer life years lost than deaths at very young ages. In fact, several policy responses (or non-responses) have been motivated by the argument that COVID-19 is mostly killing individuals who, even in the absence of COVID-19, would have had few life years remaining. However, a comprehensive evaluation of the true mortality impact of COVID-19 has not been conducted. We analyze the premature mortality impact of COVID-19 by calculating the amount of life-years lost across 81 countries, covering over 1,279,866 deaths.
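To make the YLL measure concrete, the following is a minimal sketch in Python of the standard calculation: each death contributes the remaining life expectancy at the age of death. The age bands, death counts, and life-table values below are invented placeholders, not the study's data.

```python
# Minimal sketch of a years-of-life-lost (YLL) calculation, assuming the
# standard definition: YLL = sum over ages of deaths(age) multiplied by the
# remaining life expectancy at that age. All numbers are placeholders.

# Hypothetical COVID-19 death counts by age band for one country
deaths_by_age = {55: 120, 65: 340, 75: 610, 85: 520}

# Hypothetical remaining life expectancy at each age (from a life table,
# e.g. the UN World Population Prospects tables the paper relies on)
remaining_life_expectancy = {55: 27.0, 65: 18.6, 75: 11.3, 85: 5.8}

def years_of_life_lost(deaths, ex):
    """Total YLL: each death contributes the remaining life expectancy
    at the age of death."""
    return sum(n * ex[age] for age, n in deaths.items())

yll = years_of_life_lost(deaths_by_age, remaining_life_expectancy)
print(f"Total YLL: {yll:,.0f}")
print(f"Average YLL per death: {yll / sum(deaths_by_age.values()):.1f}")
```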
We base our analysis on two large, recently established and continuously growing databases 4 , 5 and on two different methodological approaches: one based on COVID-19 attributable deaths and, for selected countries, one based on estimated excess deaths comparing recent mortality levels to an estimated baseline. We are not able to solve the measurement limitations of either of these approaches, but the complementary nature of the two ways of measuring COVID-19 deaths makes these concerns explicit and allows us to evaluate the implications. This study is limited to premature mortality; a full health impact evaluation might consider, for instance, the burden of disability associated with the disease 6 . This latter dimension requires a thorough understanding of the sequelae associated with COVID-19, for which data are limited at this point on a cross-country, comparable level. As such, we focus on premature mortality here. Methods Country death counts by age and gender due to COVID-19 come from the COVerAge-DB 4 ; the analysis includes all countries with at least one COVID-19 related death recorded in the COVerAge-DB 4 at the time of the study. Population data are drawn from the Human Mortality Database 5 and the World Population Prospects 7 . Country life expectancies are from the life tables in the World Population Prospects for the period 2015–2020. The dates at which data were collected, and death counts by country, are reported in the Supplementary Information materials (SI Table S1 ). Projections for the total number of deaths due to COVID-19 by country are from the Imperial College 8 . Death counts due to other causes of mortality are from data in the Global Burden of Disease 9 . Finally, we use weekly excess mortality data from the Short-Term Mortality Fluctuations Database (STMF, from the Human Mortality Database 5 ). A full description of the data, its sources, and the methodology is provided in the Supplementary Information. Results In total, 20,507,518 years of life have been lost to COVID-19 among the studied 81 countries, due to 1,279,866 deaths from the disease. The average years of life lost per death is 16 years. As countries are at different stages of the pandemic trajectory, this study is a snapshot of the impacts of COVID-19 on years of life lost (YLL) as of January 6, 2021 (a complete list of countries and their dates at measurement is in the Supplementary Information). In 35 of the countries in our sample, data coverage spans at least 9 months, suggesting that the full impact of the pandemic in 2020, or at least of its first waves, is likely captured. For other countries still on an upward incline of transmission rates, or for which data for the end of 2020 are yet forthcoming, the YLL experienced are likely to increase substantially further in the next few months. We encourage context-based interpretation of the results presented here, especially when used for evaluation of the effectiveness of COVID-19 oriented policies. Figure 1 Panels A through C report the ratio of COVID-19 YLL rates over influenza YLL rates (in median/maximum deadly years by country), traffic accidents, and heart conditions, respectively. Panel D reports, for countries with available data, the ratio of YLL rates of COVID-19 deaths over YLL rates of excess deaths. When two causes of mortality affect YLL equally, the ratio is precisely 1; larger ratio values indicate that COVID-19 YLL rates are higher than those of the alternative cause. Average ratios are shown as vertical lines in each panel.
Each country name is followed by (in parentheses) the number of days passed since the country’s first official COVID-19 case up to the last day of available COVID-19 death data for that country. Countries are always sorted by the ratio of COVID-19 YLL vs seasonal flu (in median years) across panels for ease of reading. Full size image Comparisons with other causes of mortality To put the impacts of COVID-19 on YLL in perspective, we compare them against the premature mortality impacts of three other global common causes of death: heart conditions (cardiovascular diseases), traffic accidents (transport injuries), and the seasonal “flu” or influenza (see the Supplementary Information for definitions and cause ids). Heart conditions are one of the leading causes of YLL 6 , while traffic accidents are a mid-level cause of YLL, providing sensible high and medium comparison baselines. Finally, common seasonal influenza has been compared against COVID-19, as both are infectious respiratory diseases (though see ref. 10 , which suggests vascular aspects to the disease). We compare YLL rates (per 100,000) for COVID-19 against YLL rates for other causes of death. There is substantial variation in the mortality burden of seasonal influenza by country across years, and so we compare YLL rates for the worst and median influenza years for each country in the period 1990–2017. Comparisons of YLL rates for COVID-19 over YLL rates for other causes are presented in Fig. 1 . We find that in heavily impacted, highly developed countries, the YLL rate for COVID-19 is 2–9 times that of the common seasonal influenza (as compared with a median flu year for the same country) and between 2 and 8 times traffic-related YLL rates, and amounts to between a quarter and a half of the YLL rates attributable to heart conditions (with rates reaching parity with, or up to twice, those of heart conditions in Latin America). Variation across countries is large, as many countries still have YLL rates due to COVID-19 at very low levels. Results in our Supplementary Information show that these are often countries where relatively fewer days have passed since the first confirmed case of COVID-19. A noted problem in attributing deaths to COVID-19 has been systematic undercounting, as official death counts may reflect limitations in testing as well as difficulties in counting in out-of-hospital contexts. In order to assess the importance of undercounting for our results, we compute excess deaths for 19 countries with available weekly mortality data. A mortality baseline is estimated for each country and age group from weekly all-cause mortality since the first week of 2010. Our results (Fig. 1 , fourth panel) support the claim that the true mortality burden of COVID-19 is likely to be substantially higher. Comparing the COVID-19 attributable deaths and excess deaths approaches to calculating YLL suggests that the former may, on average, underestimate YLL by a factor of 3. Variation across countries is large: in Belgium the two approaches deliver comparable results, but for Croatia, Greece and South Korea the excess deaths approach suggests that we may underestimate the YLL by a factor of more than 12.
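The excess-death logic can be illustrated with a toy computation. The paper estimates a weekly all-cause mortality baseline per country and age group from data since 2010; the sketch below uses a much cruder baseline (the mean of the same calendar week in earlier years) and invented counts.

```python
import statistics

# Hypothetical weekly all-cause death counts for one country and age group.
# Keys: (year, week). Pre-2020 years define the baseline.
weekly_deaths = {
    (2017, 14): 900, (2018, 14): 920, (2019, 14): 910,
    (2020, 14): 1500,
}

def excess_deaths(data, year, week, baseline_years):
    """Excess = observed - baseline. The baseline here is simply the mean
    of the same calendar week in earlier years; the paper fits a richer
    baseline model over weekly data since 2010."""
    baseline = statistics.mean(data[(y, week)] for y in baseline_years)
    return data[(year, week)] - baseline

print(excess_deaths(weekly_deaths, 2020, 14, [2017, 2018, 2019]))  # 590
```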
Figure 2 Panel A displays the country-specific proportions of YLL traced back to each age group. The global average proportion is presented at the top, and countries are ordered by decreasing proportion of YLL in the under-55 age bracket. Panel B reports the ratio of male YLL rates to female YLL rates for countries with available gender-specific COVID-19 death counts. Countries where the genders are equally affected lie close to the parity line at 1; countries where women are more affected have points lying to the left, and countries where men are more severely affected have points lying to the right. The global average and global weighted average of male to female YLL are presented at the top. Full size image Age specific years of life lost As was noted early in the pandemic, mortality rates for COVID-19 are higher for the elderly 11 , and it has been postulated that this may be because these individuals are more likely to suffer from underlying risk factors 12 , 13 . This study’s sample presents an average age at death of 72.9 years; yet only a fraction of the YLL can be attributed to individuals in the oldest age brackets. Globally, 44.9% of the total YLL can be attributed to the deaths of individuals between 55 and 75 years old, 30.2% to those younger than 55, and 25% to those older than 75. That is, the average figure of 16 YLL includes the years lost from individuals close to the end of their expected lives, but the majority of those years are from individuals with significant remaining life expectancy. Across countries, a substantial proportion of YLL can be traced back to the 55–75 age interval; however, there remain stark differences in the relative contribution of the oldest and youngest age groups (Fig. 2 , Panel A). These patterns account for the proportion of YLL for each age group out of the global YLL (see Table S7 ). In higher-income countries, a larger proportion of the YLL is borne by the oldest group compared to the youngest age groups. The opposite pattern appears in low- and mid-income countries, where a large fraction of the YLL comes from individuals dying at ages 55 or younger. Gender specific years of life lost It has also become apparent that there are gender disparities in the experience of COVID-19 14 ; our study finds this to be true not only in mortality rates, but in absolute years of life lost as well. In the sample of countries for which death counts by gender are available, men have lost 44% more years than women. Two causes directly affect this disparity: (1) a higher average age at death for female COVID-19 deaths (71.3 years for males, 75.9 for females), resulting in a relatively lower YLL per female death (15.7 and 15.1 YLL for males and females, respectively); and (2) more male deaths than female deaths in absolute number (a 1.39 ratio of male to female deaths). Though this general pattern is shared by most countries, the size of the disparity varies, as does the relative importance of the two causes above. The ratio of male YLL rates (per 100,000) to female YLL rates for COVID-19 spans from near parity, as in Finland or Canada, to more than double, as in Peru, or quadruple, as in Taiwan (Fig. 2 , Panel B). For countries that present highly skewed male-to-female YLL rates (most prevalent in low-income countries), the death count differences across genders contribute the most to this imbalance. Yet substantial imbalances remain starkly present among high-income countries as well (see Supplementary Information for details).
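As an aside, the male-to-female YLL rate ratio plotted in Fig. 2, Panel B reduces to a simple rate computation; a toy version with invented inputs:

```python
# Toy computation of the male-to-female YLL rate ratio (per 100,000),
# as in Fig. 2, Panel B. All inputs are invented placeholders.
yll_male, pop_male = 210_000.0, 4_800_000
yll_female, pop_female = 150_000.0, 5_000_000

rate_m = yll_male / pop_male * 100_000      # male YLL rate per 100,000
rate_f = yll_female / pop_female * 100_000  # female YLL rate per 100,000
print(f"male {rate_m:.0f}, female {rate_f:.0f}, ratio {rate_m / rate_f:.2f}")
```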
Discussion Understanding the full health impact of the COVID-19 pandemic is critical for evaluating the potential policy responses. We analyzed the mortality impact of COVID-19 by calculating the amount of life-years lost across 81 countries, covering over 1,279,866 deaths. From a public health standpoint, years of life lost is crucial in that it assesses how much life has been cut short for populations affected by the disease. We considered COVID-19 attributable deaths throughout in identifying patterns of years of life lost and, as an important robustness check, conducted an analysis based on estimated excess deaths comparing recent mortality levels to an estimated baseline. Our results deliver three key insights. First, the total years of life lost (YLL) as of January 6, 2021 is 20,507,518, which in heavily affected countries is between 2 and 9 times the median YLL of seasonal influenza, or between a quarter and a half of that of heart disease. This implies 273,947 “full lives lost” – or over two hundred thousand lives lived from birth to the average life expectancy at birth in our sample (74.85 years). Second, three quarters of the YLL are borne by people dying at ages below 75. Third, men have lost 45% more years of life than women. These results must be understood in the context of an as-of-yet ongoing pandemic and after the implementation of unprecedented policy measures. Existing estimates for the counterfactual of no policy response suggest much higher death tolls and, consequently, YLL. Our calculations based on the projections of ref. 8 yield a total impact several orders of magnitude higher, especially considering projections based on a complete absence of interventions (see Supplementary Information for details on projections). This is in line with further evidence of the life-saving impacts of lockdowns and social distancing measures 15 . There are two key sources of potential bias in our results, and these biases operate in different directions. First, COVID-19 deaths may not be accurately recorded, and most of the evidence suggests that, at the aggregate level, they may be an undercount of the total death toll. As a result, our YLL estimates may be underestimates as well. We compare our YLL estimates to estimates based on excess death approaches, which require more modeling assumptions but are robust to misclassification of deaths. The results of this comparison suggest that, on average across countries, we might underestimate COVID-19 YLL rates by a factor of 3. Second, those dying from COVID-19 may be an at-risk population whose remaining life expectancy is shorter than the average person’s remaining life expectancy 16 , 17 , 18 . This methodological concern is likely to be valid, and consequently our estimate of the total YLL due to COVID-19 may be an overestimate. However, our key results are not the total YLL but YLL ratios and YLL distributions, which are relatively robust to this co-morbidity bias. Indeed, the bias also applies to the YLL calculations for seasonal influenza or heart disease. Thus, the ratio of YLL for COVID-19 compared to other causes of death is more robust to the co-morbidity bias than the estimate of the level of YLL, as the biases are present in both the numerator and the denominator. Likewise, the age and gender distributions of YLL would suffer from serious co-morbidity bias only if these factors varied strongly across the age or gender spectrum. As noted earlier, our analysis is limited to premature mortality.
A full health impact evaluation ought to consider the burden of disability associated with the disease. Indeed, YLL are often presented jointly with years lived with disability (YLD) in a measure known as the disability-adjusted life year (DALY), constructed by adding YLD to YLL 19 . In order to compute YLD, though, we must have a thorough understanding of the sequelae associated with the disease, as well as their prevalence. Several sequelae have recently been linked to COVID-19 in China 20 , 21 , but we still lack the understanding of their extent that would be needed to compute reliable cross-national YLD measures at the scale of this article. We therefore see the collection of such measures as a key next step in advancing our understanding of the magnitude of the effects of COVID-19 on public health. Some of our findings are consistent with dominant narratives of the COVID-19 impact; others suggest areas where more nuanced policy-making could affect how the burden of COVID-19 is spread across society. Our results confirm that the mortality impact of COVID-19 is large, not only in terms of numbers of deaths, but also in terms of years of life lost. While the majority of deaths occur at ages above 75, justifying policy responses aimed at protecting these vulnerable ages, our results on the age pattern call for heightened awareness in devising policies that also protect the young. The gender differential in years of life lost arises from two components: more men are dying from COVID-19, but men are also dying at younger ages, with more potential life years lost, than women. Holding the current age distribution of deaths constant, eliminating the gender differential in YLL would require, on average, a 34% reduction in male death counts; this suggests that gender-specific policies might be as well justified as those based on age. Data availability All study code and data are fully replicable and available in the following Open Science Framework (OSF) repository: . Change history 14 April 2021 A Correction to this paper has been published:
Over 20.5 million years of life may have been lost due to COVID-19 globally, with an average of 16 years lost per death, according to a study published in Scientific Reports. Years of life lost (YLL)—the difference between an individual's age at death and their life expectancy—due to COVID-19 in heavily affected countries may be two to nine times higher than YLL due to average seasonal influenza. Héctor Pifarré i Arolas, Mikko Myrskylä and colleagues estimated YLL due to COVID-19 using data on over 1,279,866 deaths in 81 countries, as well as life expectancy data and projections for total deaths of COVID-19 by country. The authors estimate that in total, 20,507,518 years of life may have been lost due to COVID-19 in the 81 countries included in this study—16 years per individual death. Of the total YLL, 44.9% appears to have occurred in individuals between 55 and 75 years of age, 30.2% in individuals younger than 55, and 25% in those older than 75. In countries for which death counts by gender were available, YLL was 44% higher in men than in women. Compared with other global common causes of death, YLL associated with COVID-19 is two to nine times greater than YLL associated with seasonal flu, and between a quarter and a half as much as the YLL attributable to heart conditions. The authors caution that the results need to be understood in the context of an ongoing pandemic: they provide a snapshot of the possible impacts of COVID-19 on YLL as of 6 January, 2021. Estimates of YLL may be over- or under-estimates due to the difficulty of accurately recording COVID-19-related deaths.
10.1038/s41598-021-83040-3
Medicine
Blue Brain finds how neurons in the mouse neocortex form billions of synaptic connections
Michael W. Reimann et al, A null model of the mouse whole-neocortex micro-connectome, Nature Communications (2019). DOI: 10.1038/s41467-019-11630-x Journal information: Nature Communications , Cerebral Cortex
http://dx.doi.org/10.1038/s41467-019-11630-x
https://medicalxpress.com/news/2019-08-blue-brain-neurons-mouse-neocortex.html
Abstract In connectomics, the study of the network structure of connected neurons, great advances are being made on two different scales: that of macro- and meso-scale connectomics, studying the connectivity between populations of neurons, and that of micro-scale connectomics, studying connectivity between individual neurons. We combine these two complementary views of connectomics to build a first draft statistical model of the micro-connectome of a whole mouse neocortex, based on available data on region-to-region connectivity and individual whole-brain axon reconstructions. This process reveals a targeting principle that allows us to predict the innervation logic of individual axons from meso-scale data. The resulting connectome recreates biological trends of targeting on all scales and predicts that an established principle of scale-invariant topological organization of connectivity can be extended down to the level of individual neurons. It can serve as a powerful null model and as a substrate for whole-brain simulations. Introduction The study of connectomics has to date largely taken place on two separate levels with disjunct methods and results: macro-connectomics, studying the structure and strength of long-range projections between brain regions, and micro-connectomics, studying the topology of individual neuron-to-neuron connectivity within a region. In macro-connectomics, the absence or presence and strength of projections between brain regions are measured using, for example, histological pathway tracing, retrograde 1 , 2 or anterograde 3 tracers, or MR diffusion tractography 4 , 5 . While recent advances have made it possible to turn such data into connectome models with a resolution of 100 μm 6 , this is still far from single-neuron resolution. In micro-connectomics, two complementary approaches prevail: stochastic models and direct measures of synaptic connectivity using, for example, electron microscopy. The first uses biological findings to formulate principles that rule out certain classes of wiring diagrams and prescribe probabilities to the remaining ones, while with electron microscopy, snapshots of individual biological wiring diagrams are taken 7 , 8 , 9 , 10 , 11 , 12 , 13 . However, published reconstructed volumes at this point contain only incomplete dendritic trees, and therefore incomplete connectivity. To gain a full understanding of, for example, the role of an individual neuron or small groups of neurons in a given behavior, we will have to integrate the advantages of both scales: single-neuron resolution on a whole-brain, or at least whole-neocortex, level. This has been recognized before 14 , but steps toward this goal have until now remained limited. At this point, electron-microscopic reconstructions at that scale are not viable, leaving only statistical approaches to dense micro-connectivity, based on identifying biological principles in the data. Scaling up to a whole-neocortex level will amplify the uncertainty about the biological accuracy of the results, as many of the resulting connections will be between rarely studied brain regions with little available biological data. Nevertheless, the result can serve as a first draft micro-connectome defining a null model against which to compare and evaluate future findings. It will also allow us to perform full-neocortex simulations at cellular resolution, to gain insights as to which brain functions can or cannot be explained with a given connectome.
We have completed such a first-draft connectome of the mouse neocortex using an improved version of our previously published circuit and connectivity modeling pipeline 15 . The pipeline has been improved to place neurons in brain-atlas-defined 3d spaces instead of hexagonal prisms, taking into account the geometry and cellular composition of individual brain regions. However, this did not include long-range connections between brain regions, especially the ones formed via projections along the white matter. We therefore set out to identify possible principles (hypotheses of rules constraining long-range connectivity) and to develop stochastic methods to instantiate micro-connectomes fulfilling them. A first constraint was given by the data on macro- or meso-scale connectivity, which is often reported as a region-to-region connection matrix, yielding a measure proportional to the total number of synapses forming a projection between pairs of brain regions 1 , 14 , 16 , 17 . We used for this purpose the recently published mesoscale mouse brain connectome of Harris et al. 3 . This data set splits the mouse neocortex into 86 separate regions (43 per hemisphere) and, when a region is considered as the source of a projection, further splits it into five individual projection classes by layer or pathway (Layer 23IT, Layer 4IT, Layer 5IT, Layer 5PT, and Layer 6CT). IT refers to intratelencephalic projections, targeting the ipsilateral and contralateral cortex and striatum; PT refers to pyramidal tract projections, predominantly targeting subcortical structures, but also ipsilateral cortex; CT refers to corticothalamic projections. From here on, we will leave out this additional distinction for projections from layers 2/3, 4, and 6, where only one class is specified in the data of ref. 3 . While the data set does not include GABAergic projection neurons 18 , it provides the most comprehensive information on connection strengths of individual projection classes to date. We further constrained the spatial structure of each projection within the target region. Along the vertical axis (orthogonal to layer boundaries), this was achieved by assigning a layer profile to each projection, as provided by Harris et al. 3 . Along the horizontal axes, we assumed a generalized topographical mapping between regions, parameterized using a voxelized (resolution 100 μm) version of the data provided by Knox et al. 6 . As a final constraint, we applied rules on the number and identity of brain regions innervated by individual neurons in a given source region. To this end, we analyzed the brain regions innervated by individual in vivo reconstructions of whole-brain axons in a published data set (MouseLight project at Janelia, mouselight.janelia.org 19 ). Based on the analysis, we conceptualized and parameterized a decision tree of long-range axon targeting that reproduced the targeting rules found in the in vivo data. This approach was generalized to other brain regions for which few or no axonal reconstructions are available. Finally, we implemented a stochastic algorithm that connected morphologically detailed neurons in a 3d volume representing the entire mouse neocortex. Synapses were placed onto the dendrites of target neurons according to all the derived constraints by a modified version of a previously used algorithm 15 . Analyzing the results, we found that the constraints we added on top of the region-to-region projection matrices led to a surprisingly complex and non-random micro-structure of neuron-to-neuron connectivity.
We characterized this structure to be an extension of an established principle of hierarchical organization of modular connectivity 20 to the level of individual neurons. Results Neuronal composition and local connectivity We placed around 10 million morphological neuron reconstructions in a 3d space representing the entirety of a mouse neocortex. Neuron densities and excitatory-to-inhibitory ratios at each location were taken from a voxelized brain atlas 21 , which is consistent with version 3 of the brain parcellation of the Allen Brain Atlas 22 , 23 . The composition in terms of morphological neuron types was as in Markram et al. 15 . Reconstructed morphologies were placed in the volume according to densities for individual, morphologically defined subtypes, and correctly oriented with respect to layer boundaries. For simplicity, we made a strict distinction between local and long-range connectivity, defining local connectivity to comprise any connection where source and target neuron were in the same brain region according to the parcellation of Harris et al. 3 , and derived it using previously published methods 24 . All other connections were considered long-range and were derived using the methods described below. Constraining the anatomical strengths of projections For long-range connectivity, we handled each combination of a projection class (Layer 23, Layer 4, Layer 5IT, Layer 5PT, and Layer 6), a source region, and a target region as a conceptually separate projection. As a first constraint, we determined the average volumetric density of synapses in each projection using published data 3 , 25 , via a programmatic interface provided by the authors. Two further steps were required to apply their data: scaling from projection strength to synapse density, and splitting into densities for individual projection classes. The biological data provided a measure proportional to the mean volumetric density of projection axons in the target region. Assuming a uniform mean density of synapses on axons across projections, the volumetric synapse density is simply a scaled version of this. We calculated a scaling factor such that the resulting total synapse density in ipsilateral and contralateral projections matches previously published results 26 . From their measured average synapse density (0.72 μm −3 ), we subtracted the synapses we predicted in local connectivity within a region. While a part of the remaining synapses is formed by projections from the hippocampus and extracortical structures, their total number is unclear but likely comparatively small. For example, the density of synapses in the prominent pathway from VPM into the barrel field 27 , when averaged over the whole cortical depth, is only ~1.5% of the average total density. For now, we left no explicit space for synapses from such projections, due to the difficulty of parameterizing them for all potential sources.
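A minimal sketch of this global scaling step, assuming a uniform synapse-per-axon density and ignoring the per-region volumes that the full model accounts for; the matrix values and the local density below are invented placeholders.

```python
import numpy as np

# Sketch of the scaling from relative projection axon density to synapse
# density: the budget for long-range synapses is the measured total mean
# density (0.72 um^-3) minus the density already explained by local,
# within-region connectivity. All numeric inputs are placeholders.

rng = np.random.default_rng(0)
axon_density = rng.random((86, 86))      # relative projection strengths
local_density = 0.45                      # um^-3, hypothetical model output
measured_total = 0.72                     # um^-3, published mean density

budget = measured_total - local_density   # density left for long-range
scale = budget / axon_density.mean()      # one crude global scaling factor
synapse_density = axon_density * scale    # um^-3 per projection

print(f"mean long-range density: {synapse_density.mean():.3f} um^-3")
```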
We then generated matrices of synapse densities for different projection classes by considering projection strengths derived only from tracer experiments in cre-lines associated with a given projection class. Unfortunately, there were no experiments available for some combinations of cre-line and source region. Instead, we generated individual matrices by first averaging the reported projection strengths of a line associated with a projection class over modules of several contiguous brain regions (see Supplementary Table 1 ), and then using that information to generate scaled versions of the wild-type matrix (see the Methods section). Each combination of source and target module was scaled individually, and we enforced the sum of matrices over projection types to be equal to the wild-type. The result is a prediction of the mean volumetric synapse densities from the bottom of layer 6 to the top of layer 1 for all projections (Fig. 1 ). Fig. 1 Predicted synapse densities in target regions. Modules are labeled: PF: prefrontal, AL: anterolateral, SoM: somatomotor, Vis: visual, Med: medial, Temp: temporal. The exact order of brain regions and their assignment to modules by Harris et al. 3 are also listed in Supplementary Table 1 . White regions indicate no projections placed for that combination of source and target region Full size image Constraining layer profiles So far, we have constrained the density, and consequently the total number, of synapses formed by each individual projection. This reproduces the spatial structure of projections on the macroscale. However, it is likely that there is also spatial structure within a projection, on the mesoscale or microscale. One such structure, acting along the vertical axis, is a distinct targeting of specific layers 28 . To constrain the layer profiles of projections, we once more turned to the data published in Harris et al. 3 . The authors provide extensive data on layer profiles, having measured hundreds of them and then clustered them into six prototype profiles using unsupervised hierarchical clustering with Spearman correlation and average linkage. As they demonstrate that these prototypes occur in significantly different numbers in feedforward versus feedback projections and for the various projection classes and modules, we concluded that the prototypes capture sufficient biological detail. We therefore decided to follow this classification and assign one of the prototype profiles to each projection. Harris et al. 3 already measured the relative frequencies of their prototypical layer profiles for individual projection classes (their Fig. 5o) and for individual source modules, within and across modules (their Fig. 8c, d). They also classified profiles as belonging to feedforward or feedback projections. We combined the constraints by first calculating which layer profiles are overexpressed or underexpressed between pairs of modules, relative to the base profile frequencies for projection classes (see the Methods section). We then classified each projection as feedforward or feedback, based on the hierarchical positions of the participating regions, and cut the assumed frequencies of profiles belonging to the other type in half. Finally, we picked, for each combination of projection class, source, and target region, the layer profile with the highest derived frequency. We chose to pick the single most likely profile for each projection and ignore the others, as mixing several profiles would have diminished their sharp, distinguishable peaks and troughs.
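The selection logic just described can be sketched as follows; this is a toy version with invented frequencies, whereas the real model derives them from the data of Harris et al. 3 .

```python
import numpy as np

# Sketch of the layer-profile assignment: start from base frequencies of
# the six prototype profiles for a projection class, multiply by a
# module-pair over/under-expression factor, halve profiles associated with
# the opposite feedforward/feedback type, and keep the argmax.
# All frequency values are invented placeholders.

base_freq = np.array([0.25, 0.20, 0.15, 0.15, 0.15, 0.10])  # per class
module_factor = np.array([1.3, 0.8, 1.0, 1.1, 0.9, 1.0])    # over/under-expr.
is_feedback_profile = np.array([False, False, True, True, False, True])

def pick_profile(projection_is_feedback):
    freq = base_freq * module_factor
    # Penalize profiles belonging to the other projection type
    mismatch = is_feedback_profile != projection_is_feedback
    freq = np.where(mismatch, freq * 0.5, freq)
    return int(np.argmax(freq))   # index of the single assigned profile

print(pick_profile(projection_is_feedback=False))  # -> 0 with these inputs
```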
The approach resulted in a prediction where each profile is used for between 10 and 20% of the projections (Fig. 2a ). Based on the prediction, we calculated the resulting relative frequencies of layer profiles per module and per projection class and compared them against the data (Supplementary Fig. 1 ). We found that, in spite of the simplifying step of picking only the most likely profile, the trends in the data were well preserved, although the peaks and troughs were more exaggerated in the model. Fig. 2 Predicted layer profiles. a Predictions for all projection classes. The exact order of brain regions and their assignment to modules by Harris et al. 3 are also listed in Supplementary Table 1 . b – f Relative error of the predicted synapse densities in all layers, that is, the difference between the prediction and the mean of the raw biological data, divided by the standard deviation of the biological data. b For projections from L2/3. c From L4. d From L5IT. e From L5PT. f From L6. Dashed black lines indicate the biological variability of density under the assumption that it is Gaussian distributed. We used only projections where more than five raw data points were available to establish the biological variability. g Fraction of projections with a relative error under two standard deviations, for each source layer Full size image We have demonstrated that our simplified predictions recreate the tendencies reported in Harris et al. 3 , but the question remains: how do they compare against the raw biological data? As we moved through two consecutive simplifications—from the raw data to six prototypical profiles and from six profiles to a single profile per projection—how much biological detail was lost? To address this question, we generated raw layer profiles from the voxelized experimental data on projection strengths of individual cre-lines 3 using a programmatic interface provided by the authors 6 , and compared them to our single prediction. We obtained the connection strengths measured in individual tracer experiments in voxels with a resolution of 100 μm 3 and grouped them by cre-lines associated with the projection classes (see Methods). For each experiment, we calculated a profile of the connection strengths in each layer of a region, relative to the mean across all layers. As a representative example, Supplementary Fig. 2 depicts the model for projections from MOs (blue line) and the data from individual experiments (gray lines). We see an overall fair match between the simplified prediction and the data, albeit with some errors. For example, the data for 5PT projections show very shallow profiles in four regions, which the model did not predict. For 5IT projections to visual regions, the data flatten out in layer 1 instead of peaking, although this may be partly artificial, because the data resolution of 100 μm 3 is close to the width of layer 1, leading to unreliable sampling. Overall, we find a substantial degree of variability in the biological data, especially for projections from layer 2/3. For example, the density in layer 4 of VISpor due to projections from layer 2/3 of MOs varies between 0.2 and 2.5 times the mean. As such, we evaluated the overall match of our predictions relative to the biological variability by calculating the deviation from the biological mean in multiples of the biological standard deviation (z-score, Fig. 2b–f ). As a certain number of samples is required to estimate the biological variability, we limited this validation to projections where data from at least five experiments were available. Under the assumption of a Gaussian distribution, data randomly sampled from the biological distribution would follow a standard normal distribution of z-scores (Fig. 2b–f , black dashed lines).
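Concretely, the z-score used for this validation can be computed as follows; the density values are invented, and the paper requires at least five experiments per projection.

```python
import numpy as np

# Sketch of the validation metric: the deviation of the predicted
# per-layer density from the biological mean, in multiples of the
# biological standard deviation (a z-score). Data values are invented.

experiments = np.array([0.9, 1.3, 1.1, 0.8, 1.2])  # relative density, layer 4
prediction = 1.35                                   # model's predicted value

z = (prediction - experiments.mean()) / experiments.std(ddof=1)
print(f"z-score: {z:.2f}")  # |z| < 2 counts as within biological variability
```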
We found that the bulk of our predictions fall within that standard normal distribution, although a significant number have a z-score exceeding four standard deviations, especially for projections from layer 4 (Fig. 2c ). Yet 75% of z-scores fall within two standard deviations (Fig. 2g ). We conclude that the predicted layer profiles fall within the range of biological variability for most projections, but do result in imperfect densities in individual cases. We judge this to be sufficient for a first draft null model of a white matter micro-connectome, but refinement should be attempted in the future, as more data, such as whole-brain axonal reconstructions, become available. Constraining the mapping of projections The previous section constrained projections by imposing a spatial structure along the vertical axis, a layer profile. Yet it is likely that there is also structure along the other two spatial dimensions. That is, neurons around a given point in the source region do not project equally to all points in the target region, but with certain spatial preferences, which we assumed could be expressed by a topographical mapping. To define the mapping, we once more used the voxelized version of the mouse meso-connectome model 6 . As each brain region comprised many voxels in the model, we could use this data to determine whether any given part of a brain region projected more strongly to some part of the target region than to other parts. This would indicate a structured, nonrandom mapping that we would have to recreate to preserve the biologically accurate cortical architecture. We started by projecting 3d representations of the source and target regions into 2d, preserving distances along the cortical surface (as in Harris et al. 3 ). This effectively collapsed the vertical axis, as we had constrained structure along that axis in the previous step. Next, we defined a local barycentric coordinate system in the 2d representation of the source region by picking three points inside the region that maximize the sum of pairwise distances between them, then moving them 25% toward the center. We visualized the result by setting each of the red, green, and blue color channels of an image of the source region to one of the three barycentric coordinates (Fig. 3a , C src ). By extension, we also associated each voxel of the macro-connectome model ( x , y , z ) with a color ( B x , y , z ) by first projecting its center into the 2d plane, then looking up the barycentric coordinate. Next, we considered the strengths of projections from each source voxel and visualized the results by coloring each target pixel according to the product of the 2d-projected projection strength and the color associated with the source voxel: $$I^{\mathrm{raw}} = f \cdot \sum_{x,y,z \in V_{src}} B_{x,y,z} \cdot F\left(p_{x,y,z}\right),$$ (1) where \(p_{x,y,z}\) refers to the voxelized projection strength from the voxel at x , y , z , \(F(p_{x,y,z})\) to its 2d projection, and f to a scaling factor effectively deciding the overall lightness of the resulting image. The result is a two-dimensional image with three color channels \((I_R^{\mathrm{raw}}, I_G^{\mathrm{raw}}, I_B^{\mathrm{raw}})\) . To more clearly reveal the structure of projections, we ignored source voxels associated with a color saturation below 0.5. Fig. 3 Projection mapping in the visual system. a The primary visual area (VISp) and its defined source coordinate system.
The three points defining the barycentric system are indicated as colored triangles. Each coordinate is associated with the indicated red, green, or blue color channel to decide the color of each pixel in the region. b The spatial structure of projections from VISp is indicated by coloring pixels in the surrounding regions according to the color in a of the area they are innervated from. c Center: as in b , but the color of each pixel is normalized such that the sum of the red, green, and blue channels is constant. Periphery: target coordinate systems for the surrounding regions were fit to recreate the color scheme of the center, when colored as in a Full size image The results showed a clear nonrandom structure of targeting in the other regions (e.g., for projections from VISp: Fig. 3b ). To parameterize this structure, we first normalized the color values of each pixel, dividing them by the total projection strength reaching that pixel from src . We set the denominator to a minimum of 25% of the maximum strength from src in the target region, to ensure that weakly innervated parts of tgt would be depicted as such: $$N_{src}^{tgt}[a,b] = \frac{I^{\mathrm{raw}}[a,b]}{\max\left(I_R^{\mathrm{raw}}[a,b] + I_G^{\mathrm{raw}}[a,b] + I_B^{\mathrm{raw}}[a,b],\ \sigma_{tgt}\right)},$$ (2) where I [ a , b ] denotes the pixel of image I at coordinates a , b and $$\sigma_{tgt} = 0.25 \cdot \max_{[a,b] \in tgt}\left(I_R^{\mathrm{raw}}[a,b] + I_G^{\mathrm{raw}}[a,b] + I_B^{\mathrm{raw}}[a,b]\right)$$ (3) This represented a projection as pixels with normalized lightness that faded to black in weakly innervated parts of the target region (Fig. 3c , center, \(N_{src}^{tgt}\) ). Next, we optimized a barycentric coordinate system in the 2d-projected target region to most closely recreate the color scheme observed in \(N_{src}^{tgt}\) (Fig. 3c , periphery, \(M_{src}^{tgt}\) ). We then assume that a neuron at any coordinate in C src is mapped to neurons at the same coordinate in \(M_{src}^{tgt}\) . Thus, the two local coordinate systems, each parameterized by three points, together define the topographical mapping between regions src and tgt . We validated our predicted mapping against established data on the retinotopic mapping in the visual system. This is functional data on the mapping between a brain region and locations in the visual field, rather than anatomical data on the projections between brain regions. Yet we can use it for validation under the assumption that areas corresponding to the same location in the visual field preferentially project to each other. Analyzing the retinotopy, Wang and Burkhalter 29 found certain trends: in adjacent regions, points close to the boundary between them on both sides are mapped together, and a counter-clockwise cycle in one area is mapped to a clockwise cycle in an adjacent one. This change in chirality indicates that the mapping must contain a reflection operation. Juavinett et al. 30 utilize this to identify borders between brain areas from intrinsic signal imaging of retinotopy. When we systematically examined the reflections and rotations in our predicted mapping (Table 1 ), we found identical results. Table 1 Validation of predicted mapping Full size table
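As an illustration of the mechanics behind this mapping, the sketch below computes barycentric coordinates with respect to three anchor points and transfers a point from the source coordinate system to a fitted target system; the anchor coordinates are arbitrary values, not fitted parameters from the model.

```python
import numpy as np

# Sketch of the barycentric lookup underlying the mapping: the barycentric
# coordinates of a 2d point with respect to the three source anchor points
# serve as its (red, green, blue) channels; a neuron maps to the location
# with the same coordinates in the fitted target-region triangle.

src_tri = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # source anchors

def barycentric(p, tri):
    """Barycentric coordinates of point p with respect to triangle tri."""
    a, b, c = tri
    m = np.column_stack([b - a, c - a])
    u, v = np.linalg.solve(m, p - a)
    return np.array([1.0 - u - v, u, v])

def map_point(p, source_tri, target_tri):
    """Map a point: same barycentric coordinates in the target triangle."""
    return barycentric(p, source_tri) @ target_tri

tgt_tri = np.array([[5.0, 5.0], [1.0, 9.0], [9.0, 9.0]])    # fitted target
print(map_point(np.array([2.0, 3.0]), src_tri, tgt_tri))    # -> [5.4, 7.0]
```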
Finally, we quantified to what degree barycentric coordinate systems in the source and target regions can capture the biological trends present in the projection data. As this type of mapping is always continuous and cannot capture nonlinear trends, biological accuracy could be lost. To quantify this, we calculated the difference between the image of the target region colored according to the target coordinate system, \(M_{src}^{tgt}\) , and the normalized image of the target region according to the projection data, \(N_{src}^{tgt}\) . We defined the relative error of a target coordinate system as the sum of absolute differences of the two images, divided by their average and the number of pixels (Fig. 4 ). We found that for over half of the projections the error was below 5%, and the maximum error was 17%. Fig. 4 Validation of predicted mapping. Relative error of the mapping defined by the barycentric coordinate systems in the target area, compared with the data. Values along the main diagonal: for contralateral mapping; all others: ipsilateral mapping. Data are shown where the sum of densities from all projection classes is above 0.025 μm −3 Full size image Constraining projection types Thus far, we have considered constraints on the spatial structure of projections on a global scale (the macro-connectome matrix) and a local scale (the layer profiles and the mapping). The topographical mapping also limited which individual neurons in a target region can be reached by a given neuron in a source region, severely constraining the topology of the potential connectome graphs on a local scale. Yet an important aspect of neocortical connectivity not yet considered is which combinations of regions are innervated by individual source neurons 31 . Even if we know which regions are innervated by a population of neurons in a given region, each individual neuron is likely to innervate only a subset of those regions. We call that subset its projection type, or p-type . It is unclear to what degree this process is pre-determined or stochastic, and if it is stochastic, what mechanisms further shape and constrain the randomness. This is a complex problem, as a region such as SSp-tr innervates 27 other regions, yielding 2 27 = 134,217,728 potential p-types. To tackle this problem, we analyzed the reconstructed axons made available by the MouseLight project at Janelia 19 . These are whole-brain neuron reconstructions of cortical neurons that include their long-range projections. We first classified their neuron types, then placed the axons in the context of the Allen Brain Atlas, and finally evaluated the amount of axonal length projecting into the 43 ipsilateral and 43 contralateral brain regions. Figure 5a shows an example of 61 analyzed axons originating in MOs. The scale of the p-type problem is clear at first glance: only a single combination of innervated regions is repeated in this data set; all others represent unique p-types. Yet a structure is also apparent: while only 11 out of the 61 axons innervate the visual or medial modules, the ones that do tend to innervate more than a single one of their regions. Moreover, it appears that the projection strength (Fig. 5a , first row) is a strong predictor of the probability that any given axon innervates a region (innervation probability), indicating that a projection is strong because many neurons participate in it, not because a few participating neurons have large axonal trees in the target region. Fig. 5 Innervation of brain regions by individual axons. a Projection density according to Harris et al.
3 (top row), ranging from no projection (white) to strong projections (black), and brain regions innervated by 61 reconstructed axons (rows) indicated by gray squares. b Probability of innervating individual brain regions, predicted from the normalized projection strength from MOs, against the observed innervation probability (L2/3: calculated from n = 25 axons, L5: n = 61 axons, L6a: n = 35 axons). c Normalized projection strength against the mean total length of axon branches in individual brain regions ( n as in b ). d Observed interactions between the innervation of individual brain regions, i.e., the increase in innervation probability of one region when the other is known to be innervated. e Increase in innervation probability as in d against the innervation probability of a pair of regions under the assumption of independence. The gray dotted line indicates where the product of independent probability and increase equals one, a value that can logically not be exceeded. All innervations and projection strengths in this figure are for projections from MOs Full size image Next, we analyzed these observations systematically. Only for the source region MOs did we have a sufficient number of reconstructed axons to robustly estimate the innervation probabilities. We found that the innervation probability was proportional to the normalized projection strength, i.e., the amount of axon in the target region, normalized by the volume of the source region. We determined projection class-specific constants of proportionality with a linear fit, resulting in a predicted innervation probability \(P = 0.5 \cdot \sqrt{nps}\) for projections from L2/3 and L6, \(0.33 \cdot \sqrt{nps}\) from L4 and L5PT, and \(0.22 \cdot \sqrt{nps}\) from L5IT. Figure 5b compares the innervation probability predicted this way to the one observed in 25 samples for L2/3 of MOs, 61 samples for L5 of MOs, and 35 samples for its L6 ( p = 3 · 10 −9 , two-tailed Pearson correlation, n = 3·86, i.e., one sample per region × hemisphere × projection class). Conversely, the projection strength was less of a predictor of the axon length in a target region for the individual axons innervating the region (Fig. 5c ). Projection strength being a predictor of innervation probability is in line with the findings of Han et al. 31 . Assuming the principle holds for other brain regions as well, we were able to predict the first-order innervation probabilities for all combinations of source and target region. Next, we analyzed statistical interactions of the innervation probabilities for axons originating in MOs. For pairs of target regions, we evaluated the null hypothesis that their innervations are statistically independent and, if it was rejected ( p < 0.05; see the Methods section), calculated the strength of the statistical interaction as the conditional increase in innervation probability \(\left(\frac{P(s \to t_1 \mid s \to t_2)}{P(s \to t_1)}\right)\) . We found significant interactions for 283 pairs (Fig. 5d ), with some strengths exceeding a 15-fold increase. However, there were several problems preventing us from simply using these observed interactions to constrain connectivity. First, we only had data for axons originating from one of the 43 brain regions, and it is likely that interactions differ between source regions. Second, the data were incomplete, as some targeted regions were not innervated by a single reconstructed axon (Fig. 5d , white patches), and others were based on only one or two axons.
However, there were several problems preventing us from simply using these observed interactions to constrain connectivity. First, we only had data for axons originating from one of the 43 brain regions, and it is likely that interactions differ between source regions. Second, the data were incomplete, as some targeted regions were not innervated by a single reconstructed axon (Fig. 5d, white patches), and others were based on only one or two axons. Third, evaluating 86 × (86 − 1)/2 = 3655 potential interactions based on only 61 data points (i.e., axons) is statistically inherently unstable and likely to dramatically overfit.

A model to generate projection types

Instead, we tried to use the available axon data to develop a conceptual model of how the interactions arise. We first observed that the largest interaction strengths occurred for target regions in the medial and visual modules that are otherwise only weakly innervated. Evaluating this observation systematically, we found that indeed the strength of an interaction was strongly negatively correlated with the product of the first-order innervation probabilities of the pair (Fig. 5e). Second, we observed only conditional increases in innervation probability (values ≥ 1), i.e., innervation of pairs of brain regions is not mutually exclusive. One model explaining both our observations is the following: consider a tree with the brain regions in both hemispheres as the leaves. Let each edge in the tree be associated with a probability that the edge is successfully crossed by an axon; these probabilities can be different in the two directions of the edge. To generate the set of innervated regions for a random axon, start at the leaf representing its source region and then consecutively spread to other nodes further into the tree along its edges, with the probabilities associated with the edges (Fig. 6a). Once it has been decided that an edge is not crossed, it cannot be crossed in future steps. Every leaf reached this way is then considered to be innervated by the axon.
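This generative process lends itself to a compact implementation. A minimal Python sketch, assuming a hypothetical encoding of the tree as a dictionary mapping each node to a list of (neighbor, crossing probability) pairs, with direction-specific probabilities; usage would be, e.g., `innervated = sample_ptype(tree, "MOs")`:

```python
import random

def sample_ptype(tree, source):
    """Sample the set of regions innervated by one axon (a sketch).

    tree: dict node -> list of (neighbor, p_cross) pairs, where p_cross is the
    probability of crossing the edge in the direction node -> neighbor.
    source: the leaf associated with the axon's source region.
    """
    innervated = set()
    stack = [(source, None)]
    while stack:
        node, parent = stack.pop()
        if len(tree[node]) == 1 and node != source:
            innervated.add(node)          # leaves of the tree are brain regions
        for neighbor, p_cross in tree[node]:
            if neighbor == parent:
                continue                  # never spread back toward where we came from
            if random.random() < p_cross:
                stack.append((neighbor, node))  # edge crossed; keep spreading
    return innervated
```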
Fig. 6 A model to generate p-types. a Toy example of a p-type generating model with four regions (A–D). The regions are associated with the leaves of a directed tree (black); edges of the tree are associated with a probability to cross them. Two exemplary axons (orange, blue) spread from region D, either crossing an edge (dashed lines) or not (dashed X-marks). Inset: resulting p-types; black regions are innervated; s indicates the source region. b Examples of innervation of brain regions predicted by the full model for L5IT (left column) and of reconstructed axons (right column). Sampled axons along the y-axis, brain regions along the x-axis. A black pixel indicates that an axon is innervating a region. Top row: axons originating from L5 of MOs; bottom row: from MOp. c Pairwise distances (Hamming distance) between the profiles of brain region innervation. Blue: the data from reconstructed axons (see a); orange: from 10,000 profiles sampled from the tree-based model; green: from 1000 profiles sampled from a naive model taking only the first-order innervation probabilities into account. Left: for axons originating from MOp; right: from MOs. d Increase in innervation probability against the basic innervation probability as in Fig. 5e.

If we set the length of an edge in this model to the negative logarithm of the associated probability, then the first-order probability that a region T is innervated by an axon originating in region S is easily calculated: $$P(S \to T) = 10^{ - L(S,T)},$$ (4) where L(S,T) denotes the length of the shortest path between S and T. Similarly, the conditional increase in the innervation probability of T_2, given that T_1 is innervated, is: $$I(S,T_1,T_2) := \frac{P(S \to T_2|S \to T_1)}{P(S \to T_2)} = \frac{10^{ - L(lca(T_1,T_2),T_2)}}{10^{ - L(S,T_2)}},$$ (5) where lca(T_1,T_2) is the lowest common ancestor of T_1 and T_2. Due to the underlying tree structure, the lowest common ancestor is always an inner node that is closer to, or at the same distance from, T_2; therefore, the strengths of interactions are never smaller than one, indicating an increase of innervation probability, which is in line with our earlier observations.

Fitting the model consisted of two steps: first, we generated the topology of the tree using the normalized connection density of projections, i.e., the amount of signal (axon) in the target region, normalized by the volume of both source and target region. Specifically, we used the Louvain heuristic 32 with successively decreasing values of the gamma parameter to detect successively larger communities in the matrix of normalized connection densities (see Methods). Next, we replaced each edge with two directed edges, one in each direction. Then we optimized the probabilities associated with the edges using the first-order innervation probabilities predicted from the normalized connection strength of projections, as in Fig. 5b. These predictions then served as constraints on the path lengths between leaves. Specifically, we locally optimized the edges in small motifs consisting of two sibling nodes and their parent, based on differences in the distances of the siblings to all leaves (see Methods). As the pair of edges between two nodes can have different associated probabilities, the predicted statistical interactions are not symmetric (see Supplementary Fig. 4), and there can be region-specific differences in the number of regions a region innervates or is innervated by.

We used the fitted model to generate 10,000 profiles of brain region innervation for axons originating from L5 of MOs and MOp. Figure 6b compares a number of randomly picked profiles against the data from reconstructed axons for both regions. As the model was constrained with the predicted first-order innervation probabilities, it manages to recreate the observed high-level trends: strong innervation of the ipsilateral and contralateral prefrontal, anterolateral, and somatomotor modules; weaker, but highly correlated, innervation of the other modules. To test the model further, we calculated the pairwise Hamming distances between innervation profiles from reconstructed axons and from the model (Fig. 6c). We also compared the data against a naive model using only the observed first-order innervation probabilities and assuming no interactions. We found that the naive model resulted in a narrow, symmetrical distribution with a single peak at around 9 (MOp) or 13 (MOs). In contrast, the axon data led to a much wider, asymmetrical, and long-tailed distribution that was much better approximated by the tree-based model. The difference between the distribution resulting from the tree-based model and the axon data was, in fact, not statistically significant (MOp: p = 0.44, n = 9 axons; MOs: p = 0.12, n = 61 axons; Kolmogorov–Smirnov test). Using the tree model, we could predict the strengths of interactions as described in Eq. (5) (Supplementary Fig. 4). When comparing the strength of the interactions against the naive innervation probabilities without interactions, we found in the model the strong negative correlation that was present in the axon data (Figs. 5e, 6d). For the model, we found more data points toward the lower left corner of the plot, which indicates low naive probability and low increase. The lack of such points in the data from axon reconstructions can be explained by the fact that points associated with extremely low probabilities are unlikely to show up in a relatively small sample of reconstructed axons.
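The distance-distribution comparison can be sketched in a few lines of Python; `ks_2samp` stands in for the two-sample Kolmogorov–Smirnov test used above, and the profile matrices are assumed to be boolean arrays of axons by regions:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import ks_2samp

def innervation_distance_test(data_profiles, model_profiles):
    """Compare pairwise Hamming distances of innervation profiles (a sketch).

    Inputs: boolean (n_axons, n_regions) matrices; pdist's 'hamming' metric
    returns the fraction of mismatching regions, so we rescale to counts.
    """
    n_regions = data_profiles.shape[1]
    d_data = pdist(np.asarray(data_profiles, dtype=float), metric="hamming") * n_regions
    d_model = pdist(np.asarray(model_profiles, dtype=float), metric="hamming") * n_regions
    return ks_2samp(d_data, d_model)   # (statistic, p-value)
```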
As a final validation, we compared the model against the results of Han et al. 31, which considered brain region targeting of single axons originating from VISp. We had not taken into account axons from this source region when we formulated or fitted the model, making this a powerful validation of the generalization power of the model (Fig. 7). Comparing the number of visual regions innervated (out of VISli, VISl, VISal, VISpm, VISam, and VISrl) by individual axons originating in layer 2/3 of VISp, we find comparable results (Fig. 7a). Although in the model the mean number of regions innervated is slightly higher (1.84 vs 1.7 (fluorescence-based) or 1.56 (MAPseq)), we find the same roughly binomial distribution, where fractions decrease with an increasing number of innervated regions.

Fig. 7 Validation of the tree model against the results of Han et al. 31. a Top, results of ref. 31 in terms of the number of visual areas innervated by single axons originating in layer 2/3 of VISp. Bottom, corresponding results of the tree model for axons originating in layers 2/3, 4, 5, and 6 (top left to bottom right; n = 10,000 innervation profiles each). b Top, results of Han et al. 31 in terms of common innervation of pairs of visual brain areas by axons originating in layer 2/3 of VISp. Bottom, corresponding results of the tree model, based on n = 10,000 innervation profiles.

We were also able to predict this distribution for axons from other layers using our model. We predict similar shapes of the distribution, with an even higher mean for layer 5 and a lower mean for layer 4 and especially layer 6. Next, we also considered the statistical interactions between the six visual target regions (Fig. 7b). Again, we found overall comparable conditional probabilities, with a comparable structure, although the strong common innervations of regions VISl and VISal, and of VISpm and VISam, were underestimated.

Connectome instantiations and their micro-structure

Finally, we developed a stochastic algorithm to generate instances of a neuron-to-neuron connectome that fulfill all constraints in the long-range projection recipe and used it to connect a model of the entire mouse neocortex (see the Methods section). We considered slender-tufted and untufted pyramidal cells in layer 5 to participate in projection class L5IT, and half of the thick-tufted layer 5 pyramidal cells in L5PT, with the other half participating in L5CT, which is not covered by the present, purely cortical model. Pyramidal cells in other layers all participated in the corresponding projection class. As a result, we obtained connectome instances with 88 billion modeled synapses, each associated with a presynaptic neuron, a postsynaptic neuron, and an exact location on the postsynaptic morphology (Supplementary Fig. 6).
This allowed us to analyze the microstructure emerging from the constraints we added on top of the matrix of connection strengths. While the additional constraints on layer profiles and topographical mapping were arguably on the meso- rather than the microscale, and the p-types governed the targeting of regions rather than individual neurons, together they were likely to affect measurements of the microstructure. For example, an overexpression of reciprocally connected neuron pairs is traditionally a measure of microstructure 33, 34. Topographical mapping between regions A and B can lead to such an overexpression for pairs where one neuron is in A and the other in B. This occurs when a location in A is mapped to a location in B that is in turn mapped back to the same location in A, leading to reciprocal connectivity of neurons in those locations that is higher than expected from the average unidirectional probabilities between the regions. In order for this trend to emerge in an experiment, neurons would have to be sampled over sufficiently large volumes for the mapping to have a significant effect. We evaluated the strength of this effect in an exemplary pair of connected regions, VISa and VISam (Fig. 8). We calculated unidirectional and reciprocal connection probabilities between parts of the regions, where we first defined a subvolume of VISam with increasing radius, then found the center of its projection to VISa according to the mapping, and defined a subvolume with the same radius around that center (Fig. 8a, sampling radius). We found that the connection probabilities decreased with increasing radius, as more and more parts of the regions are considered that are not mapped to each other (Fig. 8b). However, the expected reciprocal connection probability obtained from multiplying the unidirectional probabilities fell off faster than the measured one. Indeed, for all radii over 150 μm, the reciprocal overexpression, i.e., the measured reciprocal probability divided by the expected one, was larger than one in all three connectivity instances, reaching values as high as 2.5 for radii over 500 μm (Fig. 8c).
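A minimal Python sketch of this measurement, assuming boolean connection matrices between the neurons sampled in the two subvolumes (names are illustrative):

```python
import numpy as np

def reciprocal_overexpression(C_ab, C_ba):
    """Measured over expected reciprocal connection probability (a sketch).

    C_ab: boolean (n_A, n_B) matrix of connections from subvolume A to B;
    C_ba: boolean (n_B, n_A) matrix of connections from B to A.
    """
    p_ab = C_ab.mean()                             # unidirectional probability A -> B
    p_ba = C_ba.mean()                             # unidirectional probability B -> A
    p_recip = np.logical_and(C_ab, C_ba.T).mean()  # measured reciprocal probability
    expected = p_ab * p_ba                         # expectation if directions were independent
    return p_recip / expected
```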
Fig. 8 Bidirectional micro-connectivity and modularity. a Connectivity between individual neurons in VISa and VISam was sampled by defining a subvolume with various radii in VISam (sampling radius), then by finding the center of the projection from the subvolume to VISa according to the mapping (dashed arrow), moving it (sampling offset) and defining a subvolume with the same radius around it. b Unidirectional (red) and reciprocal connection probabilities for various sampling radii with zero sampling offset. Gray: expected from unidirectional connectivity; black: model. c Ratio of reciprocal connectivity measured in the model over the expected value. Gray: three instances; black: mean of n = 3 instances. d As b, but for a sampling radius of 150 μm with various sampling offsets. e As c, but for sampling offsets. f Bottom: edge density, i.e., the number of connections over the number of pairs, of the microconnectivity between within-region modules that were defined by clustering the connectivity within the two brain regions (see the Methods section). Top: neuron-to-neuron connectivity between 7 × 7 within-region modules outlined in green. Gray lines indicate boundaries between within-region modules. g Distribution of edge densities in f (top right quadrant) compared with a random control. h Width of the distribution of edge densities (as in g) at half height, model against control, for projections with a density over 0.02 μm−3. Circles: projections originating in the prefrontal module; stars: anterolateral module; left-pointing triangles: medial; downward-pointing triangles: somatomotor; right-pointing: temporal; upward-pointing: visual; dark blue: intra-module projections; light blue: inter-module projections.

In addition, we found that measuring connection probabilities not at the center of the projection of the subvolume, but offset from it (Fig. 8a, sampling offset), led to an overexpression of reciprocally connected pairs. For a sampling radius of 150 μm, we shifted the center of the subvolume in VISa in a random direction by various amounts, finding that this decreased all connection probabilities while simultaneously leading to an increase in the reciprocal overexpression (Fig. 8d, e). Motif counts in neuron triplets are another traditional measure of microstructure 33, 34; their equivalent in long-range connectivity is motif counts in triplets where each neuron is in a different brain region. The p-types dictate that certain pairs of regions tend to be innervated together, which would lead to an overexpression of the corresponding motifs. Using the same method of sampling from subvolumes as above (Supplementary Fig. 5a), we performed such an analysis for three regions that are strongly connected to each other, FRP, MOs, and MOp, confirming the trend. Based on 100,000 triplets in the subvolumes, we found that motifs where a neuron in FRP innervates only a neuron in MOp, or a neuron in MOs innervates only a neuron in FRP, were significantly underexpressed in favor of motifs where they innervate neurons in both other regions (Supplementary Fig. 5b). The constraints on topographical mapping and the p-types are specific implementations of a principle of structured connectivity on various levels: not only between modules and regions, but also between successively smaller subregions, leading to a scale-invariant structure previously identified in human MRI data 20. The topographical mapping generates a structure of subregions, as outlined above, while the p-types generate larger structures of groups of regions that tend to be innervated together. As such, the micro-connectome instances can be thought of as extending this principle—so far demonstrated for voxelized connectivity—further down to the level of individual neurons. Taylor et al. 20 quantified this structure for diffusion imaging voxels by detecting modules in the internal connectivity structure of two contiguous brain regions, and then considering the connectivity between the brain regions in terms of connection strengths between pairs of such within-area modules. They found that the distribution of strengths was much wider than in a random control, indicating that the within-area modules also structure the connectivity between areas. We replicated this experiment on the microstructure, i.e., the predicted neuron-to-neuron connection matrices within and between VISa and VISam (Fig. 8f, g; a minimal sketch follows). Upon grouping individual neurons in the two regions into 93 (VISa) and 179 (VISam) within-area modules and comparing the connectivity between them to a random control preserving individual neuron in- and out-degrees, we found comparable results. Repeating the analysis for all sufficiently strong projections (Fig. 8h), we found the same, predicting that the principle extends down to the level of individual neurons.
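The replication of this analysis reduces to computing edge densities between pairs of within-region modules. A minimal Python sketch, assuming boolean neuron-to-neuron connectivity and precomputed module labels; applying the same function to a degree-preserving random control yields the comparison distribution:

```python
import numpy as np

def module_edge_densities(conn, modules_pre, modules_post):
    """Edge density between within-region modules (a sketch).

    conn: boolean (n_pre, n_post) neuron-to-neuron matrix from region A to B;
    modules_pre / modules_post: integer module labels (0..k-1) per neuron.
    """
    n_pre = modules_pre.max() + 1
    n_post = modules_post.max() + 1
    densities = np.zeros((n_pre, n_post))
    for i in range(n_pre):
        pre_mask = modules_pre == i
        for j in range(n_post):
            post_mask = modules_post == j
            block = conn[np.ix_(pre_mask, post_mask)]
            densities[i, j] = block.mean()   # connections over possible pairs
    return densities
```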
Taken together, we conclude that the constraints and principles we identified lead to a highly nonrandom microstructure of connectivity. While this structure is a prediction that will need to be validated, it demonstrates the utility of generating statistical connectome instances, as they reveal and quantify the interactions between mesoscale and micro-scale connectivity.

Discussion

We have developed a way to generate statistical instances of a whole-neocortex mouse micro-connectome. This approach takes into account the current state of knowledge on region-to-region connectivity strengths, the laminar pattern of projection synapses, the structure of topographical mapping between regions, the logic of regional targeting of individual projection axons as derived from over 100 whole-brain axon reconstructions, and a comprehensive mesoscale model of projections, built from thousands of experiments 3. Combining these data with a morphologically detailed model of neocortex 21 has allowed us to statistically predict connections with sub-cellular resolution, i.e., including the locations of individual synapses on dendritic trees. Our approach is timely, as it leverages and integrates three very recent, publicly available data sets. Furthermore, its flexibility and modularity will allow it to readily use future data sets in place of, and in addition to, the currently used ones. The resulting wiring diagram allows fundamental questions to be addressed, such as the nature and dynamics of clinically relevant brain rhythms, as well as hierarchical interactions in the cortex, which are fundamental for understanding cortical coding and whole-brain regional dynamics. As the available data on this topic remain sparse, our approach was as follows: we considered the formation of the connectivity as a stochastic process selecting one out of a space of possible wiring diagrams, and then sought out biological principles and rules that consecutively restrict this space of biologically viable wiring diagrams. The principles we identified were based not only on the biological data but also on a number of assumptions. The assumptions were necessary to break down the scale of the problem, to interpret the data (data assumptions) and structure it into principles (structuring assumptions), to formulate principles mathematically (modeling assumptions), and to apply them to infer missing data (generalizing assumptions). In order to interpret the resulting micro-connectome and predictions, one needs to first understand these assumptions. While we have made them explicit in this paper, they are also summarized in Table 2 and discussed in the Supplementary Discussion.

Table 2 List of assumptions used in the formulation of the model.

As in any model, there is the implicit assumption of completeness, i.e., that our model captures all pertinent biological principles. We make no claim that this is true. This assumption is formally necessary for us to achieve the following modeling goal: given the assumptions, find the most general model that completely describes the data. In this context, we have drastically improved the strength of the null model of the microstructure of long-range connectivity. Previously, the most general model of the data was the null model implicit in long-range connection matrices—that of unstructured connectivity beyond the region-to-region level—or with at most some layer-targeting rules. We have not only systematically integrated the data on this level but also added constraints that lead to a nonrandom microstructure with testable predictions.
Comparing potential experimental data against our improved model will lead to a better interpretation of the results. For example, we have demonstrated that an increased reciprocal microconnectivity between regions does not necessarily imply a mechanism selectively stabilizing such motifs, but can to some degree be explained by the mechanisms leading to topographical mapping. We have further demonstrated that, in the presence of strong mapping, reciprocity must be evaluated relative to the trends present in the mapping to be correctly understood. Findings violating the naive, unstructured null model but in line with our improved model can be explained by the principles of connectivity we implemented. For data points invalidating the model, for example conflicting triplet motif counts, we can try to pinpoint which assumption they violate and thus provide context. Alternatively, data contradicting the model can simply be a result of biological variability between individuals. At this stage, we positioned the model to represent an average adult mouse, where such false positives are least likely. Further, some constraints—such as the mapping and p-types—remained statistical and consequently captured a large degree of variability between individual instances. For the other constraints—such as average synapse density and layer profiles—we can estimate an upper bound on variability in the future by running our programmatic pipeline to parameterize connectome constraints on outlier data points instead of averaged data. Similarly, other ages or specific strains can be modeled by using different data in the same pipeline. We can already hypothesize about additional principles that might have to be added in the future. In terms of targeting of connectivity, we have implemented many aspects of spatial targeting of brain regions and of locations within a region, and we have demonstrated that this leads to a highly nonrandom microstructure. However, it is possible that similar rules apply for the incoming long-range projections, i.e., which set of brain regions individual neurons are innervated by, with possible interactions between incoming and outgoing projections. In that case, we will be able to extend our definition of p-types to be the concatenation of incoming and outgoing p-types. In terms of the large-scale inter-area connectivity trends, i.e., the macro-connectome, our approach does not make any predictions, but instead explicitly recreates the input data used. While Harris et al. 3 provided sufficient data for five projection classes, it lacked, for example, a GABAergic projection class 18. Additional sources could be used in the future to add such a type. In principle, completely different data sets could be used to define projection strengths. For example, Gămănuţ et al. 2 report a cortical mouse macro-connectome that recreates biological trends, such as a lognormal distribution of projection strengths over several orders of magnitude. They argue that their data captures several projections that are missed by Oh et al. 25 (and consequently also potentially by Harris et al. 3, which is based on similar computational methods). As their data provides potential sub-area resolution (see their Fig. S2), it could be used to also constrain the mapping and consequently serve as the basis of a stochastic micro-connectome predicted with our method, albeit without distinction of projection classes.
The assumption of a continuous, linear mapping between regions appears to solidly recreate the projection data, with only three regions leading to significant error (Fig. 4; MOs, MOp, and SSs). One explanation for the error would be that these regions contain subregions that each send and receive their own continuous projections. Indeed, for the projections from SSp-ll and SSp-ul to SSs (Supplementary Fig. 3b, right), we see several peaks of the green and blue color channels in the data, whereas a single continuous mapping can only generate single peaks. This is not surprising, as MOs, MOp, and SSs are not broken up by body part, unlike SSp, with which they strongly interact. In the future, the projection data could thus be used to further break up these regions, at least for the purpose of analyzing projections. With more advanced analyses and more data, it may even become possible to hypothesize a brain parcellation scheme ab initio based on projection data. Even with the imperfections outlined above, the present model will lead to advances in our understanding of brain function when employed in simulations of whole-neocortex activity. The explicit parameterization of the constraints will allow us to change parameters to assess their impact. For example, it is at this point unclear whether the targeting rules for individual axons (p-types) will have an effect on high-level brain activity. Similarly, we can investigate to what degree the relatively simple topographical mapping in the model is sufficient for the upstream propagation of spatial information from VISp. Steps in that direction can be undertaken both in morphologically detailed models and in point-neuron models using the publicly available model connectome.

Methods

Accessing the mouse connectivity model — Unless noted otherwise, the data from the voxelized mouse connectivity model of the Allen Institute was accessed using the mcmodels python package provided by the authors ( ).

Volumetric synapse densities of projections — We formulated a target mean density of synapses of 0.72 μm−3 in the model, as measured by Schüz and Palm 26. Multiplied with the volume of the isocortex in the Allen mouse brain atlas (123.2 mm3), this yielded a target number of 88.74 billion synapses. From this number, we subtracted 36 billion synapses we predicted in local connectivity within a brain region. This local connectivity was predicted by detecting axo-dendritic appositions in the model and filtering them to fulfill biological constraints, such as bouton density and synapses per connection 24. We then derived a matrix of synapse densities in all projections between pairs of brain regions by scaling the wild-type connection density matrices provided by Harris et al. 3 in the following way: let M_i and M_c be the 43 × 43 matrices of connection densities in ipsilateral and contralateral projections between brain regions, provided by Harris et al. 3. Entries along the main diagonal of M_i, corresponding to connectivity within a region, are set to 0. Furthermore, let V be the vector of region volumes and C_t the matrix of target region coverage in Supplementary Fig. 3d. Then we can calculate the scaling factor σ: $$\sigma \cdot \mathop {\sum}\limits_{a,b} {\left( {M_i[a,b] + M_c[a,b]} \right)} \cdot V[b] \cdot C_t[a,b] = 68.74 \cdot 10^9$$ (6) This factor was then applied to both M_i and M_c to convert them into matrices of the average density of synapses in the target region due to a projection, measured in μm−3.
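A minimal Python sketch of this scaling step, assuming rows index source regions and columns index target regions, with the target count taken from the right-hand side of Eq. (6):

```python
import numpy as np

def scale_projection_densities(M_i, M_c, V, C_t, target_synapses):
    """Scale ipsi-/contralateral density matrices to a target synapse count (Eq. 6).

    M_i, M_c: 43 x 43 connection-density matrices (rows: source, columns: target);
    V: region volumes (um^3); C_t: target-region coverage matrix;
    target_synapses: the long-range synapse budget, e.g. Eq. (6)'s right-hand side.
    """
    M_i = M_i.copy()
    np.fill_diagonal(M_i, 0.0)   # within-region connectivity is modeled locally instead
    total = ((M_i + M_c) * V[np.newaxis, :] * C_t).sum()
    sigma = target_synapses / total
    return sigma * M_i, sigma * M_c
```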
While this left no explicit room for synapses from extracortical sources, we estimate that they contribute comparatively little. For example, the density of thalamic synapses projected from VPM into SSp-bfd 27, when averaged over the whole cortical depth, is only about 1.5% of the average total density (0.72 μm−3) 26.

Projection density matrices for individual projection types — We combined the wild-type projection matrix from Harris et al. 3 with their incomplete information on projections in individual projection classes, to get five individual projection matrices, one for each projection class. As their wild-type experiments affected neurons in all layers and classes of the source region, we assumed that the sum of synapse densities over projection classes is equal to the density for the wild type. Furthermore, based on qualitative observations, we assumed that the region-to-region connection matrices for each projection class are versions of the wild-type matrix in which individual module-to-module submatrices are scaled by individual values. The modules were six groups of contiguous brain regions (prefrontal, anterolateral, somatomotor, visual, medial, and temporal) identified in Harris et al. 3. This assumption means that connectivity trends between modules will be preserved for all projection classes, but more fine-grained trends for regions within a module will simply replicate the overall trends observed in the wild-type matrix for all classes. Based on these assumptions, we derived matrices of synapse densities for individual projection classes with the following algorithm. First, we digitized the available information for individual projection classes from the Harris paper using the following mapping to cre-lines: 2/3: Cux2-IRES-Cre; 4: Scnn1a-Tg3-Cre; 5it: Tlx3-Cre_PL56; 5pt: A93-Tg1-Cre; 6: Ntsr1-Cre_GN220. Then we condensed the information into five 6 × 6 matrices of average projection strengths between modules and normalized the results such that the sum of the five matrices is 1 for each entry. Finally, we generated full-size 43 × 43 matrices for each projection type by scaling module-to-module submatrices of the wild-type matrix by the corresponding entry in the condensed and normalized matrix (Supplementary Fig. 7). To reduce the computational demand of generating connectome instances, we determined a minimal projection strength and removed projections weaker than the cutoff. The cutoff was calculated as 0.0006 μm−3, such that <5% of projection synapses would be lost.
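A minimal Python sketch of the submatrix scaling, assuming a hypothetical length-43 vector assigning each region to one of the six modules:

```python
import numpy as np

def class_density_matrix(M_wt, module_of, scale_6x6):
    """Scale module-to-module submatrices of the wild-type matrix for one class.

    M_wt: 43 x 43 wild-type density matrix; module_of: length-43 integer array
    of module indices (0..5); scale_6x6: the condensed, normalized per-class
    module-to-module weights.
    """
    # broadcast the 6 x 6 module weights to a 43 x 43 region-to-region factor
    S = scale_6x6[np.ix_(module_of, module_of)]
    return M_wt * S
```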
Projection density assumed symmetrical for both hemispheres — As the data in Harris et al. 3 are focused on the right hemisphere, we assumed connectivity to be symmetrical between hemispheres, to be able to model both of them. This led to 5 (projection classes) × 43 (source regions) × 86 (ipsilateral and contralateral target regions) potential projections parameterized in terms of their strength by the data. However, we considered the 5 × 43 ipsilateral projections within the same region to be local connectivity, which we instead derived with our established approach 24. A number of regions also lack layer 4, rendering projections in that projection class void.

Predicting layer profiles — To assign one out of six layer profiles to each projection, we digitized the data on profile frequencies of Harris et al. 3 and combined it according to the process illustrated in Supplementary Fig. 8: first, for a source module we counted the number of intra-module or inter-module projections originating from it in each projection class. The example illustrates inter-module feedforward projections from the prefrontal module (Supplementary Fig. 8, top left). For the presence of a projection, we defined a minimum projection strength, selected such that <5% of the total number of projection synapses are lost to the cutoff. The counts were then used as weights for a weighted average of the vectors of layer profile frequencies associated with each projection class. The result is a vector of expected profile frequencies for intra- or inter-module projections from the source module, if only the layer profile frequencies associated with projection classes are considered (Supplementary Fig. 8, top right). Next, we looked up the observed profile frequencies for the source module in the data of ref. 3 and compared them to the expected ones (Supplementary Fig. 8, bottom left). Dividing the observed by the expected frequencies yielded adjustment factors for each layer profile that expressed which profiles were overexpressed or underexpressed in intra- or inter-module projections from the source module under consideration (Supplementary Fig. 8, bottom middle). We categorized projections as feedforward or feedback, based on the hierarchical positions of brain regions reported in Fig. 8e of ref. 3, and, in accordance with their findings, reduced by 50% the adjustment factors for profiles 1, 3, and 5 when considering feedback projections, and for profiles 2, 4, and 6 when considering feedforward projections. Finally, we multiplied the vector of adjustment factors with the vectors of profile frequencies for individual projection classes to get adjusted profile frequencies (Supplementary Fig. 8, bottom right). The method yielded unique profile frequencies for each combination of source module, projection class and intra- or inter-module projection. To reduce the vectors of adjusted frequencies to a single profile, we simply picked the profile with the highest adjusted frequency (Supplementary Fig. 8, bottom right).
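A minimal Python sketch of this adjustment procedure, assuming the digitized frequencies are given as arrays (names and shapes are illustrative):

```python
import numpy as np

def assign_layer_profiles(class_counts, class_profile_freqs, observed_freqs, is_feedback):
    """Pick a layer profile per projection class for one source module (a sketch).

    class_counts: per-class counts of projections from the module;
    class_profile_freqs: (n_classes, 6) profile frequencies per projection class;
    observed_freqs: length-6 observed profile frequencies for the module;
    is_feedback: whether the considered projections are categorized as feedback.
    """
    weights = class_counts / class_counts.sum()
    expected = weights @ class_profile_freqs      # expected profile frequencies
    adjust = observed_freqs / expected            # over-/underexpression factors
    # profiles 1, 3, 5 are reduced for feedback; 2, 4, 6 for feedforward (0-based indices)
    reduced = np.array([0, 2, 4]) if is_feedback else np.array([1, 3, 5])
    adjust[reduced] *= 0.5
    adjusted = class_profile_freqs * adjust       # adjusted frequencies per class
    return np.argmax(adjusted, axis=1) + 1        # highest-frequency profile (1..6)
```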
Topographical mapping of projections — The topographical mapping of projections was defined by barycentric coordinate systems in the source and target regions and the assumption that a point in one region is mapped to the corresponding point in the other. The local coordinate systems were derived using the methods described in the Results section, implemented in custom python code available at: . However, due to the potentially large extent in the target region of the axons of a single projection, the biological mapping is point-to-area rather than point-to-point. Therefore, we additionally predicted for each projection the width of the targeted area. A point-to-area mapping would result in an \(N_{src}^{tgt}\) with lower saturation values, i.e., when depicted as in Fig. 3, in an image with slightly washed-out colors. Indeed, we found low saturation values in \(N_{src}^{tgt}\) for most projections, and consequently the optimal solution for the target coordinate system \(M_{src}^{tgt}\) would place all three defining points outside the target region. However, we assumed that low saturation values were rather a result of a large extent of projection axons leading to a weak mapping. We therefore added another objective to the optimization procedure for \(M_{src}^{tgt}\): minimizing the fraction of the source region that is mapped to points outside the target region. To compensate, we defined points in the source region to be mapped to 2d Gaussian kernels at their target location instead of a single point. The width of the Gaussian was optimized such that a convolution of \(M_{src}^{tgt}\) with the same Gaussian resulted in the same distribution of saturation values as \(N_{src}^{tgt}\).

Analyzing whole-brain axons — We acquired 183 neuron reconstructions from the Janelia MouseLight data portal 19 by querying for reconstructions where the soma location is within the neocortex. We first manually annotated the apical dendrite using Neurolucida (MBF Bioscience, Williston, VT, USA), given that it was not available in the original data. Based on this, we classified each neuron as a pyramidal cell or interneuron. Then, we performed a spatial analysis of the axon projection of each neuron by mapping the terminal points of the axon, as well as the soma location, into the Allen CCFv3 atlas coordinate system 25. This yielded a complete list of brain regions containing axon terminal branches, as well as the brain region and layer containing the soma. Together with the information previously extracted from the annotated apical dendrite (e.g., shape, layer, number of branches), this spatial information was used to classify the m-type and projection type (p-type) of each neuron.

Testing statistical independence of region innervation — Let N be the number of analyzed axons (here: 61 for innervation from L5 of MOs). Let n_a and n_b be the number of them that innervate regions a and b, respectively. Then, under the assumption of statistical independence, the number of axons innervating both a and b is distributed according to the hypergeometric distribution with parameters N, n_a, n_b. We tested where the observed number of dual innervations fell along the cumulative distribution, and rejected the null hypothesis of independence if it was within the first or last 2.5% (two-tailed test).

Constructing the p-type generating tree morphology — The Louvain algorithm takes a weighted adjacency matrix as input and clusters the nodes into communities, trying to maximize the weights within a community and minimize the weights across communities. An additional parameter is γ, which defines the granularity of the result: the smaller the value, the fewer communities result, until a value of zero yields a single community. We began by setting γ to a value of 6.0, such that every brain region resulted in its own community. Correspondingly, we began constructing the tree topology by associating every brain region with its own leaf node. We then continuously lowered the value of γ, such that regions and communities began to merge into larger communities. We considered a pair of communities to be merged when, through lowering γ, a new community appeared that contained more than half of the regions of each of the original communities. In that case, we placed a new node in the graph representing the new community and connected it with the two nodes representing the original communities. We continued lowering γ until it reached zero, at which point everything merged into a single community and the root of the tree was placed.
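A minimal Python sketch of the topology construction, using the Louvain implementation available in networkx (version 2.8 or later); for simplicity it merges any group of communities at once, whereas the procedure above merges pairs:

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities  # networkx >= 2.8

def build_ptype_tree(G_density, gammas):
    """Tree topology from successively coarser Louvain clusterings (a sketch).

    G_density: graph weighted by normalized connection density between regions;
    gammas: decreasing resolution values, e.g. numpy.linspace(6.0, 0.0, 61).
    """
    tree = nx.Graph()
    tree.add_nodes_from(G_density.nodes)          # regions start as leaves
    # frontier: current communities (as frozensets of regions) -> their tree node
    frontier = {frozenset([r]): r for r in G_density.nodes}
    n_inner = 0
    for gamma in gammas:
        communities = louvain_communities(G_density, resolution=gamma, seed=0)
        for comm in map(frozenset, communities):
            # previous communities with more than half of their members in comm
            merged = [c for c in frontier if len(c & comm) > len(c) / 2]
            if len(merged) < 2:
                continue
            parent = ("inner", n_inner)
            n_inner += 1
            tree.add_node(parent)
            for c in merged:
                tree.add_edge(parent, frontier.pop(c))
            frontier[frozenset().union(*merged)] = parent
    return tree   # at gamma == 0 everything merges and the root is placed
```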
We fit the weights of the edges to the predicted innervation probabilities using a recursive algorithm that optimized the local weights in small motifs consisting of two sibling nodes and their parent. It is based on the following observations (Supplementary Fig. 9): let T_1 and T_2 be two sibling nodes and R their parent. In the model, any difference in the innervation probabilities for axons originating in T_1 and T_2 can only be due to differences in the lengths of the edges connecting each of them to their parent. This is because once the parent is reached, the shortest paths to any other region will be identical. Therefore: $$w_{T_1 \to R} - w_{T_2 \to R} \approx |M[T_1,:]| - |M[T_2,:]|$$ (7) $$w_{R \to T_1} - w_{R \to T_2} \approx |M[:,T_1]| - |M[:,T_2]|,$$ (8) where M denotes the matrix of the negative logarithms of the predicted innervation probabilities, M[x,:] a single row of it (i.e., the probabilities of neurons in x to innervate each other region), and M[:,x] a single column of it (i.e., the probabilities of x to be innervated by neurons in each other region). Further, the probability that a neuron in T_1 innervates T_2 is given by the path from T_1 via R to T_2: $$M[T_1,T_2] = w_{T_1 \to R} + w_{R \to T_2}$$ (9) $$M[T_2,T_1] = w_{T_2 \to R} + w_{R \to T_1}$$ (10) We found values for the four edge lengths in the motif by finding the least-squares solution of this system of linear equations. After this, we continued by performing the same step for node R and its sibling, until the root was reached.
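The local optimization in each motif is a small linear least-squares problem. A minimal Python sketch; note that Eqs. (7)–(10) are rank-deficient, so `lstsq` returns the minimum-norm solution:

```python
import numpy as np

def fit_motif_edge_lengths(d_out, d_in, m12, m21):
    """Least-squares edge lengths for a motif of two siblings and their parent.

    d_out / d_in: differences of outgoing / incoming path lengths of the two
    siblings (right-hand sides of Eqs. 7 and 8); m12 = M[T1, T2], m21 = M[T2, T1].
    Unknowns, in order: w(T1->R), w(T2->R), w(R->T1), w(R->T2).
    """
    A = np.array([[1.0, -1.0, 0.0, 0.0],   # Eq. (7)
                  [0.0, 0.0, 1.0, -1.0],   # Eq. (8)
                  [1.0, 0.0, 0.0, 1.0],    # Eq. (9)
                  [0.0, 1.0, 1.0, 0.0]])   # Eq. (10)
    b = np.array([d_out, d_in, m12, m21])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.maximum(w, 0.0)  # lengths are non-negative: crossing probabilities <= 1
```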
Generating connectome instances according to the constraints — As mentioned previously, a long-range projection recipe is created, which describes constraints on the desired connectivity. By doing so, the same recipe can be used to instantiate long-range projections with different circuit models, and to allow for different implementations to create these instantiations. This section describes the implementation used to generate the connectomes published under . The circuit representation and input data required for this implementation are:
- A placement of neuron morphologies in space
- A table describing their morphological types
- A spatial index, allowing the querying of morphology segments in a bounding region
- An atlas describing the different regions and layers that are addressed by the recipe
- A "flat-mapping" from 3d to 2d space
- The recipe itself, which has: populations (defining which regions, subregions and morphological types are part of the various source and target populations); projections (organized by source population; specifying per target population the expected synapse density, layer profile, and the barycentric source and target triangles); and p-types (organized by source population; specifying per target population the first-order innervation probabilities for neurons of the source population, and for pairs of target populations the conditional increase in innervation probabilities)

The basic circuit representation (first three items) was generated by a scaled-up version of a published algorithm 15. The atlas was based on the Allen Common Coordinate Framework 35. For the flat-mapping, we used the Allen Dorsal Flatmap of the mcmodels python package (see above). With this data, the implementation proceeds with the following steps:

Neuron allocation: For each source population, the neurons in those populations are allocated to participate in projections to a number of target populations according to specified fractions and statistical interactions. Where no interaction is specified, the overlap (neurons participating in both projections) is calculated from the fraction participating in one projection multiplied by the other; this default value is scaled up where interactions are specified. The challenge is then to assign neurons to each of the projections such that the desired fractions and overlap sizes are reached. A simplistic greedy algorithm was used to perform this allocation. Each source population group is assigned a sampled set of neurons; the overlap is calculated pairwise and adjusted based on the first-order interactions. When the overlap is too small, it is enforced by randomly sampling neurons from each group and replacing neurons in the other group such that the overlap is achieved. Attempts were made to use a SAT solver to perform exact allocations, but the size of the neuron counts and the constraint counts meant the model could not be solved in the available memory.

Synapse sampling: Sampling happens at the target region level. The target populations in the region are grouped, and the required densities per incoming projection are computed based on the long-range projection recipe. The densities are translated into counts based on the constrained volume created by intersecting the area occupied by the barycentric triangles, reverse-mapped using the "flat-map", with the voxels of the atlas within the region. Finally, all morphological segments of a target population within this volume are found and sampled with replacement, with weights proportional to their length. Synapses are placed at random offsets within these segments. This structure allows for parallelization, as each combination of target and source populations can be run at the same time, subject to computation and memory limits. In practice, finding all segments within the volume demands significant memory, which constrains the implementation. This, in turn, gives rise to a per-target-population processing order: all samples for this population are loaded, and then all the sources referencing this population are calculated sequentially, with the calculations parallelized when possible (the initial sampling, picking the segments within the barycentric triangles, etc.). Further parallelization can be achieved by running many of these processes on different machines, in a batch style.
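A minimal Python sketch of the length-weighted sampling step, assuming the candidate segments within the constrained volume are given by their end points:

```python
import numpy as np

def place_synapses(seg_starts, seg_ends, n_synapses, seed=0):
    """Sample synapse locations on segments, weighted by segment length (a sketch).

    seg_starts, seg_ends: (n_segments, 3) end points of candidate morphology
    segments inside the constrained volume; n_synapses: count derived from the
    target synapse density of the projection.
    """
    rng = np.random.default_rng(seed)
    lengths = np.linalg.norm(seg_ends - seg_starts, axis=1)
    weights = lengths / lengths.sum()          # sampling weight ~ segment length
    picks = rng.choice(len(lengths), size=n_synapses, replace=True, p=weights)
    offsets = rng.random((n_synapses, 1))      # random offset within each segment
    return seg_starts[picks] + offsets * (seg_ends[picks] - seg_starts[picks])
```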
Mapping: Following the allocation and sampling, the two results are brought together in a mapping step: source neurons that are allocated to a projection are matched to the synapses created during the sampling of the same projection. Because both of these data sets work with 3d coordinates, they are projected into the 2d representation, so that the barycentric coordinates, described earlier, can be used to create the desired spatial organization. To that end, since source neurons are less numerous, they are first projected into the flat space, and from there mapped into the barycentric coordinate system of the source region. The same coordinates in the barycentric system of the target region are then mapped back into the flat space and considered the mapped locations of the source neurons in the target region. Synapses in the target region are directly mapped to the flat space. Finally, in parallel, synapses are stochastically assigned to a target neuron, with a weighting based on the distance to their mapped location in the flat space and the specified width of the mapping. To speed up this process, the source locations are put in a k-dimensional tree, and only the 100 closest source locations are queried per potential target synapse.

Output: The final step is to output the circuit in a format that can be used for simulation. For this, the SONATA file format was chosen: . In addition, for structural analysis, we output for each target region a connectivity matrix of all incoming connections in the scipy.sparse.csc_matrix format.

Reporting summary — Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability — The recipe constraining the long-range connectivity—underlying Figs. 1–4 and S1–S3—and stochastic instances fulfilling the constraints—underlying Figs. 6–8, S5 and S6—can be downloaded from the Mouse whole-neocortex connectome model portal ( ). The reconstructions of individual axons—underlying Fig. 5—are available at the MouseLight project at Janelia, mouselight.janelia.org.

Code availability — The model was constructed using Python 2.7 with custom code available at .
Researchers at EPFL's Blue Brain Project, a Swiss brain research initiative, have combined two high-profile, large-scale datasets to produce something completely new—a first draft model of the rules guiding neuron-to-neuron connectivity of a whole mouse neocortex. They generated statistical instances of the micro-connectome of 10 million neurons, a model spanning five orders of magnitude and containing 88 billion synaptic connections: a basis for the world's largest-scale simulations of detailed neural circuits.

Identifying the connections across all neurons in every region of the neocortex

The structure of synaptic connections between neurons shapes their activity and function. Measuring a comprehensive snapshot of this so-called connectome has so far only been accomplished within tiny volumes, smaller than the head of a pin. For larger volumes, the long-range connectivity, formed by bundles of extremely thin but long fibers, has only been studied for small numbers of individual neurons, which is far from a complete picture. Alternatively, it has been studied at the macro-scale, a 'zoomed-out' view of average features that does not provide single-cell resolution. In a paper published in Nature Communications, the Blue Brain researchers have shown that the trick lies in combining these two views. By integrating data from two recent datasets—the Allen Mouse Brain Connectivity Atlas and Janelia MouseLight—the researchers identified some of the key rules that dictate which individual neurons can form connections over large distances within the neocortex. This was possible because the two datasets complemented each other in terms of coverage of the entire neocortex and the cellular resolution provided.

Emergence of a surprisingly complex structure at single-cell resolution

Building on their previous work in modelling local brain circuits, the researchers were then able to parameterize these principles of neocortical connectivity and generate statistical connectome instances compatible with them. When they studied the resulting structure, they found something fascinating: at cellular resolution, a surprisingly complex structure that had so far only been seen between neighboring neurons now also tied together neurons in different regions and at opposite ends of the brain. This is comparable to a rule of self-similarity that has previously been found in the human brain (in MRI data) and predicts that it extends all the way down to the level of individual neurons. "This made me re-think how I think about these long-range connections," reveals lead researcher Michael Reimann. "They have been depicted as these blunt cables, connecting or synchronizing whole brain regions. But maybe there is more to them, more specific targeting of individual neurons. And this is what we learned from just a few, relatively coarse-grained principles. I expect that with improved methods we will find more in the future."

Openly accessible connectome can serve as a powerful null model to compare experimental findings

"We have completed such a first-draft connectome of mouse neocortex by using an improved version of our previously published circuit building pipeline (Markram et al., 2015)," explains Michael Reimann.
"It has been improved to place neurons in brain-atlas defined 3d spaces instead of hexagonal prisms, taking into account the geometry and cellular composition of individual brain regions. The composition was based on data from the open source Blue Brain Cell Atlas. Further constraints were derived from other openly accessible datasets. Additional constraints that are so far unknown are likely to limit long-range connectivity even more. To start a process of iterative refinement, we made the model and data available to the public. The parameterized constraints on projection strength, mapping, layer profiles and individual axon targeting (i.e. the projection recipe), as well as stochastic instantiations of whole-neocortex micro-connectomes can be found under https://portal.bluebrain.epfl.ch/resources/models/mouse-projections". This openly accessible connectome can serve as a powerful null model to compare experimental findings to and as a substrate for whole-brain simulations of detailed neural networks. Sparse connection matrices of several instances of the predicted null model of neocortical long-range connectivity have also been publicly available as this result actively demonstrates the power of making datasets available to the public. Further advancing the case for Simulation The simulation (in-silico) method allowed the scientists to target volumes several orders of magnitude smaller, than would be possible with experimental methods, right down to the innervation of individual neurons with sub-cellular resolution. Going forward, this will allow the simulation of the electrical activity of individual neurons, entire regions or of the entire neocortex. "This paper builds upon Blue Brain's earlier work on evaluating morphological constraints on connectivity, "Morphological Diversity Strongly Constrains Synaptic Connectivity and Plasticity," (Cerebral Cortex, 2017) and "Reconstruction and Simulation of Neocortical Microcircuitry' (Cell 2015) explains Blue Brain Founder and Director Prof. Henry Markram. "The findings enable us continue our simulation experiments at an exponentially increasing rate meaning, we can now build biologically accurate brain models of bigger and bigger brain regions and at a higher and higher resolution thereby further advancing the case for simulation."
10.1038/s41467-019-11630-x
Physics
Study observes spin-orbit-parity coupled superconductivity in thin 2M-WS2
Enze Zhang et al, Spin–orbit–parity coupled superconductivity in atomically thin 2M-WS2, Nature Physics (2022). DOI: 10.1038/s41567-022-01812-8 Ying-Ming Xie et al, Spin-Orbit-Parity-Coupled Superconductivity in Topological Monolayer WTe2, Physical Review Letters (2020). DOI: 10.1103/PhysRevLett.125.107001 Journal information: Physical Review Letters , Nature Physics
https://dx.doi.org/10.1038/s41567-022-01812-8
https://phys.org/news/2022-12-spin-orbit-parity-coupled-superconductivity-thin-2m-ws2.html
Abstract

The investigation of two-dimensional atomically thin superconductors—especially those hosting topological states—attracts growing interest in condensed-matter physics. Here we report the observation of a spin–orbit–parity coupled superconducting state in centrosymmetric atomically thin 2M-WS2, a material that has been predicted to exhibit topological band inversions. Our magnetotransport measurements show that the in-plane upper critical field not only exceeds the Pauli paramagnetic limit but also exhibits a strongly anisotropic two-fold symmetry in response to the in-plane magnetic field direction. Furthermore, tunnelling spectroscopy measurements conducted under high in-plane magnetic fields reveal that the superconducting gap possesses an anisotropic magnetic response along different in-plane magnetic field directions, and that it persists much above the Pauli limit. Self-consistent mean-field calculations show that this unusual behaviour originates from the strong spin–orbit–parity coupling arising from the topological band inversion in 2M-WS2, which effectively pins the spin of states near the topological band crossing and gives rise to an anisotropic renormalization of the effect of external Zeeman fields. Our results identify unconventional superconductivity in atomically thin 2M-WS2, which serves as a promising platform for exploring the interplay between superconductivity, topology and strong spin–orbit–parity coupling.

Main

Two-dimensional (2D) crystalline superconductors serve as wonderful platforms 1, 2 for the search for intriguing quantum phenomena, such as the quantum metallic ground state 3, 4, non-reciprocal charge transport 5, 6, 7 and a large in-plane upper critical field \(B_{{\mathrm{C2}}}^{||}\) (refs. 8, 9, 10, 11, 12). In non-centrosymmetric superconductors, the spin–orbit coupling (SOC) lifts the spin degeneracies of the electronic bands, which enhances \(B_{{\mathrm{C2}}}^{||}\) and gives rise to Zeeman-protected superconductivity 2, 8, 10, 11, 12, 13. One particular example is the Ising superconductivity in liquid-gated MoS2 (molybdenum disulfide) (refs. 8, 10), 2D NbSe2 (niobium diselenide) (refs. 11, 12) and monolayer TaS2 (tantalum disulfide) (ref. 13). Beyond non-centrosymmetric superconductors, the study of Ising-protected superconductivity has recently been extended to centrosymmetric superconductors, such as stanene 9 and PdTe2 (palladium ditelluride) (ref. 14) thin films, where the SOC induces spin–orbit locking near the Γ point 15 and generates an enhanced \(B_{{\mathrm{C2}}}^{||}\). In general, exploring and understanding the microscopic origin of novel superconducting states that are resilient to large magnetic fields is of great interest to both fundamental and applied physics. When combining superconductivity and topology, topological superconducting states with Majorana fermions can emerge, which are a central component for fault-tolerant quantum computing 16, 17, 18. Moreover, the further presence of inversion symmetry can enrich the topological structure of a system and enable the manifestation of topological crystalline superconductors 19, 20, 21. Recently, a theory proposed that 2D centrosymmetric superconductors with a topological band inversion 22, such as 1T′-WTe2 (refs. 23, 24, 25, 26, 27), exhibit a distinct type of superconductivity, termed spin–orbit–parity coupled superconductivity 28. As depicted in Fig. 1a,
near the topological band inversion, where bands with opposite parities invert, a topological gap opens. In this scenario, the conventional SOC terms that involve only spin and momentum are forbidden by inversion symmetry, but the spin, momentum and parities of the electronic states are allowed to couple together near the topological band inversion, referred to as spin–orbit–parity coupling (SOPC). This SOPC is predicted to produce novel superconductivity near the topological band crossing, with both a largely enhanced \(B_{{\mathrm{C2}}}^{||}\) and an anisotropic spin susceptibility with respect to in-plane magnetic field directions 28. Experimentally, the emergent van der Waals superconductor 2M-WS2 (2M phase tungsten disulfide) (ref. 29) is believed to be a promising candidate for spin–orbit–parity coupled superconductivity. Monolayer 2M-WS2 shares an identical structure with 1T′-WTe2, but it possesses a stacking mode distinct from other transition metal dichalcogenides 30. Its bulk material exhibits a high superconducting transition temperature T_C of 8.8 K (ref. 30) and hosts many intriguing phenomena, including evidence of anisotropic Majorana bound states 31 and topological surface states 32. Furthermore, theoretical calculations predict that 2M-WS2 holds topological edge states with band inversion in the atomically thin limit 33, 34, making it an attractive platform to explore exotic superconducting states.

Fig. 1: Crystal structure and characterizations of 2M-WS2. a, Schematic plot of two bands of opposite parity getting inverted at Γ, with colour indicating different orbitals (represented by dark blue and red, respectively). The spectrum after projection is depicted to show such a topological band inversion, which can give rise to edge states. The SOPC superconductivity appears when Cooper pairs are formed with the states near the topological band crossing (such as near the Fermi level E_F), where SOPC is strong and crucial. b, Top and side views of the crystal structure of 2M-WS2, where the a axis (purple dashed line), b axis (pink dashed line), c axis (light blue dashed line) and c* axis (dark blue dashed line, oriented perpendicular to the {001} planes) are marked. Tungsten atoms are shifted from their octahedral sites due to the strong intermetallic bonding, forming the visible zigzag metal–metal chains along the a axis. c, Density functional theory calculated d states for the tungsten atoms and p states for the sulfur atoms, projected onto the monolayer (left) and bilayer (right) electronic bands of 2M-WS2, where a clear band inversion between W and S bands can be observed around the Γ point. d, Optical images of few-layer flakes of 2M-WS2 cleaved on a SiO2/Si substrate. The number of layers (L) is labelled in the left image, and the a axis of each crystal is marked by cyan dashed lines in both the left and right images. Scale bars, 4 μm. e, TEM bright-field image taken from a section of an exfoliated 2M-WS2 ribbon-like flake, with the inset being the selected-area electron diffraction pattern. It shows that the flake long axis is along the <100> direction (a axis, as marked by the cyan dashed line). Scale bar, 500 nm. f, Experimental annular dark-field scanning transmission electron microscopy image taken from the 2M-WS2 flake viewed along the c* axis. The inset shows the simulated image. Scale bar, 0.5 nm.
Here, through magnetotransport and tunnelling spectroscopy measurements conducted under high magnetic fields and low temperatures, we demonstrate spin–orbit–parity coupled superconductivity in centrosymmetric few-layer 2M-WS 2 . Figure 1b shows the monoclinic structure of 2M-WS 2 (space group C 2/ m ), in which inversion symmetry is preserved 30 . Owing to strong intermetallic bonding, the tungsten atoms are shifted from their regular octahedral sites in the sulfur octahedra, forming the zigzag metal–metal chains along the a axis. Monolayer 2M-WS 2 has an identical structure to 1T′-WTe 2 . However, rather than being related by the glide mirror operation of the 1T′ structure, the layers of bulk 2M-WS 2 stack along the c -axis direction with neighbouring layers offset through a translation operation (Supplementary Fig. 1 ). As a result, there always exists a global inversion centre in atomically thin 2M-WS 2 , as marked in Supplementary Fig. 1c,d . Figure 1c shows the d orbitals of the tungsten atoms and the p orbitals of the sulfur atoms projected onto the monolayer and bilayer electronic bands of 2M-WS 2 (see Supplementary Fig. 4 for thicker layers), where the bands are doubly degenerate owing to the combination of inversion symmetry and time-reversal symmetry. Notably, the tungsten d -orbital-dominated bands and the sulfur p -orbital-dominated bands are inverted near the Fermi energy. This feature is essential for the appearance of SOPC superconductivity 28 . Figure 1d shows optical images of exfoliated atomically thin 2M-WS 2 on SiO 2 (silicon dioxide)/Si substrates. It is worth noting that most of the exfoliated samples have a long, ribbon-like shape, with the long axis parallel to the crystal a axis (the direction of the tungsten–tungsten chains). Figure 1e is a transmission electron microscopy (TEM) bright-field image taken from a 2M-WS 2 nanoribbon; the corresponding selected-area electron diffraction pattern, displayed in the inset, demonstrates the single-crystalline nature of the sample and confirms that the long-axis direction of the ribbon is indeed along [100], that is, the a axis. The high crystal quality of the nanoribbon is further confirmed by atomic-resolution imaging using scanning TEM, as shown in Fig. 1f . To probe the nature of the superconductivity in atomically thin 2M-WS 2 , four-terminal contacts were fabricated ( Methods ). Figure 2a shows the temperature-dependent normalized resistance R/R N of a 2M-WS 2 device (device 01, thickness approximately 4 nm; Supplementary Table 1 ) with an out-of-plane magnetic field B ⊥ varying from 0 T to 9 T. R N is the normal-state resistance just above the superconducting transition. At B ⊥ = 0 T, the device becomes superconducting at T C = 7.62 K, where T C is defined as the temperature corresponding to 50% R N . Compared with the R/R N – T behaviour measured under an in-plane magnetic field B || (Fig. 2b , γ = 0°, where γ is the angle between the in-plane magnetic field and the positive direction of the x axis), a much stronger suppression of the superconductivity is observed when B ⊥ is applied. This substantial magnetic anisotropy of the superconductivity, which is notably larger than that of the bulk crystal 30 , is also evident in the temperature-dependent critical fields shown in Fig. 2c .
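For reference, the 50% R N criterion for T C used throughout this work takes only a few lines to implement; the resistance trace in the sketch below is synthetic and merely stands in for a measured R ( T ) curve:

```python
import numpy as np

# Sketch of the T_C criterion used in the text: T_C is the temperature at
# which R(T) falls to 50% of the normal-state resistance R_N just above the
# transition. The logistic curve below is mock data, not a measurement.
T = np.linspace(2.0, 12.0, 500)                    # K
R = 50.0 / (1.0 + np.exp(-(T - 7.62) / 0.15))      # ohm, mock transition near 7.62 K

R_N = R[T > 10.0].mean()                           # normal-state resistance
T_C = np.interp(0.5 * R_N, R, T)                   # R(T) is monotonic here
print(f"R_N = {R_N:.1f} ohm, T_C (50% R_N) = {T_C:.2f} K")
```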
For out-of-plane magnetic fields, \(B_{{\mathrm{C2}}}^ \bot\) , defined as the magnetic field corresponding to 0.5 R N , can be described by the linearized Ginzburg–Landau (GL) expression 35 , 36 , \(B_{{\mathrm{C2}}}^ \bot = {\varPhi}_0/2\uppi \xi \left( 0 \right)^2\left( {1 - T/T_{\mathrm{C}}} \right)\) , where Φ 0 is the magnetic flux quantum and ξ (0) is the zero-temperature GL in-plane coherence length. From the fitting, we obtain ξ (0) = 8.6 nm. Under in-plane magnetic fields, \(B_{{\mathrm{C2}}}^{||}\) follows the 2D GL model 36 , \(B_{{\mathrm{C2}}}^{||} = {\varPhi}_0\sqrt {12} /2\uppi \xi \left( 0 \right)d_{{\mathrm{SC}}}\sqrt {1 - T/T_{\mathrm{C}}}\) , where d SC is the superconducting thickness. The fitting in Fig. 2c gives d SC = 3.8 nm. Moreover, the extrapolated zero-temperature \(B_{{\mathrm{C2}}}^{||}\) reaches approximately 35 T, far beyond the Pauli paramagnetic limit of this device for a Bardeen–Cooper–Schrieffer (BCS) superconductor ( B P = 1.86 T C = 14.17 T). Figure 2d shows the normalized magnetoresistance R/R N for different magnetic field directions as θ changes from 0° to 90° at 6.8 K; here the magnetic field rotates in the x – z plane (Fig. 2b , inset) and θ is the angle between the magnetic field and the positive direction of the x axis. The extracted angular dependence of B C2 is shown in Fig. 2e , where a sharp cusp is observed at θ ≈ 0°. The 2D Tinkham formula 35 (red solid line) and the 3D anisotropic GL model 8 (green solid line) can both fit the data at | θ | > 1°, but only the former describes the sharp cusp at | θ | ≤ 1°, indicating 2D superconductivity. Figure 2f shows the current–voltage ( I – V ) relation of device 02 (thickness approximately 4 nm) at various temperatures. The I – V characteristics follow a power-law dependence V ∝ I α (where α is the power-law exponent), which agrees well with the Berezinskii–Kosterlitz–Thouless (BKT) transition model for a 2D superconductor 37 , 38 . At α = 3, a BKT transition temperature T BKT = 5 K is obtained (Fig. 2f , inset). Together, these results demonstrate that the superconductivity here is 2D in nature. Fig. 2: Two-dimensional superconductivity in few-layer 2M-WS 2 . a , b , Temperature dependence of the normalized resistance of a 2M-WS 2 device (device 01, thickness approximately 4 nm) measured under various out-of-plane ( a ) and in-plane ( b , along the a axis of the 2M-WS 2 crystal) magnetic fields. Under zero magnetic field, the device becomes superconducting at T C = 7.6 K, where T C is defined as the temperature corresponding to 50% R N . Inset in b : schematic configuration of the angular-dependent magnetoresistance measurement. θ denotes the angle between the magnetic field and the positive direction of the x axis (the magnetic field rotates in the x – z plane, and the x axis is also the a axis of the 2M-WS 2 crystal). γ is defined as the angle between the in-plane magnetic field and the positive direction of the x axis. c , Temperature-dependent critical magnetic field B C2 of the device for the magnetic field along the out-of-plane ( \(B_{{\mathrm{C2}}}^ \bot\) , θ = 90°) and in-plane directions ( \(B_{{\mathrm{C2}}}^{||}\) , θ = 0°, γ = 0°). The violet dashed line is the linear fit to \(B_{{\mathrm{C2}}}^ \bot = {\varPhi}_0/2\uppi \xi \left( 0 \right)^2\left( {1 - T/T_{\mathrm{C}}} \right)\) .
The pink dashed line is the theoretical fit to \(B_{{\mathrm{C2}}}^\parallel = {\varPhi}_0\sqrt {12} /2\uppi \xi \left( 0 \right)d_{{\mathrm{SC}}}\sqrt {1 - T/T_{\mathrm{C}}}\) . d , Normalized magnetoresistance of the device with the magnetic field direction rotating from the in-plane to the out-of-plane direction ( θ varies from 0° to 90°, T = 7.2 K). e , The extracted angular dependence of B C2 fitted by both the 2D Tinkham model \(\left( {B_{{\mathrm{C2}}}\left( \theta \right)\cos \theta /B_{{\mathrm{C2}}}^\parallel } \right)^2 + \left| {B_{{\mathrm{C2}}}\left( \theta \right)\sin \theta /B_{{\mathrm{C2}}}^ \bot } \right| = 1\) (red) and the 3D anisotropic GL model \(\left( {B_{{\mathrm{C2}}}\left( \theta \right)\cos \theta /B_{{\mathrm{C2}}}^\parallel } \right)^2 + \left( {B_{{\mathrm{C2}}}\left( \theta \right)\sin \theta /B_{{\mathrm{C2}}}^ \bot } \right)^2 = 1\) (green). Inset: a magnified view of the region around θ = 0°. f , Current–voltage relation of a 2D 2M-WS 2 device (device 02, thickness approximately 4 nm) at various temperatures plotted on a logarithmic scale. The solid black line corresponds to V ∝ I 3 . Inset: power-law exponent α (extracted by fitting the data in f to the power law V ∝ I α ) as a function of temperature, from which a BKT transition temperature T BKT = 5 K is obtained. We then study the superconducting characteristics of 2M-WS 2 under various B || directions. Figure 3a shows the normalized magnetoresistance R/R N of device 03 (thickness approximately 4 nm) in the vicinity of the superconducting transition with the magnetic field rotating in the plane at 7.6 K ( γ varies from 0° to 360°). A two-fold oscillation is observed, in which the minima of R/R N appear at γ ≈ 0° and 180° (magnetic field parallel to the a axis) and the maxima at γ ≈ 90° and 270° (magnetic field parallel to the b axis). The oscillation amplitude becomes more prominent at larger magnetic fields, whereas it is strongly suppressed when the temperature is raised above T C (Extended Data Fig. 1 ). This indicates that the two-fold oscillation of R/R N comes from the quenching of superconductivity by the magnetic field rather than from resistance anisotropy. To confirm this, we measure the \(B_{{\mathrm{C2}}}^{||}\) of the device with γ varying from 0° to 360°. As shown in Fig. 3b , \(B_{{\mathrm{C2}}}^{||}\) also exhibits a two-fold symmetry, with maxima at γ ≈ 0° and 180° and minima at γ ≈ 90° and 270° ( T = 7.4 K and 7.6 K; different definitions of \(B_{{\mathrm{C2}}}^{||}\) , including 0.5 R N and 0.75 R N , are used here), consistent with the two-fold oscillation of R/R N . A two-fold oscillation is also observed in the angular-dependent critical current measured under various B || directions (Fig. 3c ), suggesting an anisotropic response of the superconducting gap Δ to B || , which is discussed further below. Fig. 3: Large in-plane upper critical fields and strong anisotropy in 2D 2M-WS 2 . a , Polar plot of the angular-dependent normalized sheet resistance for a 2M-WS 2 device (device 03, thickness approximately 4 nm) under various in-plane magnetic fields measured at T = 7.6 K. b , Polar plot of the angular-dependent in-plane critical magnetic fields \(B_{{\mathrm{C2}}}^{||}\) at different temperatures. Different definitions of \(B_{{\mathrm{C2}}}^{||}\) corresponding to 0.5 R N and 0.75 R N are used here.
c , Polar plot of the critical current ( I C ) for a 2M-WS 2 device (device 04, thickness approximately 7 nm) under various in-plane magnetic fields at T = 2 K. d , Normalized magnetoresistance of a 2M-WS 2 device (device 05, thickness approximately 3 nm) with the magnetic field direction rotating from the crystal a axis to the b axis ( γ takes various values from 0° to 90°, T = 1.6 K). e , f , Normalized magnetoresistance of the device measured at various temperatures with the in-plane magnetic field applied along the a axis ( γ = 0°; e ) and the b axis ( γ = 90°; f ). g , Temperature-dependent critical magnetic field \(B_{{\mathrm{C2}}}^{||}\) of the device with the in-plane magnetic field applied along the a axis ( γ = 0°) and the b axis ( γ = 90°). h , Theoretically calculated \(B_{{\mathrm{C2}}}^{||}/B_{\mathrm{p}} - T/T_{\mathrm{C}}\) curves for bilayer 2M-WS 2 in the presence of an in-plane magnetic field along γ = 0° (red solid line) and γ = 90° (black solid line). i , Calculated angular dependence of \(B_{{\mathrm{C2}}}^{||}/B_{\mathrm{p}}\) at T = 0.5 T C ( T C is set as 7.6 K in the calculation) for the cases with and without SOPC. Next we explore the \(B_{{\mathrm{C2}}}^{||}\) of 2D 2M-WS 2 through high-magnetic-field measurements. Figure 3d shows the normalized magnetoresistance of device 05 (thickness approximately 3 nm) at 1.6 K under B || applied at various in-plane angles γ . As the B || direction changes from the b axis ( γ = 90°) to the a axis ( γ = 0°), the superconducting transition gradually shifts to higher magnetic fields, and at γ = 0° even the highest available magnetic field (31.1 T) cannot fully quench the superconductivity. We further measure the magnetoresistance isotherms of the device at various temperatures at γ = 0° (Fig. 3e ) and γ = 90° (Fig. 3f ). Compared with γ = 0°, a much narrower superconducting transition is observed in the magnetoresistance isotherms at γ = 90°. As a result, \(B_{{\mathrm{C2}}}^{||}\) at γ = 0° is much larger than that at γ = 90° (Fig. 3d ). Moreover, for both in-plane magnetic field directions, \(B_{{\mathrm{C2}}}^{||}\) goes beyond the Pauli limit ( B P = 14.14 T for this device; green dashed line in Fig. 3g ) at low temperatures: \(B_{{\mathrm{C2}}}^{||}\) at γ = 0° (90°) is 30.51 T (20.63 T), which is approximately 2.16 (1.46) times B P . Because inversion symmetry is preserved in atomically thin 2M-WS 2 , mechanisms that rely on inversion-symmetry breaking, such as Ising superconductivity in non-centrosymmetric MoS 2 (refs. 8 , 10 ), NbSe 2 (refs. 11 , 12 ) and TaS 2 (ref. 13 ), electron scattering involving Rashba-type SOC 39 and asymmetric spin–orbit coupling 40 , cannot explain the enhancement of \(B_{{\mathrm{C2}}}^{||}\) observed here. Moreover, \(B_{{\mathrm{C2}}}^{||}\) is strongly anisotropic and spin–orbit locking is expected to be absent in 2M-WS 2 , ruling out the possibility of type-II Ising superconductivity 9 . Note that the superconductivity here is in the clean limit. In Supplementary Text 4 , we show that spin–orbit scattering also cannot account for the enhanced \(B_{{\mathrm{C2}}}^{||}\) . Nevertheless, we find that the observed enhancement of \(B_{{\mathrm{C2}}}^{||}\) and its anisotropy are fully consistent with the theory of SOPC superconductivity 28 .
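As a quick arithmetic check of the numbers quoted above, the Pauli limit and the quoted enhancement factors follow directly from the weak-coupling BCS relation B P [T] = 1.86 T C [K]; the T C value below is inferred from the quoted B P , since it is not stated explicitly for device 05:

```python
# Check of the Pauli-limit numbers quoted for device 05.
# For a weak-coupling BCS superconductor, B_P [T] = 1.86 * T_C [K].
T_C = 7.60                    # K, inferred from the quoted B_P of this device
B_P = 1.86 * T_C
print(f"B_P = {B_P:.2f} T")   # -> 14.14 T, as quoted in the text

for label, B_c2 in [("gamma = 0 deg (a axis)", 30.51),
                    ("gamma = 90 deg (b axis)", 20.63)]:
    print(f"{label}: B_c2 = {B_c2} T = {B_c2 / B_P:.2f} x B_P")
```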
To validate this quantitatively, we constructed a low-energy effective Hamiltonian H N ( k ) ( k labels the momentum) for bilayer 2M-WS 2 (Supplementary Text 7 ), where the model parameters are obtained by fitting to the realistic band structure. Importantly, the SOPC term \(\hat {\mathbf {g}}\) · σ is taken into account in the model, with \(\hat{\mathbf{g}} = \left( {A_yk_y,A_xk_x,A_zk_y} \right)s_x\) dictated by the C 2 h point group symmetry of 2M-WS 2 . Here, the A i are the SOPC coefficients, and the Pauli matrices s and σ operate on the orbital and spin spaces, respectively. To obtain \(B_{{\mathrm{C2}}}^{||}\) , we solved the linearized gap equation (see Supplementary Text 8 for details) $$\frac{1}{U_0} = \frac{1}{2}\int \frac{{\mathrm{d}}{\mathbf{k}}}{(2\uppi)^2} \sum_{i,j} \left| O_{ij}({\mathbf{k}},{\mathbf{B}}) \right|^2 \frac{1 - f\left(E_i({\mathbf{k}},{\mathbf{B}})\right) - f\left(E_j({\mathbf{k}},-{\mathbf{B}})\right)}{E_i({\mathbf{k}},{\mathbf{B}}) + E_j({\mathbf{k}},-{\mathbf{B}})},$$ where U 0 denotes the attractive interaction strength, f is the Fermi–Dirac distribution function and the overlap function is \(O_{ij}({\mathbf{k}},{\mathbf{B}}) = \left\langle u_i({\mathbf{k}},{\mathbf{B}}) | u_j({\mathbf{k}},-{\mathbf{B}}) \right\rangle\) , with \(H_{\mathrm{N}}({\mathbf{k}},{\mathbf{B}})\left| u_i({\mathbf{k}},{\mathbf{B}}) \right\rangle = E_i \left| u_i({\mathbf{k}},{\mathbf{B}}) \right\rangle\) and \(H_{\mathrm{N}}({\mathbf{k}},{\mathbf{B}}) = H_{\mathrm{N}}({\mathbf{k}}) + H_{\mathrm{Z}}\) , where the Zeeman term \(H_{\mathrm{Z}} = \mu_{\mathrm{B}}{\mathbf{B}} \cdot {\boldsymbol{\sigma}}\) captures the paramagnetic effect. Here E i and \(\left| u_i \right\rangle\) denote the eigenenergies and eigenstates of H N ( k , B ), μ B is the Bohr magneton, and i , j are band indices. The calculated \(B_{{\mathrm{C2}}}^{||}\) – T relations at γ = 0° and γ = 90° are shown in Fig. 3h . The corresponding angular dependence of \(B_{{\mathrm{C2}}}^{||}\) is shown in Fig. 3i (blue solid line). Indeed, we find a two-fold anisotropic enhancement of \(B_{{\mathrm{C2}}}^{||}\) , consistent with our experiments. It is worth noting that the two-fold symmetric superconducting states observed here under in-plane magnetic fields respect the crystal symmetry C 2 h and do not indicate nematic superconductivity 41 , in which the superconducting state spontaneously breaks the rotational symmetry of the underlying lattice, as studied in iron-based superconductors 42 , 43 , 44 , doped Bi 2 Se 3 (refs. 45 , 46 , 47 ), superconducting magic-angle graphene 48 and few-layer NbSe 2 (refs. 49 , 50 ). Instead, we point out that this anisotropic enhancement of \(B_{{\mathrm{C2}}}^{||}\) in atomically thin 2M-WS 2 results directly from the strong SOPC, which anisotropically renormalizes the effect of external Zeeman fields near the topological band crossing. We have also calculated the normalized spin susceptibility χ S / χ 0 of bilayer 2M-WS 2 in Extended Data Fig. 2a (see also Supplementary Text 10 for details), where distinct differences between γ = 0° and γ = 90° are obtained. Here χ S is the spin susceptibility of the superconducting state, and χ 0 is the Pauli spin susceptibility of a free-electron gas.
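The qualitative content of this calculation can be conveyed by a self-contained toy model: a single inverted band pair with an SOPC term of the symmetry-allowed form given above, and the spin susceptibility obtained from finite differences of the grand potential. All parameters in the sketch below are illustrative assumptions, not the fitted bilayer model of the paper:

```python
import numpy as np

# Toy illustration of how an SOPC term anisotropically modifies the
# normal-state spin susceptibility. The 4x4 Hamiltonian (orbital x spin)
# and every parameter are illustrative; units are arbitrary and mu_B = 1.
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(kx, ky, B, A):
    Ax, Ay, Az = A
    mass = -0.1 + 2.0 * (kx**2 + ky**2)             # inverted-band mass term
    h = mass * np.kron(sz, s0)                      # opposite-parity orbitals
    g = Ay * ky * sx + Ax * kx * sy + Az * ky * sz  # g(k).sigma
    h = h + np.kron(sx, g)                          # SOPC: odd in k, inversion-even overall
    h = h + np.kron(s0, B[0] * sx + B[1] * sy)      # in-plane Zeeman term
    return h

def grand_potential(B, A, T=0.02, mu=0.2, N=61, kmax=0.6):
    ks = np.linspace(-kmax, kmax, N)
    Om = 0.0
    for kx in ks:
        for ky in ks:
            E = np.linalg.eigvalsh(H(kx, ky, B, A)) - mu
            Om += -T * np.sum(np.logaddexp(0.0, -E / T))   # overflow-safe
    return Om

def chi(direction, A, dB=5e-3):
    # chi = -d^2(Omega)/dB^2 by finite difference (Omega is even in B)
    B = dB * np.asarray(direction, dtype=float)
    return -2.0 * (grand_potential(B, A) - grand_potential((0.0, 0.0), A)) / dB**2

A_on, A_off = (0.5, 0.3, 0.2), (0.0, 0.0, 0.0)
chi_x, chi_y = chi((1, 0), A_on), chi((0, 1), A_on)
chi_0 = chi((1, 0), A_off)   # Pauli-like reference without SOPC (isotropic)
print(f"chi_x/chi_0 = {chi_x / chi_0:.3f}, chi_y/chi_0 = {chi_y / chi_0:.3f}")
print(f"implied B_c2/B_P ~ {np.sqrt(chi_0 / chi_x):.2f} (B || x), "
      f"{np.sqrt(chi_0 / chi_y):.2f} (B || y)")
```

In this toy setting the SOPC typically reduces χ N below the Pauli reference in a direction-dependent way, which, via the estimate \(B_{{\mathrm{C2}}}^{||} \cong B_{\mathrm{p}}\sqrt {\chi _0/\chi _{\mathrm{N}}}\) discussed next, translates into an anisotropic enhancement of the upper critical field.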
It can also be seen that, owing to the presence of SOPC, the normal-state susceptibility χ N is anisotropically reduced to values smaller than χ 0 (the T > T C regime in Extended Data Fig. 2a ). The anisotropic enhancement of \(B_{{\mathrm{C2}}}^{||}\) then becomes comprehensible because, for an SOPC superconductor, \(B_{{\mathrm{C2}}}^{||}\) at low temperature can be estimated using 28 \(B_{{\mathrm{C2}}}^{||} \cong B_{\mathrm{p}}\sqrt {\chi _0/\chi _{\mathrm{N}}}\) . The angular dependence of χ N is further displayed in Extended Data Fig. 2b , which is consistent with the two-fold enhancement of \(B_{{\mathrm{C2}}}^{||}\) in Fig. 3i . To demonstrate that the SOPC is crucial for the anisotropic enhancement of \(B_{{\mathrm{C2}}}^{||}\) , we artificially turned off the SOPC and replotted the angular dependence of χ N and \(B_{{\mathrm{C2}}}^{||}\) . As shown in Extended Data Fig. 2b and Fig. 3i , the two-fold anisotropic reduction of χ N and enhancement of \(B_{{\mathrm{C2}}}^{||}\) indeed disappear. To further understand the superconductivity in 2D 2M-WS 2 , we carried out tunnelling spectroscopy measurements on 2M-WS 2 tunnelling devices under various in-plane magnetic field conditions. A schematic of the device structure is displayed in the left inset of Fig. 4a (right inset, an optical image of the device), where the gold electrode, aluminium oxide (AlO x ) and 2D superconducting 2M-WS 2 form a typical normal metal–insulator–superconductor junction. Figure 4a shows the normalized tunnelling conductance ( G S / G N ) of a few-layer 2M-WS 2 tunnelling device (device 06, thickness approximately 5 nm) under different in-plane magnetic field directions at 2 K ( B || = 9 T; γ varies from 0° to 90°). The G S / G N spectra exhibit two symmetric peaks, which are due to the tunnelling of normal electrons into the electron and hole branches of the quasiparticle spectrum 35 of the superconducting 2M-WS 2 . For the same B || magnitude applied along different in-plane directions ( γ varying from 0° to 90°), the separation between these two symmetric peaks shrinks, indicating that the superconducting gap Δ is suppressed anisotropically for different in-plane magnetic field directions. The corresponding 2D plot of the angular-dependent G S / G N is shown in Fig. 4b ( γ varies from 0° to 360°), where a prominent two-fold modulation is observed. For a quantitative analysis of the tunnelling conductance spectra, we fit the angular-dependent spectra to the Blonder–Tinkham–Klapwijk (BTK) model for a normal metal–insulator–superconductor junction 35 , 51 . The deduced angular-dependent superconducting gap Δ under B || is illustrated in Fig. 4c (purple dots, B || = 8 T; green dots, B || = 9 T). A clear two-fold symmetry of the angular-dependent Δ is observed. The maximum values of the superconducting gap ( Δ 8T-MAX ≈ 1.2 meV; Δ 9T-MAX ≈ 1.15 meV) occur around γ = 0° and 180°, while the minimum values ( Δ 8T-MIN ≈ 0.94 meV; Δ 9T-MIN ≈ 0.85 meV) occur around γ = 90° and 270°. This agrees well with the two-fold symmetry of the angular-dependent \(B_{{\mathrm{C2}}}^{||}\) described above, because the maxima (minima) of Δ correspond to the weakest (strongest) suppression of superconductivity under in-plane magnetic fields in 2D 2M-WS 2 . We note that the anisotropic response of Δ to the in-plane magnetic field here does not imply an anisotropic gap at zero magnetic field in 2D 2M-WS 2 . Additionally, in Extended Data Fig.
3 , we show that the temperature-dependent tunnelling spectra of the device follow BCS theory and give an extrapolated Δ (0) ≈ 1.33 meV at zero magnetic field. Fig. 4: Tunnelling spectroscopy of 2D 2M-WS 2 under in-plane magnetic fields. a , Normalized tunnelling conductance as a function of bias voltage ( V B ) of a 2M-WS 2 tunnelling device (device 06, thickness approximately 6 nm) with the in-plane magnetic field direction changing from 0° to 90°. Left inset: schematic structure of the 2M-WS 2 tunnelling device, where the gold electrode, aluminium oxide (AlO x ) and 2D superconducting 2M-WS 2 form a typical normal metal–insulator–superconductor junction. Right inset: an optical image of a typical 2M-WS 2 tunnelling device. Scale bar, 6 μm. b , Colour plot of the normalized tunnelling conductance of device 06 measured under different in-plane magnetic field directions ( B || = 9 T; γ changes from 0° to 360°, T = 2 K). c , Superconducting gap extracted from the tunnelling spectra as a function of γ under different in-plane magnetic fields (purple dots, B || = 8 T; green dots, B || = 9 T). d , Normalized tunnelling conductance of a 2M-WS 2 tunnelling device (device 07, thickness approximately 3 nm) under various in-plane magnetic fields ( γ = 0°, T = 1.6 K). e , Colour plot of the normalized tunnelling conductance of the device as a function of bias voltage and in-plane magnetic field (the magnetic field is along the a axis of the crystal, γ = 0°, T = 1.6 K). f , Extracted magnetic-field-dependent superconducting gap for the magnetic field applied along the a axis and the b axis of the crystal, respectively. The error bars on the Δ values are obtained from the BTK fitting. Figure 4d shows G S / G N as a function of bias voltage V for a 2D 2M-WS 2 tunnelling device (device 07, thickness approximately 3 nm) with B || up to 32 T ( γ = 0°, T = 1.6 K; for γ = 90°, see Supplementary Fig. 14 ). The corresponding contour plot of the magnetic-field-dependent G S / G N is shown in Fig. 4e . As B || increases, the quasiparticle tunnelling features become less pronounced. The extracted Δ – B || relations along the a axis ( γ = 0°, blue circles) and the b axis ( γ = 90°, pink circles) are illustrated in Fig. 4f , in which a large anisotropy is observed. At γ = 0° and γ = 90°, the superconducting gap persists up to B || = 32 T and 22.5 T, respectively. These values are far beyond the Pauli limit of this device (approximately 13.95 T), in good agreement with the largely enhanced \(B_{{\mathrm{C2}}}^{||}\) of atomically thin 2M-WS 2 described above. Also, as B || increases, Δ decreases continuously to zero at the upper critical field in both the low- and high-temperature regimes (Supplementary Fig. 14 ), indicating a possible second-order superconductor-to-metal phase transition (Supplementary Text 6 ). In Extended Data Fig. 4 , our theoretical calculations show that for atomically thin 2M-WS 2 with spin–orbit–parity coupled superconductivity, the calculated Δ / Δ 0 ( Δ 0 = 1.764 k B T C is the zero-field BCS gap) can indeed persist far beyond the Pauli limit, in agreement with our experimental observations (see also Supplementary Text 11 for the role of the weak orbital effect in SOPC superconductivity). Note that the quasiparticle peaks broaden and the peak separations become less distinct in the relatively high-magnetic-field regime in Fig. 4d .
As a result, the error bars on Δ become larger in the high-magnetic-field regime, reflecting the reduced precision near the critical point. Nevertheless, our tunnelling experiments clearly show that the superconducting gap in atomically thin 2M-WS 2 possesses an anisotropic magnetic response along different in-plane magnetic field directions and persists well above the Pauli limit. In conclusion, our study demonstrates that atomically thin 2M-WS 2 is an unusual centrosymmetric superconductor that exhibits strong SOPC arising from the topological band inversion, leading to a large and anisotropically enhanced \(B_{{\mathrm{C2}}}^{||}\) . Our findings thus uncover a mechanism for generating an anisotropically enhanced \(B_{{\mathrm{C2}}}^{||}\) in centrosymmetric superconductors with topological band inversions. The application of this mechanism to other centrosymmetric superconducting transition metal dichalcogenides remains to be explored. Methods Sample growth High-quality 2M-WS 2 crystals were synthesized by topochemical K + deintercalation from K 0.7 WS 2 (potassium-intercalated tungsten disulfide) crystals. The parent compound K 0.7 WS 2 was prepared by stoichiometric mixing of K, S and K 2 S 2 (dipotassium disulfide, prepared via liquid ammonia) in an argon glove box. The mixed reagent was pressed and sealed in a silica tube evacuated to 10 −5 torr. The tube was heated to 850 °C at a rate of 5 °C min −1 , maintained at this temperature for 3,000 min and then cooled to 550 °C at a rate of 0.1 °C min −1 . The as-synthesized K 0.7 WS 2 crystals (0.1 g) were dispersed and stirred in an acidic K 2 Cr 2 O 7 (potassium dichromate, 0.01 mol l −1 ) aqueous solution for 1 h at room temperature. Finally, the 2M-WS 2 crystals were obtained after washing in distilled water several times and drying in a vacuum oven. Sample characterization Scanning transmission electron microscopy characterization was conducted using a probe-side aberration-corrected FEI Titan G2 80-200 ChemiSTEM microscope, operated at 200 kV with a convergence semi-angle of 21 mrad and collection inner (outer) semi-angles of approximately 48 (196) mrad. Selected-area electron diffraction patterns and TEM bright-field images were obtained using the same microscope operated in TEM mode. For the (scanning) TEM samples, 2D 2M-WS 2 was exfoliated onto SiO 2 (285 nm)/Si substrates and then spin-coated with a poly(methyl methacrylate) polymer layer. After the SiO 2 layer was etched away in KOH solution, the polymer layer carrying the 2D 2M-WS 2 was lifted off and transferred onto a copper TEM grid with a holey carbon support film. Device fabrication The device fabrication process was carried out in a cleanroom. 2M-WS 2 flakes of different thicknesses were obtained through mechanical exfoliation of bulk single crystals onto pre-patterned SiO 2 (285 nm)/Si substrates using polydimethylsiloxane stamps. Multi-terminal electrical contacts were fabricated by a standard electron-beam lithography (EBL) process using a poly(methyl methacrylate)/methyl methacrylate bilayer resist and subsequent metal deposition of Ti/Au (5 nm/80 nm). For the tunnelling devices, an EBL and metal deposition process was performed to fabricate the first pair of electrodes on the exfoliated atomically thin 2M-WS 2 , and a thin layer of Al (0.1–0.5 nm) was then deposited and oxidized in air to form an insulating layer.
Another pair of electrodes were subsequently fabricated to form the normal metal–insulator–superconductor junction structure. Transport measurements Four-terminal temperature-dependent magnetotransport, I – V and differential conductance measurements were carried out in a Physical Property Measurement System (Quantum Design). Typically, the external filter is not involved in the measurement circuit, while twisted pair of wiring is used to reduce the environmental noise. High-magnetic-field transport experiments were performed in water-cooled resistive magnets at the High Magnetic Field Laboratory in Hefei and the National High Magnetic Field Laboratory in Tallahassee. During the measurements, a rotating in-plane magnetic field with a misalignment of <0.1° was applied to the devices using rotating probes and in-plane sample holders (Supplementary Fig. 2 ). A multi-channel lock-in amplifier system (SR830 and SR865) was used for the measurement of alternating current (a.c.) resistance. During the a.c. resistance measurements, the applied current frequency is between 10 and 100 Hz. Current-driven I – V measurements were performed using Agilent 2912 and Keithley 2182A. In differential conductance measurements, a direct current bias current (generated by Agilent 2912) superimposed with a small a.c. bias current was applied to the normal metal–insulator–superconductor junction device through one of the split electrodes. The applied small a.c. bias current is chosen to be as small as possible but also large enough to obtain a decent signal-to-noise ratio, typically ≤500 nA. The a.c. voltage drop was collected via another electrode of the split pair using SR830 and SR865 (see Supplementary Text 1 for detail). Density functional theory calculations The Vienna Ab initio Simulation Package 52 was used to perform the density functional theory calculations 53 . The projector augmented wave method 54 and the Perdew–Burke–Ernzerhof exchange–correlation functional in the generalized gradient approximation 55 , 56 were adopted, and the SOC could be self-consistently included when necessary. For the calculation of bulk 2M-WS 2 , a 10 × 10 × 4 k -mesh was used, while in the case of monolayer and bilayer 2M-WS 2 , the adopted k -mesh was 10 × 10 × 1, and a 15-Å-thick vacuum layer was added along the out-of-plane direction. The crystal structure was obtained from the experimental results 30 . Data availability The data that support the plots within this paper and other findings of this study are available from the corresponding authors upon reasonable request. Source data are provided with this paper.
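As a reference point for the BCS relations invoked above ( Δ 0 = 1.764 k B T C and the temperature dependence of the gap), the weak-coupling gap equation can be solved numerically. The sketch below is textbook material rather than the paper's calculation, and the Debye cutoff is an arbitrary illustrative choice; note that the measured Δ (0) ≈ 1.33 meV is somewhat larger than the weak-coupling value obtained this way.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Solve the weak-coupling BCS gap equation and check Delta(0) = 1.764 k_B T_C.
# Energies are measured in kelvin (units of k_B); omega_D is an arbitrary cutoff.
omega_D = 500.0    # Debye cutoff, K (illustrative assumption)
T_C = 7.6          # K, matching the 2M-WS2 devices

def rhs(Delta, T):
    # Integral side of the gap equation at temperature T
    f = lambda xi: np.tanh(np.sqrt(xi**2 + Delta**2) / (2*T)) / np.sqrt(xi**2 + Delta**2)
    return quad(f, 0.0, omega_D)[0]

coupling = rhs(1e-8, T_C)          # fix 1/(N0*V) so that Delta -> 0 exactly at T_C

def gap(T):
    if rhs(1e-8, T) <= coupling:   # above T_C only Delta = 0 solves the equation
        return 0.0
    return brentq(lambda D: rhs(D, T) - coupling, 1e-8, 10 * T_C)

Delta0 = gap(0.01)                 # effectively T = 0
kB_meV = 8.617e-2                  # meV per kelvin
print(f"Delta(0)/(k_B T_C) = {Delta0 / T_C:.3f}   (BCS: 1.764)")
print(f"Delta(0) = {Delta0 * kB_meV:.2f} meV for T_C = {T_C} K")
```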
In recent years, many physicists and materials scientists have been studying superconductors, materials that can conduct direct-current electricity without energy loss when cooled below a characteristic temperature. These materials could have numerous valuable applications, for instance powering the magnets of imaging machines (e.g., MRI scanners), maglev trains and other technological systems. Researchers at Fudan University, Shanghai Qi Zhi Institute, Hong Kong University of Science and Technology, and other institutes in China have recently uncovered a new mechanism for generating an anisotropically enhanced in-plane upper critical field in atomically thin centrosymmetric superconductors with topological band inversions. Their paper, published in Nature Physics, demonstrated this mechanism in a thin layer of 2M-WS2, a material that has recently attracted much research attention. "In 2020, a paper by our theoretical collaborator Prof. K.T. Law proposed that 2D centrosymmetric superconductors with a topological band inversion, such as 1T′-WTe2, exhibit a distinct type of superconductivity, called spin-orbit-parity coupled (SOPC) superconductivity," Enze Zhang, one of the researchers who carried out the study, told Phys.org. "SOPC is predicted to produce novel superconductivity near the topological band crossing with both a largely enhanced [in-plane upper critical field] and anisotropic spin susceptibility with respect to in-plane magnetic field directions. At that time, we were conducting research on the superconducting properties of atomically thin 2M-WS2, so after talking with Prof. K.T. Law, we felt that the emergent van der Waals superconductor 2M-WS2 would most likely be a promising candidate for spin-orbit-parity coupled superconductivity." The structure of monolayer 2M-WS2 is identical to that of 1T′-WTe2, the material previously investigated by Prof. Law and his team. 2M-WS2, however, has a unique stacking mode, which distinguishes it from other transition metal dichalcogenides. The researchers previously found that in its bulk form, this material exhibits a high superconducting transition temperature TC of 8.8 K. In addition, theoretical calculations suggested that atomically thin layers of 2M-WS2 host topological edge states arising from band inversion. In their experiments, Zhang and his colleagues measured the in-plane upper critical field under high magnetic fields and confirmed that it violates the Pauli limit. They also observed a strongly anisotropic two-fold symmetry in the material in response to the in-plane magnetic field direction. "Tunneling experiments conducted under high in-plane magnetic fields also showed that the superconducting gap in atomically thin 2M-WS2 possesses an anisotropic magnetic response along different in-plane magnetic field directions, and it persists much above the Pauli limit," Zhang explained. "Using self-consistent mean-field calculations, our theoretical collaborators conclude that these unusual behaviors originate from the strong spin-orbit-parity coupling arising from the topological band inversion in 2M-WS2." The researchers' experiments proceeded in several steps. First, the team performed magnetotransport measurements on atomically thin 2M-WS2 and found that its in-plane upper critical field not only lies far beyond the Pauli paramagnetic limit, but also exhibits a strongly anisotropic two-fold symmetry in response to the in-plane magnetic field direction. Subsequently, they used tunneling spectroscopy to collect measurements under high in-plane magnetic fields.
These measurements revealed that the superconducting gap in atomically thin 2M-WS2 possesses an anisotropic magnetic response along different in-plane magnetic field directions, which persists much above the Pauli limit. Finally, the researchers performed a series of self-consistent mean-field calculations to better understand the origin of the unusual behaviors they had observed in their sample. Based on these results, they concluded that the behaviors originate from the strong spin-orbit-parity coupling arising from the topological band inversion in 2M-WS2, which effectively pins the spin of states near the topological band crossing and renormalizes the effect of external Zeeman fields anisotropically. "We uncovered a new mechanism for generating an anisotropically enhanced in-plane upper critical field in atomically thin centrosymmetric superconductors with topological band inversions, highlighting 2D 2M-WS2 as a wonderful platform for the study of exotic superconducting phenomena such as higher-order topological superconductivity and further device applications," Zhang said. "The novel properties found here are highly nontrivial as they directly reflect a strong SOPC inherited from the topological band inversion in the normal state of 2M-WS2, which had been ignored for many years in previous studies of centrosymmetric superconductors." In recent years, more research teams worldwide have been exploring the properties and mechanisms of centrosymmetric superconducting transition metal dichalcogenides (TMDs), such as monolayer superconducting 1T′-MoS2 and 1T′-WTe2, owing to the characteristic coexistence of topological band structure and superconductivity in these materials. The recent paper by Zhang and his colleagues could pave the way towards the exploration of largely enhanced and strongly anisotropic in-plane upper critical fields, which could further improve the current understanding of these materials' exotic physics. "We now plan to explore the unusual superconducting properties (such as the in-plane upper critical field and tunneling spectroscopy behavior at high magnetic field) of more atomically thin centrosymmetric superconductors with topological band inversions," Zhang added.
10.1038/s41567-022-01812-8
Physics
New chip-scale laser isolator opens new research avenues in photonics
Alexander D. White et al, Integrated passive nonlinear optical isolators, Nature Photonics (2022). DOI: 10.1038/s41566-022-01110-y Journal information: Nature Photonics
https://dx.doi.org/10.1038/s41566-022-01110-y
https://phys.org/news/2022-12-chip-scale-laser-isolator-avenues-photonics.html
Abstract Fibre and bulk optical isolators are widely used to stabilize laser cavities by preventing unwanted feedback. However, their integrated counterparts have been slow to be adopted. Although several strategies for on-chip optical isolation have been realized, these rely on either integration of magneto-optic materials or high-frequency modulation with acousto-optic or electro-optic modulators. Here we demonstrate an integrated approach for passively isolating a continuous-wave laser using the intrinsically non-reciprocal Kerr nonlinearity in ring resonators. Using silicon nitride as a model platform, we achieve single ring isolation of 17–23 dB with 1.8–5.5-dB insertion loss, and a cascaded ring isolation of 35 dB with 5-dB insertion loss. Employing these devices, we demonstrate hybrid integration and isolation with a semiconductor laser chip. Main The effort to integrate high-performance optical systems on-chip has made tremendous progress in recent years. Advances in ultra-low-loss photonic platforms 1 , nonlinear photonics 2 and heterogeneous material integration 1 , 3 have enabled fully integrated turnkey frequency-comb sources 1 , 4 , on-chip lasers with hertz linewidth 5 , terabits-per-second (Tbps) communications on-chip 6 , 7 , on-chip optical amplifiers 8 and much more. Although these systems will continue to improve, a lack of integrated optical isolation limits their performance. Optical isolators allow for the transmission of light in one direction while preventing transmission in the other. This non-reciprocal behaviour is critical in optical systems in order to stabilize lasers and reduce noise by preventing unwanted back-reflection 9 . In traditional fibre and bulk optical systems, non-reciprocal transmission is achieved by the use of Faraday-effect-induced non-reciprocal polarization rotation under an external magnetic field 9 , 10 , 11 . This approach can be replicated on-chip by integrating magneto-optic materials into waveguides 10 . However, the scalability of the approach remains a substantial challenge due to the required custom material fabrication and lack of complementary metal–oxide–semiconductor (CMOS) compatibility. Furthermore, magneto-optic materials require a very strong magnet for their operation due to their weak effects in the visible to near-infrared (NIR) wavelength range 12 , 13 and are therefore difficult to operate in an integrated platform. More recently, there has been remarkable progress in integrating magnet-free isolators using an active drive to break reciprocity. This drive has taken the form of a synthetic magnet 14 , 15 , stimulated Brillouin scattering 16 , 17 and spatio-temporal modulation 18 , 19 , 20 . However, the requirement for an external drive increases the system complexity, often requires additional fabrication, and consumes power. Additionally, high-power radiofrequency drives contribute large amounts of electromagnetic background that can interfere with the sensitive electronics and photodetection in photonic integrated circuits. This poses inevitable challenges to the scalability and adoption of such devices. Therefore, to maximize the scalability and integration into current photonic integrated circuits, an ideal isolator would be fully passive and magnet-free. 
Optical nonlinearity is a promising path towards breaking reciprocity 21 , 22 , 23 , 24 , 25 , and is inherently present in most widely utilized photonic platforms, such as silicon nitride 2 , 26 , silicon 22 , gallium phosphide 27 , tantala 28 , silicon carbide 29 , 30 and lithium niobate 31 , 32 . Unfortunately, due to dynamic reciprocity, many proposals for non-reciprocal transmission using optical nonlinearities cannot function as isolators 33 . However, by carefully choosing the mode of operation, isolation using optical nonlinearity is possible and has been demonstrated with discrete components 24 . In this Article we demonstrate integrated continuous-wave isolators using the Kerr effect present in thin-film silicon-nitride ring resonators. The Kerr effect breaks the degeneracy between the clockwise and counterclockwise modes of the ring and allows for non-reciprocal transmission. These devices are fully passive and require no input besides the laser that is being isolated. As such, the only power overhead is the small insertion loss incurred in coupling through the ring resonator. Additionally, many integrated optical systems that would benefit from isolators already contain high-quality silicon-nitride or comparable components and could easily integrate this type of isolator with CMOS-compatible fabrication 1 . By varying the coupling of the ring resonators we can trade off insertion loss against isolation. As two examples, at 90 mW of optical power we demonstrate a device with a peak isolation of 23 dB and 4.6-dB insertion loss, and a device with 17-dB isolation and 1.3-dB insertion loss. As we are using an integrated photonics platform, we can reproducibly fabricate and cascade multiple isolators on the same chip, allowing us to demonstrate two cascaded isolators with an overall isolation ratio of 35 dB. Finally, we butt-couple a semiconductor laser-diode chip to the silicon-nitride isolators and demonstrate optical isolation in a system on a chip. Theory of operation The Kerr effect is the change in the refractive index of a material due to its third-order susceptibility, χ (3) . In the presence of two electric fields, the nonlinear polarization corresponding to this term is given by \({P}^{(3)}{(t)}={\epsilon }_{0}{\chi }^{(3)}{({E}_{1}{\rm{e}}^{-i{\omega }_{1}t}+{E}_{2}{\rm{e}}^{-i{\omega }_{2}t}+{\rm{c.c.}})}^{3}\) . Expanding this polynomial and keeping only the terms at the same frequencies, we find that \({P}^{(3)}({\omega }_{1})={3}{\epsilon }_{0}{\chi }^{(3)}(| {E}_{1}{| }^{2}+{2}| {E}_{2}{| }^{2}){E}_{1}{\rm{e}}^{-i{\omega }_{1}t}\) and \({P}^{(3)}({\omega }_{2})={3}{\epsilon }_{0}{\chi }^{(3)}({2}| {E}_{1}{| }^{2}+| {E}_{2}{| }^{2}){E}_{2}{\rm{e}}^{-i{\omega }_{2}t}\) . Thus, there is an effective increase in the refractive index proportional to the optical intensity. Critically, the index change differs by a factor of two depending on the source of the optical power. The field that is degenerate with the mode under consideration contributes a nonlinear polarization proportional to 3 ϵ 0 χ (3) | E | 2 , known as self-phase modulation (SPM). The field that is non-degenerate contributes a nonlinear polarization proportional to 6 ϵ 0 χ (3) | E | 2 , known as cross-phase modulation (XPM). This difference provides an intrinsic non-reciprocity. If a strong pump beam is sent through a waveguide and a weak probe is sent through in the other direction, the probe accrues an additional Kerr phase shift that is twice that of the pump. We can apply the same principle to construct an isolator.
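The factor of two between SPM and XPM can be verified mechanically by expanding the cubic polarization. In the symbolic check below, the symbols a and b stand in for the phase factors e −iω1t and e −iω2t :

```python
import sympy as sp

# Symbolic check of the SPM/XPM factor of two. The symbols a and b stand in
# for exp(-i*w1*t) and exp(-i*w2*t); products of powers of a (and of b)
# combine automatically, so the coefficient of a^1 with no net b gives the
# polarization oscillating at w1.
E1, E2 = sp.symbols('E1 E2')            # complex field amplitudes
a, b = sp.symbols('a b', nonzero=True)  # phase-factor placeholders

E = E1*a + E2*b + sp.conjugate(E1)/a + sp.conjugate(E2)/b   # field plus c.c.
P3 = sp.expand(E**3)

coeff_w1 = P3.coeff(a, 1).coeff(b, 0)
print(coeff_w1)
# prints 3*E1**2*conjugate(E1) + 6*E1*E2*conjugate(E2)
# = 3*(|E1|**2 + 2*|E2|**2)*E1: the cross term (XPM) is twice the self term (SPM)
```

The coefficient factors to 3 E 1 (| E 1 | 2 + 2| E 2 | 2 ), reproducing the expression above.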
Consider the set-up shown in Fig. 1a . A strong pump (red) is sent through a ring resonator with degenerate clockwise and counterclockwise resonances. This pump heats the ring, leading to a reciprocal thermo-optic increase in refractive index and corresponding decrease in resonance frequency. Additionally, the high power in the ring leads to an SPM of the clockwise mode and an XPM of the counterclockwise mode. This shifts the resonance of the counterclockwise mode twice as far as the clockwise pump mode. The now split resonances allow for a near-unity transmission in the pump direction but substantially reduce the transmission at the same frequency in the reverse direction (blue). This reduction is represented by the Lorentzian lineshape of the cavity. Following ref. 24 , we can calculate the expected isolation by combining this transmission reduction with the SPM resonance shift: $${I}={\frac{1}{1+{(2Q\frac{{{\Delta }}\omega }{{\omega }_{0}})}^{2}},}$$ (1) where the shift Δ ω is given by $${{\Delta }}{\omega }={\omega }_{0}{\frac{{n}_{2}}{n}\frac{Q\lambda }{2\uppi {V}_{{{{\rm{mode}}}}}}}{\eta }{P}_{{{{\rm{in}}}}},$$ (2) where Q is the loaded quality factor of the ring, n 2 is the nonlinear refractive index, n is the linear refractive index, V mode is the mode volume of the ring, and η is the coupling efficiency of the pump to the ring. We can characterize the power required for isolation by considering the input power required to isolate by 3 dB. We will refer to this power level as the isolation threshold, P thresh , given by $${P}_{{{{\rm{thresh}}}}}={\frac{n}{{n}_{2}}}{\frac{\uppi {V}_{{{{\rm{mode}}}}}}{{Q}^{2}\lambda \eta }}.$$ (3) Fig. 1: Theory of operation. a , Schematic showing the operation principle of the integrated nonlinear optical isolators. Plot shows transmission (T) vs. frequency (ω). b , Illustration of the isolator coupled directly to the laser that drives it, in the presence of the laser only (red), unwanted backward transmission only (blue) and the laser with backward transmission. When the laser is on, the backward transmission is no longer resonant and the laser is isolated. c , Image of a silicon-nitride device. Scale bar, 100 μm. d , Theoretical (dashed line) and experimental (blue data points) backwards transmission with varied input pump power and at maximum pump detuning, illustrating the Lorentzian transmission shape. Full size image This isolation is achieved solely by the intrinsic non-reciprocity of the ring, so no additional power is required for operation. Critically, the operation is unaffected by dynamic reciprocity. When a backwards-propagating signal is at the same frequency as the pump, dynamic reciprocity does not apply, and when a signal is at a different frequency from the pump, there is reciprocal but near-zero transmission (Supplementary Section 1 ). Additionally, it is important to note that this isolation ratio holds true not only for backwards-propagating signals with powers that are small compared to the pump, but even for backwards signals commensurate to and stronger than the pump. When there is already pump power circulating in the ring, the backwards wave is not resonant with the cavity. Thus, the required input power to negate the mode splitting is in fact many times higher than the power of the pump 34 , 35 . Although the bandwidth of the isolation is limited by the resonance splitting, it is possible to add an additional linear filter that indefinitely extends the isolation bandwidth (Supplementary Section 2 ). 
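The magnitudes implied by equations (1)–(3) can be checked with representative numbers. In the sketch below, Q , η and the effective mode area are illustrative assumptions (only the 200-μm ring diameter and the 1,550-nm operating wavelength are taken from the text, and n 2 and n are commonly cited silicon-nitride values); with these choices the threshold and the isolation at 90 mW land close to the ranges reported below for the fabricated devices.

```python
import math

# Numerical sketch of equations (1)-(3). Q, eta and the mode area are
# illustrative assumptions, not the paper's fitted values; n2 and n are
# commonly cited silicon-nitride values; the 200-um ring diameter and
# 1,550-nm wavelength are taken from the text.
n2 = 2.4e-19                          # m^2/W, Kerr index of Si3N4
n = 2.0                               # linear refractive index
lam = 1550e-9                         # m
Q = 1.0e6                             # loaded quality factor (assumed)
eta = 0.9                             # pump-to-ring coupling efficiency (assumed)
A_mode = 0.8e-12                      # m^2, effective mode area (assumed)
V_mode = A_mode * math.pi * 200e-6    # mode area x circumference, 200-um ring

P_thresh = (n / n2) * math.pi * V_mode / (Q**2 * lam * eta)     # Eq. (3)
print(f"3-dB isolation threshold P_thresh = {P_thresh * 1e3:.1f} mW")

def isolation_dB(P_in):
    dw_over_w0 = (n2 / n) * Q * lam / (2 * math.pi * V_mode) * eta * P_in  # Eq. (2)
    I = 1.0 / (1.0 + (2 * Q * dw_over_w0) ** 2)                            # Eq. (1)
    return -10.0 * math.log10(I)           # backward transmission, in dB of isolation

for P in (0.010, 0.050, 0.090):            # W
    print(f"P_in = {P * 1e3:3.0f} mW -> isolation = {isolation_dB(P):4.1f} dB")
```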
Without this additional filter, the 3-dB bandwidth of the isolation can be given by $${\omega }_{3{{{\rm{dB}}}}}={2}{{\Delta }}{\omega }-{\sqrt{2}}{\sqrt{{{\Delta }}{\omega }^{2}-\frac{{\omega }_{0}^{2}}{4{Q}^{2}}}},$$ (4) which is on the order of the linewidth of the cavity and grows as the isolation increases. As this type of isolator requires continuous pump power (either with a continuous-wave pump or a pump that is pulsed at the ring free-spectral range), but no additional driving or modulation, it is ideal for directly isolating the output of a laser (Fig. 1b ). The laser itself acts as the sole driver of isolation, and the device incurs no power consumption, losing power only to the small insertion loss from traversing the ring. There is no need for strong magnetic fields, active optical modulation or high-power radiofrequency drives, and device operation is not limited to a single photonic platform or wavelength range. Device integration and measurement As the isolation depends on Q 2 , the mode volume, the nonlinear refractive index and the input power, it is critical to implement devices with a material that can support high-quality microresonators, has an appreciable χ (3) and can handle very high optical intensities without incurring loss. Here we demonstrate integrated isolators using silicon nitride as a model system, as it has become one of the most prominent platforms for integrated nonlinear photonics 1 . We use thin-film silicon nitride (<400 nm), as it has the potential for CMOS integration compatibility given the lower film stress present 36 , 37 . In addition, the thin-silicon-nitride process allows for geometric dispersion properties that easily lead to a strong normal dispersion 37 , allowing us to suppress spurious optical parametric oscillation (Supplementary Section 3 ). To maximize Q 2 / V mode while keeping the isolator compact, we use a ring diameter of 200 μm, as shown in Fig. 1c . To measure the isolation of these devices, we use the pump–probe set-up shown in Fig. 2a . As the pump and probe are sourced from the same laser, they have the same optical frequency. For the first set of measurements, shown in Fig. 2b,c , the pump and probe wavelengths are scanned across the ring resonance. In Fig. 2d the pump is kept fixed. We send a high-power pump through the ring and simultaneously modulate and send a low-power probe through the ring in the opposite direction. We then scan the pump and probe across the resonance and read the reverse transmission using a lock-in amplifier. During the scan, the pump thermally pulls the ring until the ring unlocks at the peak of its resonance 38 . As the laser approaches the frequency of the ring, more optical power couples to the resonance. As a consequence of a small linear material absorption, this heats the ring and detunes the resonance further away from the laser. This continues until the laser frequency matches that of the resonance and is coupled maximally to the ring. Once the laser detunes past this point, the power in the ring begins to decrease, allowing the ring to cool and collapse back to the original resonance position. By monitoring the probe transmission at the resonance peak, we can obtain a direct measurement of the isolation (Supplementary Section 4 ). Additionally, by varying the pump power, we can measure the power-dependent isolation (Fig. 2b,c ). As the pump power is increased, the peak isolation is redshifted and scales as a Lorentzian. We find excellent agreement between our measurements (Fig. 
2b ) and the expected transmission from a simple model of a thermally pulled ring with a Lorentzian power-dependent isolation (Fig. 2b , inset). Fig. 2: Isolation measurement. a , Schematic of the measurement set-up for characterizing the nonlinear optical isolators. EDFA, erbium-doped fibre amplifier; EOM, electro-optic modulator. PC, polarization controller; LO, 90-kHz electronic oscillator. b , Pump-power-dependent measurement of backwards transmission. Inset: theoretical pump power dependence. The line colours in the inset correspond to the colours in the main panel. c , Corresponding theoretical (dashed line) and experimental (blue data points) device isolation. Data-point colours correspond to the colours used in b . d , Pulsed backward transmission measurement with increasing pump power (0 mW, 40 mW, 80 mW). The inset shows a magnification of the section of the plot in the dashed box. e , Theoretical (dashed line) and experimental (blue data points) frequency dependence of the backwards transmission. Here, the probe is split into two sidebands with an EOM, and this sideband separation is swept with a frequency synthesizer. As expected, the backwards frequency response is shifted in proportion to the pump power. Full size image We also validate the operation of the isolator with a static pump frequency. The ring remains locked to the laser, and we can directly measure the backwards transmission of the device by sending optical pulses at the same frequency as the pump (Fig. 2d ). Here, the resonator locking is initiated by tuning the laser frequency, but this can also be achieved by thermally tuning the ring (Supplementary Section 5 ). As the maximum transmission and isolation occur at the peak of the resonance, where the resonance can no longer follow the laser, locking can be disturbed by changes in ambient temperature. This can be alleviated through thermal stabilization of the ring 39 . However, the large thermal pulling allows ample overhead in laser detuning: for this device under 90-mW input power, a 1-GHz detuning from the unlocking point corresponds to only a 0.3-dB reduction in isolation and a 0.15-dB increase in insertion loss. Because of this, we are able to operate close to the maximum transmission without any temperature control of the photonic isolator chip and remain stably locked over the duration of the experiment. Finally, we can measure the frequency response of the isolation by modulating the probe using an electro-optic modulator (EOM). This generates sidebands that we can sweep across the resonance. As only the redshifted sideband will be resonant with the redshifted backwards resonance, we can sweep the sideband frequency to map out the frequency response (Fig. 2e ). We find, as expected from the XPM modulation, that the backward transmission has a Lorentzian profile detuned from the pump by the SPM resonance shift, Δ ω . To maximize the performance of these isolators, it is important to consider both insertion loss and isolation. In this system, these are determined by the coupling rates to the two waveguides, κ 1 and κ 2 , and the scattering rate of the ring into the environment, γ . Ideally, all power is transmitted into the ring, and all of the power in the ring is transmitted to the output port. This is made possible by increasing the ring coupling rates, but this has the effect of reducing the Q of the resonance and thus lowering the isolation. 
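This trade-off can be made concrete with the on-resonance coupled-mode expressions quoted in the next paragraph. In the sketch below all rates are normalized to the intrinsic loss rate γ , and a Q ∝ 1/( κ 1 + κ 2 + γ ) convention is assumed, so the threshold column is meaningful only in relative terms:

```python
import math

# Insertion-loss / threshold trade-off using the on-resonance expressions
# given in the following paragraph: eta = 4*k1*(k2+g)/k_tot^2 (power coupled
# into the ring) and T_fwd = 4*k1*k2/k_tot^2 (through transmission), with
# k_tot = k1 + k2 + g. With Q ~ 1/k_tot, Eq. (3) gives P_thresh ~ k_tot^2/eta
# up to device constants, so only relative values are meaningful here.
g = 1.0   # intrinsic loss rate (all rates in units of g)

def metrics(k1, k2):
    k_tot = k1 + k2 + g
    eta = 4 * k1 * (k2 + g) / k_tot**2
    il_dB = -10 * math.log10(4 * k1 * k2 / k_tot**2)   # insertion loss
    p_rel = k_tot**2 / eta                             # relative 3-dB threshold
    return il_dB, p_rel

for k1, k2 in [(1, 1), (2, 2), (5, 5), (2, 1), (5, 2)]:
    il, p = metrics(k1, k2)
    print(f"k1 = {k1}, k2 = {k2}: insertion loss = {il:4.1f} dB, "
          f"relative threshold = {p:6.1f}")
```

Stronger coupling lowers the insertion loss but raises the threshold power, while weaker coupling does the opposite, consistent with the measured coupling sweep in Fig. 3.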
To maximize the isolation, the power must be transferred to the ring efficiently, but the coupling rates should be minimized to preserve the Q . This, of course, increases insertion loss. More precisely, the ring sees a power of \({\frac{4{\kappa }_{1}({\kappa }_{2}+\gamma )}{{({\kappa }_{1}+{\kappa }_{2}+\gamma )}^{2}}}\) , the Q is impacted by a factor of \({\frac{1}{{\kappa }_{1}+{\kappa }_{2}+\gamma }}\) , and the insertion loss is given by \({\frac{4{\kappa }_{1}{\kappa }_{2}}{{({\kappa }_{1}+{\kappa }_{2}+\gamma )}^{2}}}\) . To interrogate this trade-off experimentally, we fabricated an array of 16 air-clad silicon-nitride isolators with varying coupling strengths and coupling asymmetries (Fig. 3b,c ). We find these devices have an intrinsic quality factor of ~5 million (Supplementary Section 7 ). As expected, devices with weaker and more asymmetric coupling show higher isolation, but also higher insertion loss. We highlight the performance of two of the devices—a device with 1.8-dB insertion loss and an isolation threshold of 12.9 mW, and a device with 5.5-dB insertion loss and an isolation threshold of 6.5 mW (Fig. 3d ). These devices show peak isolations at 90 mW of 16.6 dB and 23.4 dB, respectively. Fig. 3: Performance optimization. a , Schematic of the isolator ring illustrating the key parameters: κ 1 , κ 2 and γ —the input coupling rate, output coupling rate and intrinsic loss rate, respectively. b , Heatmaps showing the measured insertion loss and peak isolation for varied coupling rates κ 1 and κ 2 . The colour bar limits are set by the min and max of each plot (white: 1.0-dB insertion loss, 3.3-dB peak isolation; dark blue: 10.1-dB insertion loss, 23.4-dB peak isolation). Well-performing parameters are highlighted with blue, green and orange circles. c , Correlations of the isolation and insertion losses from b . d , Pump-power-dependent isolation for the three highlighted rings. Full size image As these isolators are integrated and can have low insertion loss, it is possible to fabricate and cascade multiple devices on the same chip, enabling an exponential enhancement in isolation (Fig. 4a ). To test this, we fabricated two rings, the second slightly red-detuned from the first. This allows for the thermal shift to bring both rings onto resonance and lock them there. The isolation is maximized and overall insertion loss minimized at a given pump power when the second ring is red-detuned by a factor of the single ring insertion loss times the thermal pulling of the first ring (Supplementary Section 8 ). To characterize the isolation of cascaded rings, we first measure the power-dependent isolation of a single ring (Fig. 4c ), using the same pump–probe measurement as described in Fig. 2a . We then repeat this measurement for two cascaded rings, one slightly red-detuned from the second. These results are shown in Fig. 4d,e . The multiplicative effect of the cascaded rings enables us to achieve an isolation of 35 dB with an insertion loss of ~5 dB. Fig. 4: Isolator cascade. a , Schematic of cascaded isolator rings. b , Optical micrograph of fabricated cascaded isolator rings. Scale bar, 200 μm. c , Theoretical (dashed line) and experimental (blue data points) power-dependent single-ring isolation. d , Transmission in the forwards and backwards direction from the cascaded isolator rings with a 110-mW pump. e , Theoretical (dashed line) and experimental (blue data points) power-dependent isolation of cascaded rings. 
The theoretical fit is calculated by multiplying the isolation ratio from a single ring to a second ring redshifted from the first. Measurements start from 40 mW, as this much pump power is needed to overlap the two ring resonances. Full size image Finally, we demonstrate isolation using a distributed-feedback (DFB) laser chip (Fig. 5a ). To maximize the on-chip pump power, we couple the DFB laser to the chip using an oxide-clad inverted taper designed to match the output mode of the laser 4 . We first characterize the isolation by coupling the DFB laser to a lensed fibre and performing a pump–probe measurement, similar to Fig. 2a . To tune the DFB laser across the ring resonance we modulate its temperature using a Peltier device and a thermistor for feedback. We observe isolation up to 13.6 dB with 65-mW input power (Fig. 5b ), slightly lower than before due to the small reduction in the Q factor. We then directly butt-couple the DFB laser and isolator, and thermally lock the ring to the laser. To verify its isolation, we send pulses backwards through the device using a secondary laser, and measure their transmission (Fig. 5c,d ). To ensure that the secondary laser is at the same frequency as the DFB, we mix the laser outputs on a photodiode and minimize their beat-tone. Fig. 5: DFB hybrid integration. a , Optical image of hybrid integration of a DFB laser with the isolator. b , Power-dependent isolation measured with the amplified DFB laser. Blue data points show measurement and dashed line shows theoretical fit. c , Schematic of experimental measurement set-up for direct measurement of the hybrid integrated DFB–isolator operation. d , Transmission of backwards pulses with the directly coupled DFB laser on and off. Full size image Conclusion We have demonstrated on-chip optical isolators utilizing the Kerr effect that are fully passive. By tuning the coupling parameters we trade off between insertion loss and isolation, demonstrating devices with an insertion loss of only 1.8 dB with 17-dB isolation, and single-ring isolation of up to 23 dB. Due to the integrated nature of these isolators, they can be easily cascaded to improve performance. By cascading two rings, we achieve 35-dB isolation with 5-dB insertion loss. Finally, we demonstrate the application of such a device to isolate the output of an edge-coupled DFB laser chip. As these devices are fully passive and magnet-free, they require no external drive and can operate without generating any electromagnetic interference or magnetic field background. In spite of this, their performance is still competitive with state-of-the-art active and magnetic integrated isolators (Supplementary Table 1 ) 11 , 13 , 15 , 18 , 19 , 20 , 40 , 41 , 42 . Furthermore, better-controlled fabrication from commercial foundries will allow for higher quality factors 43 and enable cascading of more than two rings, pushing the power threshold for 20-dB isolation down to below 2 mW and the achievable isolation to over 70 dB (Supplementary Section 10 ). As many hybrid and heterogeneously integrated optical systems already contain high-quality photonics in Kerr materials, this type of isolator can be immediately incorporated into state-of-the-art integrated photonics. Methods Device fabrication Thin-film silicon nitride (310 nm) was deposited on a silicon dioxide/silicon carrier wafer using low-pressure chemical vapour deposition. The isolator device patterns were defined using electron-beam lithography (JEOL JBX-6300FS), using ZEP520A as the electron resist. 
Post development, the patterns were transferred onto silicon nitride by inductively coupled plasma etching with CHF 3 /CF 4 chemistry. After the etch, the resist was removed using Piranha solution, and the silicon-nitride chips were subsequently annealed in a N 2 environment at 1,100 °C. Isolator measurements A scanning laser (Toptica) was split into two paths using a directional coupler. One path served as the pump and one as the probe. The pump path was passed through a polarization controller and was amplified by an erbium-doped fibre amplifier (EDFA; IPG) before being sent to the chip. The probe path was modulated using an EOM (Optilab), passed through a polarization controller, and amplified by an EDFA (Thorlabs) before being sent to the chip. The backwards transmission was measured using a photodiode (Thorlabs), a lock-in amplifier (Stanford Instruments) and an oscilloscope (Rigol). Inverse-designed grating couplers optimized for transmission at 1,550 nm were used to couple to and from the chip. To minimize leakage from the probe input fibre to the detection fibre, the grating inputs were oriented perpendicular to each other. For fixed and scanning measurements of power-dependent isolation, the EOM was modulated with a 90-kHz signal from an arbitrary waveform generator (Rigol) and the same signal was used for lock-in detection. For frequency-dependent measurements, the EOM was driven by an amplified (Minicircuits) 90-kHz lock-in signal mixed (Minicircuits) with a high-frequency modulation from a frequency synthesizer (Rohde and Schwarz). DFB operation The DFB laser was driven by a precision source (Keithley) with 380-mW electrical power. To thermally stabilize and tune the frequency of the laser, the laser mount was cooled by a Peltier device using a 10-kΩ thermistor to provide feedback control with a temperature controller (Thorlabs). Data availability All data are available from the corresponding authors upon reasonable request.
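As an aside on the coupling trade-off discussed above, the expressions for the power transferred into the ring, the loaded-Q scaling, and the insertion loss can be evaluated directly. The short Python sketch below does this for a few hypothetical coupling rates (the rate values are invented for illustration and are not the parameters of the fabricated devices):

import numpy as np

def ring_metrics(k1, k2, gamma):
    # Figures of merit quoted in the text for a ring coupled to two waveguides:
    # k1, k2 are the input/output coupling rates, gamma the intrinsic loss rate.
    total = k1 + k2 + gamma
    power_into_ring = 4 * k1 * (k2 + gamma) / total**2  # power "seen" by the ring
    q_scale = 1.0 / total                               # loaded Q scales as 1/(k1 + k2 + gamma)
    transmission = 4 * k1 * k2 / total**2               # forward transmission (the insertion-loss expression)
    return power_into_ring, q_scale, -10 * np.log10(transmission)

gamma = 1.0  # intrinsic loss rate, arbitrary units (hypothetical)
for k1, k2 in [(0.5, 0.5), (2.0, 2.0), (2.0, 0.5), (8.0, 8.0)]:
    p, q, il = ring_metrics(k1, k2, gamma)
    print(f"k1={k1:3.1f}, k2={k2:3.1f}: ring power={p:.2f}, Q-scale={q:.3f}, insertion loss={il:.2f} dB")

Consistent with the trends measured across the fabricated array, over-coupling (k1, k2 much larger than gamma) pushes the insertion loss towards 0 dB but shrinks the loaded Q, and with it the achievable isolation.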
Lasers are transformational devices, but one technical challenge prevents them from being even more so. The light they emit can reflect back into the laser itself and destabilize or even disable it. At real-world scales, this challenge is solved by bulky devices that use magnetism to block the harmful reflections. At chip scale, however, where engineers hope lasers will one day transform computer circuitry, effective isolators have proved elusive. Against that backdrop, researchers at Stanford University say they have created a simple and effective chip-scale isolator that can be laid down in a layer of semiconductor-based material hundreds of times thinner than a sheet of paper. "Chip-scale isolation is one of the great open challenges in photonics," said Jelena Vučković, a professor of electrical engineering at Stanford and senior author of the study appearing Dec. 1 in the journal Nature Photonics. "Every laser needs an isolator to stop back reflections from coming into and destabilizing the laser," said Alexander White, a doctoral candidate in Vučković's lab and co-first author of the paper, adding that the device has implications for everyday computing, but could also influence next-generation technologies, like quantum computing. Small and passive The nanoscale isolator is promising for several reasons. First, this isolator is "passive." It requires no external inputs, complicated electronics, or magnetics—technical challenges that have stymied progress in chip-scale lasers to date. These additional mechanisms lead to devices that are too bulky for integrated photonics applications and can cause electrical interference that compromises other components on the chips. Another advantage is that the new isolator is made from a common and well-known semiconductor-based material and can be manufactured using existing semiconductor processing technologies, potentially easing its path to mass production. The new isolator is shaped like a ring. It is made of silicon nitride, a material based on the most commonly used semiconductor—silicon. The strong primary laser beam enters the ring and the photons begin to spin around the ring in a clockwise direction. At the same time, a back-reflected beam would be sent back into the ring in the opposite direction, spinning in a counterclockwise fashion. "The laser power that we put in circulates many times and this allows us to build up power inside the ring. This increasing power alters the weaker beam, while the stronger one continues unaffected," explains co-first author Geun Ho Ahn, a doctoral candidate in electrical engineering, of the phenomenon that causes the weaker beam to stop resonating. "The reflected light, and only the reflected light, is effectively canceled." The primary laser then exits the ring and is "isolated" in the desired direction. Vučković and team have built a prototype as a proof of concept and were able to couple two ring isolators in a cascade to achieve better performance. "Next steps include working on isolators for different frequencies of light," said co-author Kasper Van Gasse, a post-doctoral scholar in Vučković's lab, "as well as tighter integration of components at chip scale to explore other uses of the isolator and improve performance."
10.1038/s41566-022-01110-y
Physics
Researchers unveil the origin of Oobleck waves
Baptiste Darbois Texier et al. Surface-wave instability without inertia in shear-thickening suspensions, Communications Physics (2020). DOI: 10.1038/s42005-020-00500-4 Journal information: Communications Physics
http://dx.doi.org/10.1038/s42005-020-00500-4
https://phys.org/news/2020-12-unveil-oobleck.html
Abstract Recent simulations and experiments have shown that shear-thickening of dense particle suspensions corresponds to a frictional transition. Based on this understanding, non-monotonic rheological laws have been proposed and successfully tested in rheometers. These recent advances offer a unique opportunity for moving beyond rheometry and tackling quantitatively hydrodynamic flows of shear-thickening suspensions. Here, we investigate the flow of a shear-thickening suspension down an inclined plane and show that, at large volume fractions, surface kinematic waves can spontaneously emerge. Curiously, the instability develops at low Reynolds numbers, and therefore does not fit into the classical framework of Kapitza or ‘roll-waves’ instabilities based on inertia. We show that this instability, that we call ‘Oobleck waves’, arises from the sole coupling between the non-monotonic (S-shape) rheological laws of shear-thickening suspensions and the flow free surface. Introduction How microscopic interactions affect the macroscopic flow behavior of complex fluids is at the core of soft matter physics. Recently, it has been shown that shear-thickening in dense particulate suspensions corresponds to a frictional transition at the microscopic scale; when the imposed shear stress exceeds the inter-particle short-range repulsive force, the grain contact interaction transitions from frictionless to frictional 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 . During this transition, the proliferation of frictional contacts can be so massive that it triggers a remarkable macroscopic rheological response: the rate of shear of the suspension decreases when the imposed shear stress is increased. As a result, highly concentrated shear-thickening suspensions have peculiar S-shape rheological laws 9 , 10 , which have been rationalized by a frictional transition model 3 , 11 , 12 . So far, the consequences of the frictional transition and its associated S-shape rheology have been essentially investigated in rheometers, where instabilities, shear bands and spatiotemporal patterns have been documented 11 , 13 , 14 , 15 . By contrast, very little is known about the behavior of shear-thickening suspensions in real hydrodynamic flow configurations beyond rheometry, in spite of the numerous applications 16 , 17 , 18 . An archetypical case, which is widely encountered in industrial and geophysical applications, is the inclined-plane flow configuration. As previously reported 19 and illustrated in Fig. 1 a (see also Supplementary movie 2 ), when a thin layer of shear-thickening suspension flows down an inclined plane, surface waves of wavelengths much larger than the thickness can develop spontaneously and grow as they propagate downstream. This longwave free-surface instability may seem reminiscent of the Kapitza instability observed when a thin liquid film flows down a slope 20 , 21 , or more generally of the so-called “roll waves” instability observed in systems ranging from turbulent flows in open channels 22 , 23 , 24 to avalanches of complex fluids like mud 25 , 26 or granular media 27 . These latter two instabilities rely on the same primary mechanism: the amplification of kinematic surface waves at high velocity owing to inertial effects 28 . 
For a Newtonian liquid in the laminar regime, the destabilization occurs only when the Reynolds number of the flow, R e = ρ u 0 h 0 / η , where ρ is the fluid density, u 0 its mean velocity, h 0 the flow thickness and η the fluid viscosity, exceeds the Kapitza threshold, \(R{e}_{K}=5/(6\tan \theta )\) , which is typically much larger than 1 for a small tilting angle θ of the incline 29 , 30 , 31 . By contrast, the growth of surface waves observed in Fig. 1 a for a dense shear-thickening suspension occurs at a Reynolds number of only ≈1, i.e., far below the Kapitza threshold R e K ≈ 5 predicted for θ = 10 ∘ (see also ref. 19 ). This suggests that a different instability mechanism is at play for dense shear-thickening suspensions, yet its origin remains an open question. Fig. 1: Experimental characterization of the instability onset. a Non-inertial surface waves emerging spontaneously when a concentrated suspension of cornstarch particles flows down an incline (volume fraction ϕ = 0.45, inclination angle θ = 10 ∘ and normalized flow Reynolds number R e / R e K ≈ 0.2). b Sketch of the experimental setup. We use the progressive drainage of the reservoir to quasi-steadily vary the flow rate. The instability onset is determined by measuring the wave amplitudes both at the top and at the bottom of the incline with two laser sheets and cameras. For ϕ ≤ 0.4, an oscillation of the gate is added to impose a controlled perturbation. c Spatiotemporal plots of the laser sheet transverse-position versus time, indicating the vertical oscillations (blue and red arrows) of the free surface, at the top and at the bottom of the incline ( ϕ = 0.33, θ = 2 ∘ , R e ≈ 37). d Reynolds number of the flow, R e , and e amplitude of the perturbation at the top, Δ h 1 , and at the bottom, Δ h 2 , during the drainage of the suspension reservoir ( ϕ = 0.36, θ = 3 ∘ ). The instability onset ( Δ h 1 = Δ h 2 ) is given by R e c ≈ 28 (black-dashed-line). Full size image Here, we investigate the origin of this instability by studying the flow of a shear-thickening suspension down an inclined plane over a wide range of volume fractions and flow rates. We confirm that this instability is not inertial and is fundamentally different from the classical Kapitza or roll waves instabilities. We provide experimental evidence together with a theoretical explanation, which show that this destabilization arises from the coupling between the flow free surface and the non-monotonic (S-shape) rheological laws of shear-thickening suspensions. Results and discussion Evidence of an instability distinct from the classical Kapitza or roll waves instabilities We perform experiments with shear-thickening aqueous suspensions of commercial native cornstarch (Maisita®). We vary the particle volume fraction over a wide range (0.30 < ϕ < 0.48) and characterize the onset of instability (the value of ϕ refers to the dry volume of cornstarch computed from its dry weight and density, 1550 kg m −3 ). We use a 1 m long and 10 cm wide inclined plane covered with a diamond lapping film (663-3M with roughnesses of ~45 μm) to ensure rough boundary conditions. The suspension is released from a reservoir at the top of the plane through a gate with an adjustable aperture (Fig. 1 b). A scale, placed at the bottom end of the incline (not shown in the schematic), provides the instantaneous flow rate q of the suspension. 
To probe the stability of this free-surface flow for moderate volume fractions ϕ ≤ 0.4, the gate is mounted on a translating stage imposing a small sinusoidal modulation of its aperture (3 Hz, ±100 μm). At large volume fractions ( ϕ ≳ 0.4), however, no forcing is required because the flow is so unstable that it is dominated by noise amplification of its most unstable mode. Two low-incidence laser sheets and two cameras are used to measure the mean film thickness h 0 ~2−10 mm, and the crest-to-crest amplitude of the waves upstream and downstream of the incline (Fig. 1 c). The calibration of the laser sheet projections on the film surface yields a local measurement of h 0 with a precision of 10 μm. We use a protocol designed to characterize the instability onset with a single experiment for each volume fraction ϕ and inclination θ . The flow rate is varied quasi-steadily, either using the progressive drainage of the reservoir (decreasing flow rate) or by slowly increasing the gate aperture (increasing flow rate). At each instant, an effective Reynolds number of the flow is computed from the instantaneous flow rate q and from the mean film thickness h 0 using the relation \(Re=3{q}^{2}/(g{h}_{0}^{3}\sin \theta )\) , where g is the gravitational acceleration. Note that the flow rate is varied sufficiently slowly that, at each instant, it is constant along the incline. With such a definition based on mean quantities only, the Reynolds number can be applied to any rheology and is directly related to the Froude number \(F={u}_{0}^{2}/(g{h}_{0}\cos \theta )\) commonly used to describe roll waves 24 by \(Re={3F}/{\tan} \theta\) , where u 0 = q / h 0 is the mean flow velocity. R e also reduces to the standard expression R e = ρ u 0 h 0 / η for a Newtonian fluid of viscosity η and density ρ in the laminar regime (Nusselt velocity profile). Figure 1 d shows the concomitant evolution of R e and of the wave amplitude upstream ( Δ h 1 ) and downstream ( Δ h 2 ) of the incline during the drainage of the reservoir, starting from an unstable situation where Δ h 2 > Δ h 1 . The stability onset is precisely reached when Δ h 1 = Δ h 2 , which provides us with the critical Reynolds number R e c (dashed line in Fig. 1 c), the critical flow thickness h c , and the critical basal shear stress, \({\tau }_{c}=\rho g{h}_{c}\,\sin \theta\) . We have verified that the same instability onset is obtained by carrying out successive steady-state measurements at various constant flow rates. A freshly prepared suspension of cornstarch is used for each measurement. Experiments are repeated four times for each volume fraction. Figure 2 a shows the critical Reynolds number R e c , normalized by the Kapitza threshold for a Newtonian fluid, as a function of ϕ . For the lowest volume fraction investigated, ϕ = 0.30, the stability threshold is close to R e K (it is typically 50% above, owing to the finite forcing frequency and finite width of the plane 32 , 33 ). As the volume fraction is increased, R e c becomes increasingly larger relative to R e K , reaching ≈5 R e K at ϕ = 0.41. This behavior is actually expected from Kapitza’s inertial mechanism for a medium that is continuously shear-thickening 34 , like our cornstarch suspension over that range of volume fractions (0.35 ≲ ϕ ≲ 0.41). More strikingly, for ϕ ≳ 0.41 the relative critical Reynolds number drops drastically, down to two orders of magnitude below the Kapitza threshold at the largest volume fraction investigated ( ϕ = 0.48, for which R e ≈ 0.15 and F ≈ 10 −2 ). 
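As a quick numerical check of the thresholds quoted in this section (a minimal Python sketch; the angle and the two formulas come from the text, while the flow rate and thickness are invented values):

import numpy as np

def kapitza_threshold(theta_deg):
    # Kapitza threshold Re_K = 5/(6 tan(theta)) for a Newtonian film
    return 5.0 / (6.0 * np.tan(np.radians(theta_deg)))

def effective_reynolds(q, h0, theta_deg, g=9.81):
    # Re = 3 q^2 / (g h0^3 sin(theta)), with q the flow rate per unit width (m^2 s^-1)
    return 3.0 * q**2 / (g * h0**3 * np.sin(np.radians(theta_deg)))

print(kapitza_threshold(10.0))               # ~4.7, the Re_K ~ 5 quoted for theta = 10 deg
print(effective_reynolds(2e-4, 5e-3, 10.0))  # ~0.6 for these invented q and h0, well below Re_K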
Clearly, in this domain, the flow destabilization can no longer be explained within the Kapitza framework since inertial effects are negligible. Note that inertia is also negligible at the particle scale, since the Stokes number, \(St \sim {(d/{h}_{0})}^{2}Re\) , where d ~ 10 μm is the particle size, is ~10 5 times smaller than R e . As shown in Fig. 2 b, this qualitative change in the onset of instability around ϕ = 0.41 is also observed in the evolution of the critical shear stress τ c with ϕ . Similarly, the surface wave speed at the instability threshold changes significantly and abruptly. Its value c c , normalized by the mean flow velocity u 0 , drops from 3, which is Kapitza’s prediction for a Newtonian layer in the long wavelength limit, to ~2 for volume fractions exceeding ≈0.41 (see Fig. 2 c). These results confirm that above ϕ ≈ 0.41, a longwave free-surface instability, which is fundamentally distinct from the Kapitza instability, emerges. In the following, we call this instability, which to our knowledge has no equivalent in classical fluids, “Oobleck waves”. Fig. 2: Onset of instability. a Critical Reynolds number of the instability normalized by the Kapitza threshold R e c / R e k versus volume fraction ϕ . Inset: R e c / R e k versus inclination angle θ for ϕ = 0.45. b Critical shear stress τ c versus ϕ . c Normalized critical wave speed c c / u 0 versus ϕ . Inset: c c / u 0 versus speed of the kinematic waves c kin / u 0 , the black line shows that the waves propagate at the speed of the surface kinematic waves. Dashed-blue-line: Kapitza prediction (inertia+Newtonian fluid). Solid-red-line: prediction of the linear stability analysis ( \(A={\rm{d}}\tilde{\dot{\gamma }}/{\rm{d}}{\tilde{\tau }}_{b}{| }_{{\tilde{\tau }}_{b} = 1}=0\) ) describing the coupling between the flow free-surface and the suspension shear-thickening rheology. Different symbols indicate different inclination angles θ : ◇ 2 ∘ , ▿ 3 ∘ , ⊳ 6 ∘ , ⊲ 9 ∘ , ○ 10 ∘ , △ 22 ∘ . The error bars indicate the standard deviation between experimental measurements. Different background colors highlight which instability emerges: Kapitza (gray) or Oobleck waves (red). Full size image Oobleck waves arise from the S-shape rheology and kinematic wave propagation To understand the origin of Oobleck waves, we characterize the rheology of the cornstarch suspension in a cylindrical-Couette rheometer (Fig. 3 ). We find that the volume fraction at which Oobleck waves appear ( ϕ ≈ 0.41) corresponds precisely to ϕ DST , the volume fraction at which the shear-thickening transition becomes discontinuous. Indeed, we analyze our rheological data with Wyart & Cates’s model, which assumes that the effective viscosity of the suspension, \(\eta (\phi ,\tau )={\eta }_{s}{({\phi }_{J}(\tau )-\phi )}^{-2}\) , diverges at a critical volume fraction, ϕ J , that depends on the applied shear stress, τ , according to \({\phi }_{J}(\tau )={\phi }_{0}(1-{e}^{-{\tau }^{* }/\tau })+{\phi }_{1}{e}^{-{\tau }^{* }/\tau }\) , where ( η s , τ * , ϕ 0 , ϕ 1 ) are material constants. Here, η s is a prefactor proportional to the solvent viscosity, τ * is the short-range repulsive stress scale above which the frictional transition occurs, which may be tuned by changing the particle roughness or surface chemistry, and ϕ 0 (resp. ϕ 1 ) is the jamming volume fraction at which the suspension viscosity diverges at low (resp. large) stress, with ϕ 1 being dependent on the inter-particle friction coefficient 3 . 
By fitting our measurements with this model, we find that the rheological curve (shear stress τ versus shear rate \(\dot{\gamma }\) ) becomes S-shaped when ϕ ≥ 0.41 ± 0.005 = ϕ DST (see Fig. 3 and Methods for the fitting procedure). This suggests that the negatively sloped portion of the rheological curve ( \({\rm{d}}\dot{\gamma }/{\rm{d}}\tau \, < \, 0\) ) is a key ingredient of the instability. S-shaped flow curves, and more generally rheograms with a negatively sloped region, are known to produce unstable flow conditions 35 , 36 . Previous studies on shear-thickening suspensions have based their analysis on this feature to explain, for instance, the emergence of random fluctuations 37 reported initially by Boersma et al. 38 , and the oscillations observed when an object moves in a shear-thickening fluid 39 , 40 or in rheometric configurations 15 , 41 . However, all these models require inertia to predict an instability. By contrast, here, the instability seems to be of a fundamentally different nature. First, it can occur at very low Reynolds and Froude numbers, for which inertial effects are negligible. Second, at the instability onset, the unstable mode propagates at the speed of the surface kinematic waves defined by c kin ≡ d q /d h 0 28 (Fig. 2 c inset). This indicates that the coupling between the flow and the free-surface deformation, which was not considered in previous studies, is essential to explain the emergence of Oobleck waves. Fig. 3: Rheograms of the aqueous cornstarch suspension. Shear-stress τ versus shear rate \(\dot{\gamma }\) for various volume fractions ϕ . Solid lines: fit by Wyart & Cates rheological laws setting the jamming volume fraction for frictionless and frictional particles to ϕ 0 = 0.52 ± 0.005 and ϕ 1 = 0.43 ± 0.005, respectively, the short-range repulsive stress scale above which the frictional transition occurs to τ * = 12 ± 2 Pa and the prefactor to η s = 0.91 ± 0.01 mPa s. The rheograms are negatively sloped ( \({\rm{d}}\dot{\gamma }/{\rm{d}}\tau <0\) ) in the region highlighted in blue. Full size image Oobleck waves instability mechanism We now show how the negative slope in the rheology, coupled with a gravity-driven free-surface flow, can give rise to an instability without invoking inertial effects. In the zero-Reynolds-number limit, the force balance on a slice of suspension, as depicted in Fig. 4 a, imposes that the basal stress, τ b , is equal to the sum of the projected weight of the slice, \(\rho gh\sin \theta\) , where h ( x , t ) is the local flow thickness, and of the longitudinal pressure gradient induced by the free-surface deflection. For wavelengths much larger than the flow thickness and the capillary length, the pressure profile perpendicular to the plane can be assumed to be hydrostatic \(P(x,z,t)=\rho g\cos \theta (h(x,t)-z)\) , where x is the flow direction and z the height within the flowing layer in the perpendicular direction 21 . The depth-averaged force balance is then given by $${\tau }_{b}=\rho gh\sin \theta -\rho gh\cos \theta \frac{\partial h}{\partial x}.$$ (1) Let us now consider a perturbation of a base flow of constant thickness, as illustrated in Fig. 4 b. A local increase of the flow thickness implies that ∂ h /∂ x becomes positive upstream of the perturbation and negative downstream. To satisfy the force balance ( 1 ), the basal shear stress τ b upstream must therefore decrease, whereas it must increase downstream. 
However, owing to the S-shape rheology of the suspension, when \({\rm{d}}\dot{\gamma }/{\rm{d}}\tau \, < \, 0\) , a decrease (resp. increase) in τ b implies a local increase (resp. decrease) of the shear rate \(\dot{\gamma }\) . Therefore, the shear rate increases upstream and decreases downstream, inducing a net inward mass flux underneath the bump and the amplification of the initial perturbation (red arrows in Fig. 4 b). Fig. 4: Instability mechanism. a Depth-averaged forces acting along the flow x direction on a slice of suspension of width d x (shaded in blue): basal force τ b d x , projected weight of the slice \(\rho ghdx\sin \theta\) and hydrostatic pressure \(\rho g\cos \theta {h}^{2}(x)/2\) , where τ b is the basal shear stress, g is gravity, h ( x ) is the flow thickness, θ is the plane inclination angle. b Positive feedback for a shear-thickening suspension with an S-shaped rheological curve: a local increase of the flow thickness h implies that ∂ h /∂ x is positive (resp. negative) upstream (resp. downstream) of the perturbation. Force balance (see Eq. ( 1 )) then implies that the basal shear stress τ b upstream (resp. downstream) must decrease (resp. increase). When the suspension rheogram is negatively sloped ( \({\rm{d}}\dot{\gamma }/{\rm{d}}\tau <0\) ), this yields a local increase (resp. decrease) of the shear rate \(\dot{\gamma }\) . The combination of these two feedback cycles (gray arrows) induces a net inward mass flux towards the bump (red arrows) that amplifies the initial perturbation (red vertical arrow). Full size image Quantitative depth-averaged model without inertia To go beyond this qualitative picture, we perform a linear stability analysis of a steady uniform flow of thickness h 0 and depth-averaged velocity u 0 using the Saint-Venant approximations (long wavelength limit 27 , 42 ) and neglecting inertia (lubrication approximation 21 ). The equations are written using the dimensionless variables \(\tilde{h}=h/{h}_{0}\) , \(\tilde{x}=x/{h}_{0}\) , \(\tilde{u}=u/{u}_{0}\) , \(\tilde{t}=t{u}_{0}/{h}_{0},\) \({\tilde{\tau }}_{b}={\tau }_{b}/\rho g{h}_{0}\sin \theta\) and linearized by writing \(\tilde{h}=1+{h}_{1}\) , \(\tilde{u}=1+{u}_{1}\) and \({\tilde{\tau }}_{b}=1+{\tau }_{1}\) with ( h 1 , u 1 , τ 1 ≪ 1). Under these conditions, the mass conservation, \({\partial}_{\tilde{t}} h+{\partial}_{\tilde{x}}(hu)=0\) , and the force balance ( 1 ) become \({\partial}_{\tilde{t}} {h_{1}}+{\partial}_{\tilde{x}}{h_{1}}+{\partial}_{\tilde{x}}{u_{1}}=0\) and \({\tau }_{1}={h}_{1}-\tan {\theta }^{-1}{\partial }_{\tilde{x}}{h}_{1}\) , respectively. The linearization of the normalized shear-rate \(\tilde{\dot{\gamma }}(\tilde{{\tau }_{b}})\equiv \tilde{u}/\tilde{h}\) gives A τ 1 = u 1 − h 1 , where \(A={\rm{d}}\tilde{\dot{\gamma }}/{\rm{d}}{\tilde{\tau }}_{b}{| }_{{\tilde{\tau }}_{b} = 1}\) is the slope of the rheological curve for the base state basal stress \({\tau }_{b}=\rho g{h}_{0}\sin \theta\) , obtained from the integration of the flow velocity profile (see Methods). 
Taking the spatial derivative of the force balance and substituting τ 1 and \({\partial}_{\tilde{x}}{u_{1}}\) using the rheology and mass balance, respectively, leads to a single partial differential equation for the free-surface perturbation h 1 : $$\frac{\partial {h}_{1}}{\partial \tilde{t}}+\tilde{c}\frac{\partial {h}_{1}}{\partial \tilde{x}}=\frac{A}{\tan \theta }\frac{{\partial }^{2}{h}_{1}}{\partial {\tilde{x}}^{2}},$$ (2) where \(\tilde{c}={c}_{{\rm{kin}}}/{u}_{0}=2+A\) is the dimensionless speed of the kinematic waves 28 . Interestingly, the perturbation amplitude is found to follow a diffusion equation in the reference frame of the kinematic waves, with an effective “diffusion coefficient” \(A/\tan \theta\) . When A < 0, i.e., when the slope of the rheological law \({u}_{0}/{h}_{0}=\dot{\gamma }({\tau }_{b})\) is negative, anti-diffusion occurs, which leads to an amplification of all perturbations, whereas for A > 0 the flow is stable. The onset of instability is thus given by A = 0. This criterion can be expressed in terms of a critical Reynolds number R e c and a critical basal shear stress τ c using the Wyart & Cates rheological laws (see Methods). Figures 2 a, b show that, for ϕ ≥ ϕ DST , these predictions (red-solid lines) capture very well the value of the critical Reynolds number R e c and its dramatic drop over two decades when increasing ϕ , as well as the order of magnitude and the drop of the critical shear stress τ c with ϕ . The decrease of the instability onset with increasing ϕ is a direct consequence of Wyart and Cates’ rheological laws 3 , where the DST onset stress also decreases with ϕ . Physically, it comes from the fact that, when approaching the maximal possible packing fraction ϕ 0 , fewer and fewer frictional contacts are required to reach the DST region. The model also predicts a weak dependence of R e c on the plane inclination angle, as observed experimentally (Fig. 2 a inset). This overall agreement is all the more conclusive in that it is rooted in physically based constitutive laws, whose rheological parameters are measured independently, without further fitting. Another strong prediction of the model is that, at the instability onset ( A = 0), the speed of the unstable mode is equal to the speed of the kinematic waves c c / u 0 = 2. This prediction is fully consistent with the drop and value of the normalized wave speed observed experimentally for ϕ > ϕ DST (see Fig. 2 c). These results conclusively show that surface waves can emerge from the coupling between a negatively sloped rheology and the flow free surface, without the need for inertial effects. The Oobleck waves instability mechanism highlighted in this study is not limited to shear-thickening suspensions and could be extended to any other complex fluid having a rheology with a negatively sloped region (e.g., granular materials and geomaterials exhibiting velocity-weakening rheology 43 , 44 , concentrated polymers or surfactant solutions 35 , 36 , liquid crystals 45 , active self-propelled suspensions 46 ). More generally, our analysis shows that gravity forces, which are usually stabilizing for gravity-driven free-surface flows, can become destabilizing in the presence of a non-monotonic rheology. Our result could thus be extended to other stabilizing forces such as capillary forces arising from the free-surface deformation. We therefore anticipate that other interesting instabilities may be explained directly by, or in the light of, our study. 
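The stability criterion can also be read off mode by mode: substituting a Fourier perturbation h 1 ∝ exp(σ t̃ + i k x̃) into equation ( 2 ) gives σ = −i k c̃ − ( A /tan θ ) k 2 , so the real part of σ is positive for every wavenumber as soon as A < 0. A minimal numerical sketch in Python (all parameter values are illustrative):

import numpy as np

def sigma(k, A, theta_deg):
    # Growth rate of a mode h1 ~ exp(sigma*t + i*k*x) of Eq. (2),
    # with dimensionless kinematic wave speed c = 2 + A
    return -1j * k * (2.0 + A) - (A / np.tan(np.radians(theta_deg))) * k**2

k, theta = 0.1, 10.0           # illustrative dimensionless wavenumber and incline angle
for A in (0.5, 0.0, -0.5):     # slope of the rheological curve at the base state
    s = sigma(k, A, theta)
    print(f"A={A:+.1f}: Re(sigma)={s.real:+.4f}, wave speed c={2.0 + A:.1f}")

For A = 0 the mode is marginal and travels at c̃ = 2, matching the kinematic-wave speed measured at the onset, while any A < 0 yields growth at all wavenumbers.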
Finally, in a broader context, our study reveals that kinematic waves can be unstable in an overdamped medium, where inertia is negligible. These waves, which are observed in a wide range of situations (e.g., traffic and pedestrian flows 47 , sediment transport 48 , fluidized beds 49 , surges and floods 23 ), result from mass conservation and a general relationship between a local flow rate and a local concentration 50 , 51 (e.g., number of vehicles or pedestrians on a road, solid packing fraction in a suspension, depth of the flow). However, in all these systems, their emergence from a uniform state requires an inertial lag between the flow rate and the concentration 28 . In our system, the situation is very different as these waves are unstable not from inertia, but from the intrinsic constitutive flow rule of the material. Whether this description of kinematic waves could be extended to more complex systems, such as crowds 47 or active self-propelled particles 46 , is an interesting question to address in future studies. Methods Rheological data fitting procedure Rheograms of the aqueous suspension of cornstarch are measured for various volume fractions in a narrow-gap cylindrical-Couette cell (Fig. 5 a) using a rheometer (Anton Paar MCR 501). The height of the shear-cell (40 mm) is sufficiently large to neglect sedimentation effects of the particles during the measurement. Similarly to the procedure followed by Guy et al. 4 , the viscosities below ( τ ≪ τ * ) and above ( τ ≫ τ * ) the shear-thickening transition are extracted and plotted versus ϕ (Fig. 5 b). The low-viscosity branch (frictionless branch) is first fitted with \(\eta (\phi )={\eta }_{s}{({\phi }_{0}-\phi )}^{-2}\) , with η s and ϕ 0 as fitting parameters. This yields η s = 0.91 ± 0.01 mPa s and ϕ 0 = 0.52 ± 0.005. The large-viscosity branch (frictional branch) is then fitted with \(\eta (\phi )={\eta }_{s}{({\phi }_{1}-\phi )}^{-2}\) , using the previous estimation of η s , and leaving ϕ 1 as the only fitting parameter. Note that the rheograms are obtained using both very rough (square symbols) and rough walls (circle symbols), by covering the cell walls with sand papers of different grades (roughnesses of ≈80 μm and ≈15 μm, respectively). The two measurements overlap, except in the frictional branch at high volume fraction (shaded symbols in Fig. 5 b). For instance, the data from ϕ = 0.4 and 0.41 are included in the fitting procedure, while 0.42 and 0.43 are not, because the first two points overlap, independently of the roughness of the boundaries, whereas for 0.42 and 0.43 systematic deviations are observed, indicating slippage or other artefacts. All data points which are interpreted as biased measurements (transparent symbols) are discarded from the fitting procedure; this yields ϕ 1 = 0.43 ± 0.005. Once the values of η s , ϕ 0 and ϕ 1 are set, we determine the value of τ * by fitting the full rheograms \(\tau (\dot{\gamma })\) with Wyart & Cates laws: \(\tau ={\eta }_{s}{({\phi }_{J}(\tau )-\phi )}^{-2}\dot{\gamma }\) , with \({\phi }_{J}(\tau )={\phi }_{0}(1-{e}^{-{\tau }^{* }/\tau })+{\phi }_{1}{e}^{-{\tau }^{* }/\tau }\) . The best fit, shown in Fig. 5 c, is obtained for τ * = 12 ± 2 Pa. The value of τ * represents the critical shear stress required to overcome the inter-particle repulsive force and activate frictional contacts between particles. For an inter-particulate force f and a particle size d , the critical shear stress is expected to be of order f / d 2 . 
The value we obtain (≈12 Pa) is consistent with the values already reported in the literature for cornstarch in water. Fig. 5: Rheological data fitting procedure. a Cylindrical-Couette geometry used to characterize the rheology of the aqueous suspension of cornstarch (yellow). The gray area represents the cylinder rotating in the direction indicated by the black arrow. b Viscosity (Pa s) below ( τ ≪ τ * ) and above ( τ ≫ τ * ) the shear-thickening transition versus ϕ . Blue-solid-line: frictionless branch, red-solid-line: frictional branch, black-dashed-lines: jamming volume fractions for the frictionless and frictional branches defining ϕ 0 and ϕ 1 , respectively. Square and circle symbols correspond to measurements performed with different wall roughnesses. The data points made transparent are not used in the fitting procedure as, for these points, the measurements depend on the wall roughness. c Shear stress τ versus shear rate \(\dot{\gamma }\) measured at various volume fractions ϕ . Solid lines: Wyart & Cates rheological laws. The blue shaded area highlights the region where the rheograms are negatively sloped ( \({\rm{d}}\dot{\gamma }/{\rm{d}}\tau <0\) ). Full size image Computation of τ c and R e c To compute the critical shear stress τ c and the critical Reynolds number R e c from the instability criterion resulting from the linear stability analysis \(A\equiv {\rm{d}}\tilde{\dot{\gamma }}/{\rm{d}}\tilde{{\tau }_{b}}{| }_{\tilde{{\tau }_{b}} = 1}=0\) , we need to relate the shear rate \(\dot{\gamma }\equiv {u}_{0}/{h}_{0}\) , defined as the ratio of the depth-averaged flow velocity to the flow thickness, to the basal shear stress τ b and the basal suspension viscosity η ( τ b ). For a steady uniform flow down an inclined plane of slope θ , the momentum equation applied to a surface layer of thickness h 0 − z gives $$\tau (z)=\rho g\sin \theta ({h}_{0}-z)=\eta (z)\frac{{\rm{d}}\hat{u}(z)}{{\rm{d}}z},$$ (3) where the second equality uses the definition of viscosity, \(\eta =\tau /({\rm{d}}\hat{u}/{\rm{d}}z)\) , and \(\hat{u}(z)\) is the local velocity parallel to x . From the proportionality between τ and h 0 − z , the local velocity can be expressed as $$\hat{u}(\tau )=\frac{1}{\rho g\sin \theta }\mathop{\int}\nolimits_{\tau }^{{\tau }_{b}}\frac{\tau ^{\prime\prime} }{\eta (\tau ^{\prime\prime} )}{\rm{d}}\tau ^{\prime\prime} .$$ (4) Using the definition of the depth-averaged flow velocity, \({u}_{0}=\mathop{\int}\nolimits_{0}^{{\tau }_{b}}\hat{u}(\tau ^{\prime} )\ {\rm{d}}\tau ^{\prime} /{\tau }_{b}\) , we obtain the expression of the depth-averaged shear rate $$\dot{\gamma }=\frac{{\tau }_{b}}{3\eta ({\tau }_{b})}{\mathcal{G}}({\tau }_{b}),$$ (5) where $${\mathcal{G}}(\tau )=\frac{3\eta (\tau )}{{\tau }^{3}}\mathop{\int}\nolimits_{0}^{\tau }\mathop{\int}\nolimits_{\tau ^{\prime} }^{\tau }\frac{\tau ^{\prime\prime} }{\eta (\tau ^{\prime\prime} )}{\rm{d}}\tau ^{\prime\prime} {\rm{d}}\tau ^{\prime} ,$$ (6) embeds the shear-thickening of the suspension. By definition, \({\mathcal{G}}=1\) for a Newtonian fluid. 
From ( 5 ) and ( 6 ) we obtain $$A\equiv \frac{{\rm{d}}\tilde{\dot{\gamma }}}{{\rm{d}}\tilde{{\tau }_{b}}}{| }_{\tilde{{\tau }_{b}} = 1}=\frac{{\tau }_{b}}{\dot{\gamma }}\frac{{\rm{d}}\dot{\gamma }}{{\rm{d}}{\tau }_{b}}{| }_{{\tau }_{b} = \rho g\sin \theta {h}_{0}}=\frac{3}{{\mathcal{G}}({\tau }_{b})}-2,$$ (7) where we have used \(\frac{{\rm{d}}}{{\rm{d}}\tau }(\mathop{\int}\nolimits_{0}^{\tau }\mathop{\int}\nolimits_{{\tau }^{\prime}}^{\tau }\frac{{\tau }^{^{\prime\prime} }}{\eta ({\tau }^{^{\prime\prime} })}{\rm{d}}{\tau }^{^{\prime\prime} }{\rm{d}}{\tau }^{\prime})={\tau }^{2}/\eta (\tau )\) . Note that in ( 7 ), the basal stress τ b , the shear rate \(\dot{\gamma }\) and the derivative are taken at the base state, i.e., for \(\dot{\gamma }={u}_{0}/{h}_{0}\) and \({\tau }_{b}=\rho g\sin \theta {h}_{0}\) . Finally, the critical shear stress τ c , at which the flow destabilizes (black-solid-line plotted in Fig. 2 b of the main text), is obtained numerically by finding the value of τ b for which A ( τ c ) = 0, i.e., \({\mathcal{G}}({\tau }_{b}={\tau }_{c})=3/2\) . From the value of τ c , we obtain the critical Reynolds number (black-solid-line plotted in Fig. 2 (a) of the main text) $$R{e}_{c}\equiv \frac{3{u}_{0}^{2}}{g{h}_{0}\sin \theta }=\frac{3{{\tau }_{c}}^{3}{\left({\phi }_{J}({\tau }_{c})-\phi \right)}^{4}}{9{\eta }_{s}^{2}\rho {g}^{2}{\sin }^{2}\theta }{[{\mathcal{G}}({\tau }_{c})]}^{2}=\frac{3{{\tau }_{c}}^{3}{\left({\phi }_{J}({\tau }_{c})-\phi \right)}^{4}}{4{\eta }_{s}^{2}\rho {g}^{2}{\sin }^{2}\theta }.$$ (8) Data availability The data that support the findings of this study are available from Zenodo (DOI 10.5281/zenodo.4247592).
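The numerical procedure described in this Methods section is compact enough to sketch in full. The Python script below implements the Wyart & Cates viscosity with the fitted parameters quoted above (η s = 0.91 mPa s, τ* = 12 Pa, ϕ 0 = 0.52, ϕ 1 = 0.43), reduces the double integral in ( 6 ) to a single integral by swapping the order of integration, and solves G(τ c ) = 3/2 by root-finding; the volume fraction, suspension density and angle below are illustrative assumptions, not values taken from the paper:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

ETA_S, TAU_STAR, PHI0, PHI1 = 0.91e-3, 12.0, 0.52, 0.43  # Pa s, Pa, -, - (fitted values)

def phi_J(tau):
    # Stress-dependent jamming fraction
    return PHI0 * (1.0 - np.exp(-TAU_STAR / tau)) + PHI1 * np.exp(-TAU_STAR / tau)

def eta(tau, phi):
    # Wyart & Cates effective viscosity eta_s * (phi_J(tau) - phi)^-2
    return ETA_S * (phi_J(tau) - phi) ** -2

def G(tau_b, phi):
    # Eq. (6); the double integral equals int_0^tau_b s^2/eta(s) ds
    integral, _ = quad(lambda s: s**2 / eta(s, phi), 0.0, tau_b)
    return 3.0 * eta(tau_b, phi) / tau_b**3 * integral

phi = 0.45                                             # assumed volume fraction above phi_DST
tau_div = brentq(lambda t: phi_J(t) - phi, 1e-2, 1e3)  # stress where the viscosity diverges
tau_c = brentq(lambda t: G(t, phi) - 1.5, 0.1, 0.999 * tau_div)  # onset, A(tau_c) = 0
print(f"tau_c ~ {tau_c:.1f} Pa (viscosity diverges at {tau_div:.1f} Pa)")

rho, g, theta = 1250.0, 9.81, np.radians(10.0)         # assumed density (kg m^-3) and angle
Re_c = 3.0 * tau_c**3 * (phi_J(tau_c) - phi) ** 4 / (4.0 * ETA_S**2 * rho * g**2 * np.sin(theta) ** 2)
print(f"Re_c ~ {Re_c:.3g}")                            # Eq. (8) with G(tau_c) = 3/2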
"Oobleck" is a strange fluid made of equal parts of cornstarch and water. It flows like milk when gently stirred, but turns rock-solid when impacted at high speed. This fascinating phenomenon, known as shear-thickening, results in spectacular demonstrations like running on a pool of Oobleck without submerging into it, as long as the runner doesn't stop. Researchers from Aix-Marseille University in France have now studied the regular and prominent surface waves that form when a Oobleck flows down an inclined slope (see Figure 1). Similar waves can be observed on gutters and windows on rainy days. However, the scientists noted qualitative differences with water waves; waves in Oobleck grow and saturate much faster. In order to unveil the origin of Oobleck waves, they conducted careful experiments with a mixture of cornstarch and water down an inclined plane. The researchers measured the onset of wave appearance and their speed using controlled perturbation of the flow and laser detection to estimate the fluid film thickness. These experiments revealed that for concentrated Oobleck, the onset of destabilization is different for destabilization in a Newtonian fluid such as water. This surprising observation led the team to look for a scenario to explain their formation. Their results are presented in a paper published on December 18 in Communication Physics. In this article, they conclude that for Oobleck, waves do not arise from the effect of inertia, as for water, but from Oobleck's specific flowing properties. Under impact, as shown by recent studies, Oobleck suddenly changes from liquid to solid because of the activation of frictional contacts between the starch particles. When flowing down a slope, this proliferation of frictional contacts leads to a very curious behavior: The flow velocity of the suspension decreased when the imposed stress increased—like stepping on the gas pedal causing a car to decelerate. Researchers have shown that this effect couples to the flow free surface and can spontaneously generate a regular wave pattern. The proposed mechanism is generic. These findings could thus provide new grounds to understand other flow instabilities observed in various configurations, particularly in industrial processes facing problematic flow instabilities when conveying Oobleck-like materials such as concrete, chocolate or vinyl materials.
10.1038/s42005-020-00500-4
Medicine
Researchers develop new protocol to generate intestinal organoids in vitro
Aditya Mithal et al, Generation of mesenchyme free intestinal organoids from human induced pluripotent stem cells, Nature Communications (2020). DOI: 10.1038/s41467-019-13916-6 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-019-13916-6
https://medicalxpress.com/news/2020-01-protocol-intestinal-organoids-vitro.html
Abstract Efficient generation of human induced pluripotent stem cell (hiPSC)-derived human intestinal organoids (HIOs) would facilitate the development of in vitro models for a variety of diseases that affect the gastrointestinal tract, such as inflammatory bowel disease or cystic fibrosis. Here, we report a directed differentiation protocol for the generation of mesenchyme-free HIOs that can be primed towards more colonic or proximal intestinal lineages in serum-free defined conditions. Using a CDX2 eGFP iPSC knock-in reporter line to track the emergence of hindgut progenitors, we follow the kinetics of CDX2 expression throughout directed differentiation, enabling the purification of intestinal progenitors and robust generation of mesenchyme-free organoids expressing characteristic markers of small intestinal or colonic epithelium. We employ HIOs generated in this way to measure CFTR function using cystic fibrosis patient-derived iPSC lines before and after correction of the CFTR mutation, demonstrating their future potential for disease modeling and therapeutic screening applications. Introduction Three-dimensional tissue-specific organoids represent a powerful tool to study both normal development and disease. Organoids have been generated from a variety of primary tissue samples, including small intestine 1 , 2 , stomach 3 , colon 4 , and pancreas 5 . Since the discovery of the Wnt-activated LGR5 + stem cell niche at the base of small intestinal and colonic crypts 1 , previous studies have reported the generation of 3D intestinal organoids containing crypt-like structures from murine and human LGR5 + intestinal stem cells in the presence of Wnt stimulation, epidermal growth factor (EGF) signaling, and Noggin 2 . However, the invasive procedures to obtain intestinal and colonic biopsy samples present a major challenge for larger scale applications of human intestinal organoids. The discovery of induced pluripotent stem cells (iPSCs) 6 has led to the development of multiple directed differentiation protocols, resulting in the in vitro generation of various endoderm-derived tissue types of interest, including liver 7 , stomach 8 , pancreas 9 , proximal 10 , 11 , 12 and distal 13 lung, kidney 14 , as well as intestine 15 . Moreover, the three-dimensional culture systems that generate organoids allow cells to self-organize, promoting further maturation and differentiation into target cell types that more closely resemble their in vivo counterparts 16 , 17 . The efficient generation of iPSC-derived human intestinal organoids (HIOs) not only serves as a relevant tool to study development, but also has great potential for patient-specific in vitro disease modeling and high-throughput drug screening applications. HIOs positive for intestinal markers such as the intestinal homeobox transcription factor Cdx2 18 , 19 and intestinal epithelium marker Cdh17 have been generated from iPSCs using activin A to derive SOX17 + /FOXA2 + endoderm, followed by Wnt3A and FGF4 (with serum) to specify CDX2 + hindgut (Hindgut Medium), and R-spondin, EGF, and the BMP inhibitor, noggin (Intestinal Medium or IM) to promote intestinal specification and crypt-like formation 15 . More recently, distal patterning of iPSC-derived HIOs to generate SATB2 + colonic organoids was achieved through BMP2 stimulation 20 . These factors have all been shown to play a role in intestinal specification and epithelial proliferation during embryonic development 21 . 
Interestingly, this protocol often generates HIOs containing both epithelial and mesenchymal stromal cells 15 , 20 , necessitating a FACS-based approach to isolate epithelial cell adhesion molecule-positive (EpCAM + ) cells in order to interrogate epithelial-specific populations 22 , and complicating the use of these organoids in disease modeling or drug screening applications focused on epithelial-specific factors. The derivation of HIOs from intestinal crypts using the LGR5 + adult stem cell population can generate organoids in the absence of mesenchyme 2 , raising questions as to whether intestinal progenitors derived from iPSCs are comparable to native crypts in generating HIOs. Moreover, a directed differentiation protocol using fully defined culture conditions is still lacking, as current protocols rely on the addition of exogenous serum. Here we describe a protocol using a well-defined, serum-free media for the robust de novo generation of epithelial iPSC-derived HIOs devoid of mesenchyme. In addition, we report the generation of a hiPSC CDX2-GFP reporter line that highlights the role of CDX2 as a specific marker for the emergence of iPSC-derived intestinal progenitors. This platform enables the study of both normal development as well as disease states of the gut (exemplified by cystic fibrosis), supporting the generation of patient-specific iPSC-derived organoids for interrogation, genetic manipulation, and large-scale drug screening applications. Results Generation of intestinal progenitors from iPSCs We and others have previously shown that dual-smad inhibition of the BMP/TGFβ signaling pathways (with dorsomorphin and SB431542) in definitive endoderm derived from iPSCs and ESCs promotes the development of endoderm competent to form anterior foregut derivatives, such as NKX2-1 positive lung or thyroid lineages 10 , 11 , 12 , 13 , 23 , 24 . Indeed, we performed fluorescence-activated cell sorting (FACS) of cells expressing the anterior foregut endodermal transcription factor NKX2-1 or a combination of cell surface markers CD47 hi /CD26 lo (NKX2-1 + ) to enrich for a population of progenitors which can then be differentiated into proximal and distal lung lineages from human iPSCs 11 , 12 , 13 . In this protocol, prior single-cell sequencing of day 15 progenitors revealed the presence of cells expressing non-lung endodermal markers, including CDX2, and these non-lung lineages were enriched in the NKX2-1 negative fraction of cells (refs. 25 , 26 and Supplementary Fig. 1 ). Thus, we sought to investigate the potential of this differentiation approach to obtain intestinal organoids in defined, mesenchyme-free (MF) and serum-free culture conditions, in comparison to the previously described mesenchyme-containing (MC) protocol 15 (Fig. 1a ). Fig. 1: Emergence of intestinal-competent progenitors from iPSCs. a Schematic of comparison between mesenchyme-containing (MC) HIO vs mesenchyme-free (MF) directed differentiation protocols. b Mean average (MA) plots of significantly differentially expressed genes that were either upregulated (red dots) or downregulated (blue dots) in digital gene expression analysis from day 42 (D42) organoids sorted for CD47 on day 15, comparing the CD47 hi (Alveolospheres, left) and CD47 lo (HIOs, right) cultured in CK-DCI, as compared with day 8 (D8) progenitors ( p < 0.05, calculated as described 82 ). c Gene set enrichment analysis using Enrichr analyzing the top tissue types when referenced to the human gene atlas. Length of red bars indicates combined enrichment score. 
All bars have adjusted p < 0.05, calculated as described 28 , 29 . d Representative micrographs of whole mounts of day 85 organoids derived from BU3NGST NKX2-1 GFPneg sorted outgrowth demonstrate colocalized expression of Cdx2 and Villin (VIL) (scale bar = 50 μm, representative of n = 3 differentiations). Full size image Two independent human iPSC lines, bBU1c2 27 and BU3- NKX2-1 GFP -SFTPC tdTomato 11 , 13 (BU3NGST), were differentiated into CXCR4/c-Kit +/+ definitive endoderm, then treated with dual-smad inhibition as described above. Endodermal cells were then further incubated in conditions to promote lineage specification through Wnt activation with the GSK3β inhibitor CHIR99021 (CHIR), BMP4, and retinoic acid (RA) (ref. 13 and Fig. 1a ). At day 15, cells were sorted to isolate the NKX2-1 + and NKX2-1 − fractions using a published cell sorting algorithm developed by our group 11 , based on either CD47 hi/lo (for BU1) or NKX2-1 GFP +/− (for BU3NGST) and plated into 3D Matrigel droplets. Both sorted populations were cultured in a defined, serum-free media containing CHIR and KGF together with Dexamethasone, cAMP and IBMX (CK + DCI), previously shown by our group to generate type II alveolar epithelial cells from NKX2-1 + lung progenitors 13 . Both iPSC lineages differentiated into endoderm and day 15 progenitors with similar efficiencies (Supplementary Fig. 2 ). Over the course of 3 weeks, these cultures grew from single cells into self-organizing 3D structures. In order to define the transcriptional identity of these organoids, we performed RNA sequencing of bBU1c2 at day 42 of differentiation, comparing the CD47 hi (enriched for NKX2-1-expressing cells) as well as the CD47 lo (enriched for non NKX2-1-expressing cells) outgrowth to day 8 of differentiation. As shown in Fig. 1b , the CD47 hi cells sorted on day 15, re-plated in 3D Matrigel in CK + DCI, and analyzed on day 42, were enriched for expression of transcripts encoding typical lung markers such as NKX2-1 , as well as markers of type 2 alveolar cells including SLC34A2 , NAPSA , LPCAT1 , SFTPC , and SFTPB . In contrast, organoids generated from the day 15 CD47 lo outgrowth expressed genes of mixed tissue identity, including small intestine ( CDX2 , LYZ , and CDH17 ), colon ( SATB2 and CEACAM5 ), and liver ( SERPINA1 and HNF4α ). When analyzed using Enrichr 28 , 29 (referenced to the human gene atlas), the number one hit in ‘Cell Type’ for the top 350 significantly upregulated genes in the CD47 hi outgrowth was fetal lung, while the top hits for the CD47 lo outgrowth were colon and small intestine (Fig. 1c ). The data also showed that Vimentin ( VIM ), a mesenchymal marker, was significantly downregulated, while the canonical epithelial marker EpCAM was significantly upregulated, in the CD47 lo sorted cells (see below). Whole mounts of BU3NGST GFP-derived organoids at day 85 stained for Cdx2 and the intestinal brush border component Villin showed robust 3D epithelial organoid formation, containing a significant number of Cdx2/Villin co-expressing cells (Fig. 1d ). In addition, the NKX2-1 + cells grew into spheres composed of type II alveolar epithelial cells, as previously described 13 . scRNAseq captures the emergence of intestinal progenitors The use of dual-SMAD inhibition was originally reported to strongly induce anterior foregut specification in pluripotent stem cell-derived endoderm 23 . 
To further understand progenitor cell lineage commitment at single-cell resolution early on in differentiation, we performed single-cell mRNA sequencing (scRNAseq) as depicted in Fig. 2a . We differentiated the C17 NKX2-1-GFP (C17) 30 iPSC line as described above, and performed scRNAseq using the 10x Chromium platform at day 6 (after 3 days of dual-smad inhibition) and day 13 (after 7 days of CHIR and BMP4 stimulation) of directed differentiation. At day 6, we analyzed 2215 cells, at a depth of 53,297 reads per cell, while at day 13 of differentiation, 2763 cells were analyzed with 53,471 reads per cell. We then performed dimensionality reduction as visualized using uniform manifold approximation and projection (UMAP), depicting the day 6 and 13 cells in the same plot (Fig. 2b ). Unsurprisingly, these two populations clustered independently from one another in an unsupervised manner, indicating the major transcriptional changes in cell identity that are known to occur during the early stages of directed differentiation. Fig. 2: scRNAseq of day 6 and day 13 progenitors. a Experimental schematic of scRNAseq, performed at day 6 (following 3 days of dual-smad inhibition) and day 13 (following 7 days of specification in CBRa). b UMAP visualization of cells at days 6 and 13 of differentiation. c UMAP visualization of expression of specific endodermal ( SOX17 and FOXA2 ), lung ( NKX2-1 , SOX2 , and SOX9 ) and intestinal (CDX2) markers at days 6 and 13 of differentiation. Color scale indicates normalized log fold change of gene expression. d UMAP visualization of expression of intestinal stem cell markers LGR5 , OLFM4 , and TACSTD2 (TROP2) in cells at days 6 and 13 of differentiation. Full size image We next examined a subset of genes that mark endoderm ( SOX17 and FOXA2 ), anterior foregut ( SOX2 ), as well as intestinal ( CDX2 ) and lung/thyroid ( NKX2-1 ) progenitors 31 , 32 . The dorsal and ventral foregut endoderm are marked by SOX2 and NKX2-1, respectively 33 , 34 , while the boundary of SOX2 and CDX2 expression in developing endoderm separates the foregut from the posterior endodermal tissues 32 . After 3 days of endodermal specification, followed by 3 days of dual-smad inhibition, the anterior marker SOX2 was expressed in only a subset of cells at day 6 (Fig. 2c ). In addition, by day 13 a significant number of cells expressed CDX2 and SOX17 , which notably do not overlap with the NKX2-1 -expressing cells. Furthermore, the pancreatic master regulator SOX9 is not widely expressed in day 13 cells (Fig. 2c ), suggesting that our progenitor population at day 13 does not contain a significant proportion of pancreas-competent cells 35 . We also sought to examine the expression of intestinal stem cell markers early on in directed differentiation. Figure 2d demonstrates that there are a significant number of cells at day 6 that express TROP2 , and some that express OLFM4 , while a subset of cells at day 13 express LGR5 . Overall, these data support the presence of a large CDX2 + progenitor population with the potential to give rise to HIOs. 
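A workflow of this kind (loading the two 10x runs, normalization, joint embedding, marker visualization) can be sketched briefly; the text does not name its analysis toolkit, so scanpy, the file paths and all parameter values below are assumptions made for illustration:

import scanpy as sc

# Load the two 10x Chromium runs (hypothetical paths)
d6 = sc.read_10x_mtx("day6/filtered_feature_bc_matrix")
d13 = sc.read_10x_mtx("day13/filtered_feature_bc_matrix")
adata = d6.concatenate(d13, batch_categories=["day6", "day13"])

# Basic QC and normalization (thresholds are illustrative)
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Dimensionality reduction and UMAP embedding
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.pp.pca(adata, n_comps=50)
sc.pp.neighbors(adata)
sc.tl.umap(adata)

# Time point plus the endodermal, anterior foregut, lung/thyroid and intestinal markers
sc.pl.umap(adata, color=["batch", "SOX17", "FOXA2", "SOX2", "NKX2-1", "CDX2"])

Proximal specification of intestinal progenitors Having demonstrated that the NKX2-1 − population contains gut-competent progenitors, we then sought to identify culture conditions that favor the emergence of regionally-patterned populations of intestinal-specific organoids. 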
Proximal specification of intestinal progenitors

Having demonstrated that the NKX2-1− population contains gut-competent progenitors, we next sought to identify culture conditions that favor the emergence of regionally patterned populations of intestine-specific organoids. Day 15 BU3NGST cells were first sorted to isolate GFP-negative cells and cultured in a range of media conditions, including the previously published Noggin/R-Spondin-based intestinal medium (IM) 15, CK + DCI, and a variety of combinations of FGF4, KGF, and Wnt activation (Fig. 3a). The intestinal medium with CHIR and KGF (IM + CK) generated more organoids than any other condition (Fig. 3b, c). To ascertain regional identity and further characterize the transcriptional profile of these organoids, quantitative real-time PCR (qRT-PCR) was performed for a panel of genes (Fig. 3d). CDX2, the intestinal-specific cadherin CDH17 36, and VIL1 (Villin) 37 were all highly expressed relative to undifferentiated iPSCs in CK + DCI and IM + CK, as well as in the previously published intestinal medium 15 supplemented with CHIR (enabling us to distinguish the effect of CHIR from that of KGF). Notably, these markers were expressed at levels similar to a primary control (Fig. 3d). However, PDX1, a homeobox transcription factor essential for duodenal and pancreatic development 38, as well as GATA4, another proximal small intestinal marker, were significantly upregulated in the IM + CK condition compared with both the CK + DCI organoids and adult colon (Fig. 3d). Notably, the IM + CK organoids had significantly lower expression of SATB2 20, Albumin (ALB), and Pepsinogen C (PGC) 39 compared with the CK + DCI HIOs (and IM + CHIR), suggesting that the IM + CK condition yields HIOs that are more homogeneous and express markers specific to intestinal lineages, while preventing the emergence of hepatic and gastric lineages (Supplementary Fig. 3). The CK + DCI HIOs expressed significantly more SATB2, a colonic marker, than either the IM + CK or IM + CHIR HIOs (Fig. 3d). Organoids grown in all conditions expressed high levels of lysozyme, an antimicrobial protein expressed by Paneth cells throughout the GI tract 40. Expression of the intestinal markers Cdx2 and Villin was also confirmed by immunohistochemistry (Fig. 3e), further validating that the IM + CK conditions generated the most robust intestine-specific organoids.

Fig. 3: Proximal small intestinal specification following dual-SMAD inhibition. a Experimental schematic of directed differentiation. b Representative micrographs of cells grown in different media conditions at day 34 of differentiation after sorting for NKX2-1-GFP-negative cells at day 15 (scale bar = 300 μm, representative of n = 2 independent wells per condition). c Quantification of the number of organoids per well at day 50 in different culture conditions. Error bars represent s.e.m. from n = 2 independent wells per condition. d qRT-PCR from day 54 HIOs cultured in either IM + CK, CK + DCI, or IM + CHIR, normalized to day 0 hiPSCs and compared with a primary control (human adult colon) (2^−ΔΔCt, technical triplicates normalized to GAPDH or ACTB (β-ACTIN), n = 3 independent differentiations except IM + CHIR (IM + C, n = 2 independent differentiations). Error bars represent the s.d.; statistical significance, where indicated, was determined by one-way ANOVA followed by Tukey test, *p < 0.05, **p < 0.005, ****p < 0.0001). e Whole mount immunofluorescence of day 60 organoids stained for Villin (VIL), Cdx2, and DNA (blue) (scale bar = 100 μm, representative of n = 3 differentiations).
Generation of a CDX2-GFP reporter iPSC line, BU1CG

We demonstrated that both NKX2-1− and CD47 lo sorted cells are enriched for intestinal progenitors with the potential to grow into CDX2+ organoids expressing a variety of markers specific to intestinal epithelium. However, in order to identify, profile, and purify putative CDX2+ intestinal progenitors throughout each stage of directed differentiation, we targeted the CDX2 locus with an eGFP fluorescent reporter, generating a CDX2-GFP knock-in reporter cell line (Fig. 4a and Supplementary Fig. 4). Using CRISPR/Cas9, we gene-edited a normal iPSC line, bBU1c2 27, using as donor a synthesized self-linearizing DNA oligonucleotide containing a 2A-eGFP-polyA cassette flanked by two 400 base pair homology arms, together with a Cas9-GFP plasmid, eliminating the need for subsequent selection marker excision (see Methods). Owing to the self-cleaving 2A peptide and a targeted insertion site just upstream of the endogenous CDX2 stop codon, the CDX2 gene was not inactivated by the gene editing (Fig. 4a). PCR confirmed that the construct was inserted into the desired locus in 70% of the clones picked from one of the two sgRNAs used (Fig. 4b, clone 109, hereafter referred to as BU1CG). Screening of the top three most likely off-target insertion sites (NHLRC4, RAI4, and SPP3), predicted from the sgRNA sequence, revealed no aberrant indels (Supplementary Fig. 4c).

Fig. 4: A CDX2-GFP iPSC reporter line for intestinal differentiation. a Detailed schematic of the reporter construct and PCR screening primer sites (arrows). b Positive PCR screening of mono-allelic (17) and bi-allelic (109) knock-in clones derived from iPSC line bBU1c2. c Experimental schematic of differentiation of BU1CG into HIOs. d qRT-PCR for CDX2 expression in cells at day 15 of differentiation comparing sorted GFP+ cells to both GFP− sorted cells and the pre-sort population (2^−ΔΔCt, technical triplicates normalized to GAPDH, n = 3 independent sorts, error bars represent the s.d., p = 0.007, **p < 0.01, as determined by unpaired Student's t-test). e FACS analysis of GFP expression during the first two weeks of differentiation (n = 4 independent differentiations, error bars represent the s.d.). f Quantification of the number of organoids per independent well obtained at day 50 of differentiation (error bars indicate s.e.m. from n = 2 independent wells per condition). g Whole mount immunofluorescence of HIOs at day 40 of differentiation stained for Cdx2, GFP, and DNA (blue) (scale bar = 50 μm, representative of n = 6 organoids from n = 2 differentiations). h Representative micrographs of HIOs at day 34 of differentiation in different media conditions (originally sorted at day 14 for CDX2-GFP; scale bar = 100 μm, representative fields of view of n = 2 wells per condition). i Representative micrographs showing the formation of HIOs from BU1CG after sorting GFP+ cells at day 14 (scale bar = 200 μm). Inset shows limited outgrowth from GFP− sorted cells cultured in the same media conditions (IM + CK) (scale bar = 200 μm) (representative of n = 3 differentiations). j Representative micrographs showing HIOs at day 45 comparing CK + DCI vs IM + CK conditions (scale bar = 200 μm, representative of n = 6 differentiations).
The selected BU1CG line showed stable iPSC morphology upon passage and a normal karyotype (Supplementary Fig. 4d), and was subsequently differentiated into definitive endoderm, yielding on average 76.7% CXCR4/c-KIT double-positive cells (n = 7, representative flow cytometry in Supplementary Fig. 5a). Cells were then differentiated into intestinal progenitors using the MF protocol, sorted for CDX2-GFP at day 15 of differentiation, plated as single cells in 3D Matrigel droplets, and incubated in multiple culture conditions, similar to the approach outlined for BU3NGST (Fig. 4c). At day 15, we generated an average of 1.377 × 10⁷ cells per input well of iPSCs containing 2 × 10⁶ cells (n = 3 differentiations, Supplementary Fig. 5b). We confirmed the fidelity of the reporter by sorting cells at day 15 based on GFP; as expected, CDX2 expression tracked with the GFP+ sorted cells (Fig. 4d). Taking advantage of the reporter, we followed the emergence of these putative intestinal progenitors based on CDX2-GFP expression. As shown in Fig. 4e, CDX2-GFP-positive cells began to emerge at day 8 of differentiation, and by day 13 they represented 41.166 ± 20.53% (n = 6, mean ± s.d.) of all cells in the culture. Staining of mature HIOs at day 40 further confirmed the fidelity of the GFP reporter, showing nuclear Cdx2 staining colocalizing with cytoplasmic GFP (Fig. 4g). Confirming our previous findings, IM + CK and CK + DCI generated significantly more CDX2-GFP organoids per input cell than the other combinations of intestinal medium, Wnt, and FGF4/KGF stimulation (Fig. 4f, h). Immunofluorescence and light microscopy analyses of the resulting organoids revealed luminal, organized multicellular structures with high CDX2-GFP expression (Fig. 4i, j). In contrast, sorting GFP-negative cells for re-plating and further outgrowth in the same IM + CK conditions resulted in almost complete depletion of gut-competent cells, with poor outgrowths containing significantly fewer GFP+ cells (0.49 ± 0.052%, n = 3, mean ± s.d.). Not surprisingly, when the GFP-negative cells were cultured in CK + DCI conditions, the outgrowth showed high expression of the lung marker NKX2-1 (Supplementary Fig. 5c), with most cells negative for CDX2-GFP (Supplementary Fig. 5d). These data provide strong evidence for the early emergence of putative intestinal progenitors during the MF protocol and suggest that most, if not all, intestinal competence resides in the CDX2-GFP-positive population, as the CDX2-GFP-negative cells failed to form robust 3D structures when cultured in the IM + CK condition (Fig. 4i, left inset).

iPSC-derived HIOs grow in the absence of mesenchymal support

Intestinal organoids grown from intestinal crypts can self-sustain their in vitro expansion in the absence of mesenchymal support 2, 41, something that had yet to be recapitulated with iPSC-derived organoids. As mentioned above, previously reported iPSC-derived intestinal directed differentiation protocols also lead to the generation of Cdx2-negative mesenchyme, which was proposed to secrete a variety of factors that induce and support the growth of the intestinal epithelium 42. We sought to determine whether the HIOs obtained using our protocol in fact differentiated in the absence of mesenchymal support (Fig. 5a), compared side by side with the previously described protocol 15. Light/fluorescence microscopy demonstrated the presence of CDX2-GFP-positive HIOs differentiated using both the MC and MF protocols (Fig. 5b).
However, there were clear morphologic differences, highlighted by the presence of GFP-negative cells surrounding the CDX2-GFP-positive epithelium in the MC organoids (Fig. 5b, top). Staining revealed that these GFP-negative cells were mostly Vimentin-positive; such cells were not present in the MF protocol (Fig. 5c), findings that were confirmed by RNA-Seq (Supplementary Fig. 3). Representative flow cytometry for the epithelial-specific marker EpCAM (Fig. 5d) demonstrated that in the MF protocol virtually all cells were epithelial, in contrast to the MC protocol, where up to 50% of the cells were EpCAM-negative.

Fig. 5: iPSC-derived HIOs grow in the absence of mesenchymal support. a Experimental schematic of MF and MC directed differentiations. b Representative light microscopy merged images of BU1CG-derived HIOs cultured under MC vs MF conditions (scale bar = 100 μm, representative of n = 3 differentiations). c Representative fluorescent micrographs of HIOs derived using the MC vs MF protocol stained for the mesenchymal marker Vimentin along with Cdx2 (scale bar = 50 μm, representative of n = 5 organoids from n = 3 differentiations). d Flow cytometry of single-cell suspensions from HIOs differentiated using the MC vs MF protocol stained with the epithelial marker EpCAM. e Comparison of the % of EpCAM+ cells as measured by flow cytometry in HIOs cultured in different media conditions at day 54 of differentiation (paired Student's t-test, *p < 0.05). f Flow cytometry for EpCAM expression of cells at days 6, 8, and 10 of both MC and MF differentiations. g UMAP representation of EpCAM expression at days 6 and 13 of differentiation by scRNAseq.

Furthermore, we investigated whether the ability of the HIOs to grow in the absence of mesenchyme was established during the emergence of CDX2-GFP-positive cells, regardless of which protocol was used. Serial flow cytometry for EpCAM over the early stages of differentiation demonstrated the maintenance of EpCAM expression in the MF protocol and the gradual loss of EpCAM expression in the MC protocol (Fig. 5f). In addition, scRNAseq of cells at days 6 and 13 (as described in Fig. 2) demonstrated widespread expression of EpCAM at those time points of the MF protocol, supporting our flow cytometry findings (Fig. 5g). Using this same dataset, we also tracked expression of mesenchymal and mesodermal markers at these time points, including VIM, COL1A1, COL3A1, FN1, THY1, and ACTA2 (Supplementary Fig. 6a). With the exception of VIM and FN1, the vast majority of cells at day 13 did not express these markers. It has been reported that epithelial cells express mesenchymal genes including VIM and FN1 during early organogenesis (E9-11.5) 43, 44, which may explain why our day 13 cells express them. Finally, we compared the outgrowths of hindgut obtained via the first 8 days of the MC protocol 15 with CDX2-positive cells obtained at day 15 of the MF protocol cultured in 3D Matrigel® in several media conditions. As shown in Fig. 5e, HIOs derived from CDX2-GFP-positive sorted cells contained significantly more EpCAM+ cells (99.25 ± 0.49%, n = 6, mean ± s.d.) than MC-differentiated cells (49.92 ± 29.9%, n = 6, mean ± s.d.), regardless of the media.
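The Fig. 5e comparison uses a paired Student's t-test on per-differentiation %EpCAM+ values. The sketch below shows that comparison in Python; only the reported group means and s.d. come from the text, and the six paired values are hypothetical placeholders, not the authors' data.

```python
# Minimal sketch of the paired comparison in Fig. 5e (hypothetical values).
from scipy import stats

mf_pct_epcam = [99.8, 99.1, 98.7, 99.5, 99.3, 99.1]  # MF protocol, % EpCAM+
mc_pct_epcam = [55.0, 20.1, 83.2, 31.5, 62.9, 46.8]  # MC protocol, % EpCAM+

t, p = stats.ttest_rel(mf_pct_epcam, mc_pct_epcam)   # paired, two-tailed
print(f"paired t = {t:.2f}, p = {p:.4f}")
```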
In order to interrogate whether the day 15 FACS step of the MF differentiation is responsible for mesenchymal depletion, we also performed an MF differentiation that omitted the day 15 sort and plated progenitors into 3D culture conditions at day 15. At day 30, 88.6% of cells were EpCAM+ by flow cytometry (Supplementary Fig. 6b), supporting the notion that this differentiation protocol enables the emergence of intestinal organoids without the need for mesenchymal support.

MF HIOs contain a variety of intestinal epithelial cell types

The intestinal epithelium is made up of a diverse group of cell types, each occupying a specific functional niche. These include absorptive enterocytes, secretory Paneth cells that secrete Lysozyme (LYZ), enteroendocrine cells that express Chromogranin A (CHGA), and mucin-secreting goblet cells, among others 45. In order to assess whether our HIOs contained cell types present in mature intestinal epithelium, we performed immunohistochemistry comparing organoids grown in CK + DCI vs IM + CK conditions (Fig. 6a–c). We observed the presence of enteroendocrine cells (positive for Chromogranin A, Fig. 6a) and Paneth cells (positive for Lysozyme, Fig. 6b), in the context of organoids composed mostly of Villin-expressing putative enterocytes, at the expected densities for these particular cell types in both conditions. We also noticed that the IM + CK organoids were largely negative for the colonic mucin Muc2, while the CK + DCI HIOs showed robust staining for luminal Muc2 (Fig. 6c), suggesting that the IM + CK conditions promote a more proximal identity than the CK + DCI conditions. To further investigate this, we performed qRT-PCR for a panel of genes (Fig. 6d) specifically expressed by proximal or distal intestinal epithelium. Indeed, the gene expression profiles of the HIOs obtained through these two conditions strongly supported our previous observations: the CK + DCI protocol gives rise to heterogeneous organoids containing many cells expressing colonic markers, as well as markers of several GI tract tissues, particularly when compared with RNA from a primary control of adult human colonic tissue. In contrast, the IM + CK conditions promote the emergence of HIOs with a more defined identity corresponding to proximal small intestine/duodenum. These HIOs express high levels of CDX2, LYZ, CDH17, GATA4, and PDX1, while expressing significantly lower levels of the distal colonic markers SATB2 and MUC2.

Fig. 6: CDX2-GFP+ sorted organoids show conserved regional specificity. a–c Sections of paraffin-embedded organoids cultured in either IM + CK or CK + DCI stained for Chromogranin A (CHGA), Lysozyme (LYZ), Villin (VIL), Cdx2, and the colonic mucin Muc2 (scale bar = 50 μm, representative of n = 5 organoids from n = 2 differentiations). d qRT-PCR for various genes of interest, from day 54 HIOs cultured in either CK + DCI or IM + CK, compared with a primary control and normalized to day 0 hiPSCs (2^−ΔΔCt, technical triplicates normalized to GAPDH or ACTB (β-ACTIN), n = 3 differentiations, error bars represent the s.d., statistical significance where indicated determined by unpaired Student's t-test, *p < 0.05).
Patient HIOs are suitable for disease modeling

While the translational potential of organoids for the ultimate goal of cell or tissue replacement therapy remains years away, they have already demonstrated significant utility in disease modeling, particularly in the context of monogenic disorders such as Familial Adenomatous Polyposis (FAP) 46 and cystic fibrosis (CF) 12, 47, 48. In CF, cell-based models have proved invaluable in developing CFTR modulators. Primary airway epithelial cells are the gold standard for predicting CFTR drug-responsiveness; however, significant progress has been made using rectal organoids to theratype CF patients 49, 50, 51. Given that CFTR is highly expressed in the intestinal epithelium, we tested, in proof-of-concept experiments, the feasibility of measuring CFTR function in our iPSC-derived HIOs (see schematic, Fig. 7a).

Fig. 7: Patient-specific mesenchyme-free-derived HIOs are suitable for disease modeling. a Schematic overview of the experiment to measure CFTR function in ΔF508, ΔF508-corrected, and WT HIOs (n = 3 independent differentiations per iPSC line). b Representative micrograph of a whole mount of distally patterned HIOs generated from the C17 iPSC line at day 54 of differentiation, sorted for NKX2-1-GFP− at day 15 and cultured in CK + DCI (scale bar = 50 μm, representative of n = 2 differentiations). c qRT-PCR for CFTR expression in HIOs cultured in both IM + CK and CK + DCI at day 54 of differentiation (2^−ΔΔCt, technical triplicates normalized to GAPDH; n = 3). d Average baseline organoid size of WT, ΔF508, and ΔF508-corrected HIOs (n = 192 mean spheres analyzed per iPSC line). e Quantification of the steady-state lumen area (SLA) in ΔF508 and ΔF508-corrected HIOs (see also Supplementary Fig. 7b). f Representative micrographs of HIOs pre- and post-24-h forskolin treatment (representative scale bar = 200 μm in upper left image; images represent n = 3 biological replicates per cell line and an average of n = 8 wells per replicate). g Quantification of the change in whole-well CSA in response to 24-h forskolin treatment (normalized to T = 0 h whole-well CSA). Statistical significance where indicated determined by unpaired Student's t-test, *p < 0.05, **p < 0.005 (n = 3 independent differentiations per cell line, error bars represent the s.d.).

In order to do so, it was first necessary to test whether iPSCs carrying CFTR mutations were capable of differentiating into HIOs using our MF protocol. For this purpose, we used our published CF-specific iPSC line, C17 11, and differentiated cells toward distal lineages using the CK + DCI condition. Similar to what was observed in previous differentiations with wild-type cell lines, C17 was successfully differentiated into 3D HIOs expressing Cdx2 and Villin (Fig. 7b). qRT-PCR confirmed that differentiated HIOs expressed CFTR relative to undifferentiated cells, but at lower levels compared with adult colon (Fig. 7c). To control for the potential effects of genetic background on CFTR measurement, we used an iPSC line from an individual homozygous for the ΔF508 mutation (ΔF508). This ΔF508 iPSC line was previously gene-edited to correct the ΔF508 mutation in one allele (ΔF508-corrected) 30. ΔF508, ΔF508-corrected, and WT HIOs were differentiated until day 30, as above.
We applied methodologies including steady-state lumen area (SLA) and forskolin-induced swelling (FIS), previously developed for the analysis of CFTR function in rectal organoids 48, 51. At baseline, the average organoid size was significantly smaller in ΔF508 compared with WT and ΔF508-corrected HIOs (Fig. 7d). SLA was also significantly lower in ΔF508 compared with ΔF508-corrected HIOs (Fig. 7e and Supplementary Fig. 7b). In response to FIS, WT HIOs started to swell within 30 min (Supplementary Fig. 7a). After 24 h of forskolin, no significant change in whole-well cross-sectional area (CSA) was detected in ΔF508 HIOs (mean CSA 1.073 ± 0.04702), whereas ΔF508-corrected and WT HIOs significantly increased in CSA (2.190 ± 0.3051 in the ΔF508-corrected and 2.190 ± 0.1950 in the WT, n = 3 independent differentiations per cell line, data represent mean ± s.d.) (Fig. 7f, g, Supplementary Movies 1–6, representative of n = 3 differentiations and n = 3 wells per condition).
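The FIS read-out above is quantified, per the Methods, by summing the cross-sectional areas of qualifying organoids per well (circularity > 0.3, area > 900 µm²) and normalizing the 24 h value to time 0. The sketch below illustrates that arithmetic in Python; the per-organoid measurements and column names are hypothetical stand-ins for an ImageJ particle-analysis export.

```python
# Sketch of the FIS/SLA quantification described in the Methods.
# Filters (circularity > 0.3, area > 900 um^2) are from the text;
# the measurements below are hypothetical placeholders.
import pandas as pd

def whole_well_csa(df: pd.DataFrame) -> float:
    """Total cross-sectional area of organoids passing the size/shape filters."""
    keep = (df["circularity"] > 0.3) & (df["area_um2"] > 900)
    return df.loc[keep, "area_um2"].sum()

t0 = pd.DataFrame({"area_um2": [1200, 950, 800, 2100],
                   "circularity": [0.8, 0.5, 0.9, 0.2]})
t24 = pd.DataFrame({"area_um2": [2600, 1900, 820, 2300],
                    "circularity": [0.8, 0.6, 0.9, 0.2]})

# Whole-well CSA at time 0 is set to 1; report the 24 h ratio
normalized_csa = whole_well_csa(t24) / whole_well_csa(t0)
print(f"normalized whole-well CSA at 24 h: {normalized_csa:.2f}")

# SLA is an analogous per-organoid ratio: lumen_area / whole_organoid_area,
# averaged over ~30 organoids per cell line.
```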
Discussion

The original description of the use of dual-SMAD inhibition by Green et al. 23 to induce differentiation of anterior foregut from endoderm-patterned pluripotent stem cells suggested that these conditions particularly suppressed CDX2 expression in endodermal cells. Our findings show that, although lung competence is induced in a subset of cells, many are, in fact, not patterned toward anterior foregut endoderm following 72 h of dual-SMAD inhibition. Indeed, our scRNA-seq data showed significant heterogeneity throughout directed differentiation (Fig. 2b, c), and by day 13, a substantial number of cells were CDX2+ (Fig. 2c). This result was confirmed using our BU1CG line, which showed robust upregulation of CDX2-GFP starting at day 8 of differentiation (Fig. 4). A potential explanation for this discrepancy might lie in the fact that most of the data in the Green et al. studies were obtained at earlier time points during differentiation using a single human embryonic stem cell (hESC) line. Their original protocol also relied on embryoid bodies as a starting point, which could generate mesodermal derivatives that may affect the outcome of the differentiation. Our data suggest that although dual-SMAD inhibition can facilitate the emergence of progenitors with anterior foregut capacity, it does not prevent the differentiation of a robust progenitor population capable of specifying into many endodermal lineages, including more posterior gut tube derivatives such as small and large intestine, liver, pancreas, and stomach, primarily as a result of strong putative activation of the Wnt/β-Catenin pathway by treatment with the GSK3β inhibitor CHIR99021. It is important to acknowledge that at this point we cannot rule out additional Wnt-independent effects of CHIR99021. While the original protocol for the generation of HIOs from human iPSCs employed Wnt3A as the primary Wnt agonist to promote intestinal progenitor specification 15, many subsequent manuscripts have described directed differentiation of iPSCs to HIOs reliant on CHIR99021 for Wnt/β-Catenin activation 8, 20, 52, 53, 54. In addition, CHIR99021 has been well characterized as a strong inducer of the Wnt/β-Catenin signaling pathway 55.

It was intriguing to find that cells sorted based on CDX2-GFP (or CD47/NKX2-1) at day 15 of differentiation and cultured in the same media conditions gave rise to very different cell lineages. Indeed, sorted CD47 hi (or NKX2-1+) cells vs CD47 lo (or NKX2-1−) cells cultured in CK + DCI gave rise to very different organoids, i.e., alveolar-like vs posterior lineages (gut, liver, stomach), respectively. Whether this results from a true cell fate decision and commitment to an anterior vs posterior fate established early during differentiation (accomplished via establishment of lineage-specific epigenetic marks, as shown in other systems 56, 57, 58) is currently unknown and merits further investigation.

The fact that the addition of KGF to the intestinal medium generated the most robust conditions for proximal intestinal specification (IM + CK), with higher numbers of intestinal organoids, is unsurprising, particularly given the role KGF has been shown to play in vivo in the intestinal epithelium. At the time of its original discovery in 1989, human KGF, ultimately re-classified as fibroblast growth factor 7 (FGF7), was shown to exert powerful paracrine effector functions on epithelial cell growth 59. Since then, others have reported that KGF plays a direct role in epithelial proliferation across a range of cell types in the GI tract, including multiple epithelial cell lines 60, 61, as well as goblet cells 62. Furthermore, increased KGF activity has been associated with epithelial regeneration in inflammatory bowel disease, both in animal models 63 and in human biopsy samples 64. From a mechanistic point of view, KGF is known to be expressed by the mesenchyme and to act via FGFR2b present in the intestinal epithelium 65, 66. The opposite is true of FGF4, which is expressed by the epithelium and acts on receptors present in the mesenchyme 67. This might explain the key difference between the MC protocol and ours, and the fact that we can achieve robust differentiation in the absence of mesenchymal support, whereas prior protocols stimulate concomitant mesenchymal outgrowth. The functional interaction of intestinal epithelium with intestinal mesenchymal lineages is certainly of interest. However, for certain questions, notably the investigation of epithelial-intrinsic defects, the ability to study an epithelial-only culture system is a major advantage and diminishes potential experimental noise generated by a mesenchyme-containing system. An epithelial-only model facilitates the simple measurement of epithelial-intrinsic function or dysfunction in a reductionist framework.

Finally, we chose to study cystic fibrosis in our HIO model based on the suitability of these organoids for studying the epithelial expression of CFTR in the intestine, in addition to the established assays for measuring CFTR function using primary rectal organoids 48, 68. We determined, in proof-of-concept experiments, that MF HIOs provide an organoid-based read-out of CFTR function. It is worth noting that for the purposes of these experiments, organoids remained in differentiation media containing cAMP. Thus, organoids with functional CFTR protein are activated at baseline, which likely dampens the magnitude of acute CFTR activation with forskolin. There are several preclinical, cell-based models to assess CFTR dysfunction and rescue 49. Important future questions for the CFTR assay in HIOs include: (1) Do ΔF508 HIOs detect CFTR rescue in response to CFTR modulators? (2) Does in vitro CFTR rescue in HIOs predict clinical efficacy? (3) How does the HIO platform compare to established platforms in terms of sensitivity and positive predictive value?
Compared with routine rectal biopsies, generating HIOs takes several months, since iPSCs must first be reprogrammed and subsequently differentiated. However, there are key areas where the unique properties of HIOs could be helpful. First, given that iPSCs can be expanded indefinitely, there is the potential to scale up HIO production for screening purposes. Second, the ability to gene-edit iPSCs and differentiate these cells into multiple tissue types could overcome the genetic variability and the overwhelming effect of the infected, inflammatory milieu that limit experiments using primary human tissue. An iPSC-based approach offers the potential to study key questions, including the role of genetic modifiers in the heterogeneity of CF phenotypes. At a molecular level, the iPSC system could be applied to determine cell-type- and tissue-specific differences in the regulation of CFTR expression 69. Nevertheless, despite significant progress in the development of increasingly potent CFTR modulators for residual function mutations 70, 71, there remains a subset of individuals whose mutations result in little to no CFTR protein and who represent a major unmet therapeutic challenge. The optimal preclinical platform for these individuals remains to be determined, and human, scalable, patient-specific cell-based platforms that express CFTR at higher levels than primary bronchial epithelial cells may prove helpful in drug development.

Methods

hiPSC generation, culture, and expansion

All parental hiPSC lines previously published by our group (bBU1) 27 and others (C17, ΔF508) 12, 30 were derived from normal donors (bBU1) or from individuals with published CFTR mutations (C17, a compound heterozygote, and ΔF508, homozygous for the ΔF508 mutation), and have been shown to have a normal karyotype (46XY). All lines were maintained in feeder-free conditions using mTeSR®1 (StemCell Technologies), and passaged onto hESC Matrigel® (Corning cat. no. 354277)-coated 10 cm, 6-well, 12-well, and 24-well tissue culture dishes (Corning) as per the manufacturer's instructions. All human subjects studies were performed under signed consent and approved by the Boston University Institutional Review Board (IRB), protocol H-32506.

Cloning of CDX2-eGFP into a blunt-ended cloning vector

Using an approach outlined by Zhang and colleagues 72, we used a synthetic self-linearizing oligonucleotide construct (sequence provided upon request) as a donor, without the need for subsequent selection marker excision. Oligonucleotide constructs for the donor, guide RNAs, and sequencing/screening primers were ordered from Integrated DNA Technologies (IDT). All sequencing reagents are listed in Supplementary Table 3. CRISPR guide RNA sequences and target sites were selected using the CHOPCHOP 73, 74 and MIT CRISPR Design Tools. The synthetic donor construct was cloned into the blunt-ended cloning vector pJET.2 using the CloneJET PCR Cloning Kit (ThermoFisher cat. no. K1231). The SpCas9-2A-GFP (PX458) plasmid with cloning backbone for sgRNA was obtained from the Zhang Lab through Addgene (Addgene #48138) 75.
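Tools like CHOPCHOP, mentioned above, automate the search for SpCas9 protospacer/PAM sites near a target position (here, just upstream of the CDX2 stop codon). As a toy illustration of that search, and not the authors' actual guide design, the sketch below scans a hypothetical sequence for NGG PAM sites near a given index; the demo sequence and index are placeholders, not the CDX2 locus.

```python
# Toy sketch of SpCas9 guide-site scanning near a target coordinate.
# The demo sequence and target index are hypothetical placeholders.
import re

def find_spcas9_sites(seq: str, target_idx: int, window: int = 50):
    """Return (protospacer, cut_site) pairs with an NGG PAM near target_idx."""
    sites = []
    for m in re.finditer(r"(?=([ACGT]{21}GG))", seq.upper()):
        protospacer = m.group(1)[:20]  # 20 nt immediately 5' of the NGG PAM
        cut_site = m.start() + 17      # SpCas9 cuts ~3 bp 5' of the PAM
        if abs(cut_site - target_idx) <= window:
            sites.append((protospacer, cut_site))
    return sites

demo_seq = "CCCC" + "A" * 21 + "GG" + "CCCC"  # contains one NGG site
print(find_spcas9_sites(demo_seq, target_idx=21))
```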
Gene editing of bBU1c2

Parental bBU1c2 iPSCs from one confluent six-well plate and two 10 cm dishes were dissociated from their tissue culture vessels using ReLeSR (StemCell Technologies) and brought to single-cell suspension. In total, 6 × 10⁶ cells were nucleofected with 5 μg of guide plasmid DNA and 5 μg of donor plasmid DNA, and re-plated on fresh 10 cm Matrigel-coated dishes. The cells were nucleofected on an Amaxa™ 4D-Nucleofector™ using the Lonza Nucleofector P3 Primary Cell 4D-Nucleofector™ X Kit (V4XP-3024), as per the manufacturer's instructions, on the "HESCell H9" program.

Screening and banking of BU1CG

Two days after nucleofection, the cells were dissociated once more into single cells and sorted for Cas9-GFP in order to select cells that had been successfully nucleofected with the Cas9-sgRNA plasmid. Of the 4 × 10⁶ cells sorted, 0.5% were GFP+. The cells were then plated at varying dilutions on 10 cm dishes and allowed to grow into colonies, which were mechanically picked using a p20 pipette and re-plated into one well of a 24-well plate containing warm mTeSR1® with 5 μM Y27632. Genomic DNA was extracted from 96 clones using the QIAamp DNA Mini Kit (QIAGEN cat. no. 51304) and screened by PCR using the Herculase II Fusion DNA Polymerase (Agilent cat. no. 600675, as per manufacturer's instructions) for successful donor construct insertion. Amplified DNA was visualized by gel electrophoresis using GelRed® Nucleic Acid Gel Stain (Biotium cat. no. 41002) and imaged using a Bio-Rad GelDoc™ XR System, alongside the 1 Kb Plus DNA Ladder (ThermoFisher cat. no. 10787018). The gel presented in Fig. 4b is an uncut, unedited, native gel. Sequencing was performed by GENEWIZ®.

hiPSC differentiation into day 15 HIO progenitors

After reaching >95% confluency, cells were differentiated into HIOs using a protocol adapted from refs. 11, 13, 25. hiPSC colonies were dissociated into single cells using Gentle Cell Dissociation Reagent (StemCell Technologies cat. no. 07174) and re-plated at a density of 2 × 10⁶ cells per well of a Matrigel-coated six-well tissue culture plate in mTeSR1 supplemented with Y27632 (Tocris, 5 μM). After 24 h, cells were differentiated into definitive endoderm using the StemCell Technologies StemDiff Definitive Endoderm Kit (Cat#05110), as per the manufacturer's instructions. Cells were then assessed by flow cytometry using anti-CXCR4-PE (ThermoFisher MHCXCR404) and anti-c-Kit-APC (BioLegend 323205). At day 3, cells were split 1:3 as described above onto new hESC Matrigel®-coated 6-well plates and incubated with DS/SB (see ref. 25), containing Dorsomorphin (2 μM, Stemgent, cat. no. 04-0024) and SB431542 (10 μM, Tocris, cat. no. 1614) supplemented with Y27632 for 24 h, followed by DS/SB without Y27632 for 48 h. At day 6, cells were split again 1:3 as described above and incubated in CB/RA, containing CHIR99021 (CHIR) (3 μM, Tocris, cat. no. 4423), rhBMP4 (10 ng/mL, R&D Systems, cat. no. 314-BP), and retinoic acid (RA) (100 nM, Sigma, cat. no. R2625-50MG). Basal medium for both DS/SB and CB/RA was complete serum-free differentiation medium (cSFDM), containing IMDM (ThermoFisher) and Ham's F12 (ThermoFisher) with B27 Supplement with retinoic acid (Invitrogen), N2 Supplement (Invitrogen), 0.1% bovine serum albumin Fraction V (Invitrogen), monothioglycerol (Sigma), GlutaMAX (ThermoFisher), ascorbic acid (Sigma), and Primocin. For a comprehensive list of reagents and catalog numbers, please see Supplementary Table 1; for media recipes, see Supplementary Table 2; and for antibodies, please see Supplementary Table 4.
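The staged media schedule above (definitive endoderm, then DS/SB, then CB/RA) lends itself to being encoded as data so a protocol script can look up the medium for any day. The sketch below is only an illustrative representation of the schedule described in the text, with concentrations taken from it; the structure itself is an assumption, not part of the published protocol.

```python
# Sketch: the day 0-15 media schedule above, encoded as a lookup table.
# Concentrations are from the text; the representation is illustrative.
DIFFERENTIATION_SCHEDULE = [
    # (start_day, end_day, medium, key supplements)
    (0, 3,  "StemDiff Definitive Endoderm Kit", {}),
    (3, 6,  "DS/SB in cSFDM", {"Dorsomorphin": "2 uM", "SB431542": "10 uM"}),
    (6, 15, "CB/RA in cSFDM", {"CHIR99021": "3 uM", "rhBMP4": "10 ng/mL",
                               "retinoic acid": "100 nM"}),
]
# Note: Y27632 (5 uM) is additionally added for ~24 h after each re-plating.

def medium_for_day(day: int):
    for start, end, medium, supplements in DIFFERENTIATION_SCHEDULE:
        if start <= day < end:
            return medium, supplements
    raise ValueError(f"day {day} is outside the day 0-15 schedule")

print(medium_for_day(4))  # ('DS/SB in cSFDM', {...})
```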
Sorting and re-plating of day 15 progenitors into 3D HIOs

At days 14–15, cells were sorted using the protocol outlined in ref. 25 and the surface marker algorithm described by our group in ref. 11. Cells were dissociated using 0.05% Trypsin-EDTA (ThermoFisher) and washed in DMEM with 20% FBS. Cells were then strained through a 40 μm filter, spun at 300 × g for 5 min, resuspended in FACS buffer containing 5 μM Y27632, and stained with CD47-PerCP/Cy5.5 (BioLegend, cat. no. 323110) and CD26-PE (BioLegend, cat. no. 302705) for 30 min on ice, protected from light. Cells were then washed with additional DMEM/20% FBS, spun down again at 300 × g for 5 min, and resuspended in fresh FACS buffer containing 10 nM Calcein Blue in DMSO (ThermoFisher, cat. no. C1429). Cells were then sorted for either the CD47 lo/CD26 hi or the CD47 lo/GFP+ populations using an operator-assisted MoFlo Astrios EQ (Beckman Coulter) at the Boston University Flow Cytometry Core Facility (FCCF). After sorting, cells were spun down and resuspended in 3D intestinal Matrigel (Corning 354234) in droplets of 50–100 μL (supplemented with the media conditions outlined below) at a density of 0.5–1 × 10³ cells/μL, and plated on a pre-warmed 24-well tissue culture plate. After allowing the droplets to solidify for 20 min in a 37 °C incubator, cells were treated with a variety of media conditions described in Supplementary Tables 1 and 2, supplemented with Y27632. After 3–4 days, fresh media was added without Y27632, with further media replacement performed every 3–4 days, depending on confluency.

Passaging of three-dimensional HIOs

Media was aspirated, and each well of HIOs was treated with 1 mL of Cell Recovery Solution (Corning cat. no. 354253) and placed at 4 °C for 30 min. Wells were then washed with PBS, and all contents were spun down at 300 × g for 5 min. Organoids were pipetted gently and resuspended in fresh Matrigel droplets supplemented with media, taking care not to break up the organoids. Split densities varied based on original confluency and experimental needs. Organoids were plated and cultured for up to 100 days, with splits every 1–2 weeks as needed.

Organoid immunofluorescence and microscopy

Images of whole, live organoids were captured in their tissue culture vessel, embedded in 3D Matrigel droplets and submerged in culture media, using a Keyence BZ-X710 All-in-one Fluorescence Microscope. For whole mounts, organoids were dissociated from their Matrigel droplet as described above, washed with PBS, and then fixed in 4% paraformaldehyde (Electron Microscopy Sciences, cat. no. 19208) at room temperature for 30 min. Whole mount HIOs were then washed with PBS and blocked in 4% normal donkey serum (NDS) with 0.5% Triton X-100 (Sigma) for 30 min. They were then incubated overnight in primary antibody (see Supplementary Table 4) in 0.5% Triton X-100 and 4% NDS. Samples were then washed in 4% NDS and incubated with secondary antibody from Jackson ImmunoResearch (1:300 anti-rabbit IgG (H + L), 1:500 anti-chicken IgY, or anti-mouse IgG (H + L)) for 45 min at room temperature. Nuclei were stained with Hoechst dye (ThermoFisher, 1:500). Whole organoids were then mounted with Fluoromount-G (Southern Biotech) on cavity slides and cover-slipped. For paraffin sectioning, samples were fixed as described above and washed with PBS. Organoids were then embedded in HistoGel™ Specimen Processing Gel (Richard Allen Scientific) and submitted to the Boston University Experimental Pathology Core Facility for paraffin embedding. Sections were then deparaffinized, followed by antigen retrieval in a laboratory microwave for 3 min at full power and 8 min at 30% power, and were set aside to cool for 30 min.
Sections were then washed and stained as described above (see Supplementary Table 4 for a full list of primary and secondary antibodies). Both stained whole mounts and paraffin-embedded sections were visualized with either a Zeiss LSM 700 laser scanning confocal microscope or a Nikon Eclipse Ti2 Series Microscope, and processed and analyzed in Fiji.

Flow cytometry

Cells for flow cytometry were dissociated using Gentle Cell Dissociation Reagent, followed by resuspension in FACS Buffer comprising PBS −/− with 0.5% FBS. Antibodies for assessment of definitive endoderm and the day 15 sort for lung/intestinal progenitors are listed above, with appropriate isotype (IgG1) and unstained controls. To assess EpCAM expression, organoids were dissociated as described above and then further incubated with 0.05% Trypsin for 20 min at 37 °C. After incubation, the reaction was inactivated with DMEM/20% FBS, and the cells were mechanically dissociated by pipetting. Cells were then spun down, resuspended in FACS Buffer, and stained with anti-EpCAM-APC (BioLegend, cat. no. 324208) for 20 min at room temperature, protected from light. Cells were then washed, resuspended in fresh FACS buffer, and strained into BD FACS tubes (Corning cat. no. 352235). All experiments were performed on a BD FACSCalibur™ or STRATEDIGM S1000EON and analyzed using FlowJo.

Forskolin swelling assay

Forskolin-induced swelling was performed in organoids at days 29–31 of differentiation, using a protocol similar to previously published work 12, 48. Three independent differentiations were performed for each cell line; organoids were plated in three-dimensional Matrigel (at least six wells per differentiation) and incubated in fresh media for 1–2 days prior to forskolin treatment. Images were taken using a Keyence BZ-X700 fluorescence microscope immediately prior to (time 0 h) and 24 h after (time 24 h) the addition of 5 µM forskolin (Sigma). Imaging analysis was performed using ImageJ; sphere cross-sectional surface area was calculated using a binary analysis of circular (circularity > 0.3) and well-sized (area > 900 µm²) organoids. Whole-well sphere cross-sectional area at time 0 was set to 1, and the ratio of time 24 h to time 0 cross-sectional area is reported as normalized cross-sectional area. Time-lapse images were captured using a Keyence BZ-X700 microscope with serial imaging of a mapped well (one per condition) every 2.5 min.

Steady-state lumen area calculation

As previously developed 51, steady-state lumen area (SLA) was calculated by determining the ratio of lumen to whole-organoid cross-sectional area. Using images captured as above, ImageJ was used for quantification (average of 30 organoids per cell line). Epithelial and luminal perimeters were measured manually for each image.

RNA isolation and qRT-PCR analysis

RNA was isolated from all samples using the RNeasy Kit (QIAGEN cat. no. 74014), either immediately after dissociation from tissue culture vessels or after storage at −20 °C in RNAlater (ThermoFisher cat. no. 7020), as per the manufacturer's instructions. RNA was then reverse transcribed to cDNA using the SuperScript™ III First-Strand Synthesis System (Invitrogen cat. no. 18080093) as per the manufacturer's recommended parameters. RNA was quantified using a NanoDrop™ Lite Spectrophotometer (ThermoFisher), and input was standardized across all samples to ensure normalized cDNA yields for downstream PCR applications. qRT-PCR was performed using either TaqMan® or SYBR® Green (Applied Biosystems) master mixes as per the manufacturer's instructions, on the QuantStudio 7 Flex Real-Time 384-Well PCR System with barcoded 384-well plates. Relative fold change above undifferentiated iPSCs was determined by calculating the ΔΔCt, using either GAPDH (TaqMan) or ACTB (SYBR). For primer sequences, see Supplementary Table 3.
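The 2^−ΔΔCt calculation just described reduces to a few lines of arithmetic. The sketch below is a minimal illustration with hypothetical Ct values, using GAPDH as the housekeeping gene and day 0 iPSCs as the calibrator, as in the text.

```python
# Sketch of the 2^-ddCt relative-expression calculation described above.
# Ct values are hypothetical placeholders, not the authors' data.
def fold_change(ct_target, ct_housekeeping, ct_target_cal, ct_housekeeping_cal):
    """Relative expression by the 2^-ddCt method (sample vs calibrator)."""
    d_ct_sample = ct_target - ct_housekeeping       # dCt of the sample
    d_ct_cal = ct_target_cal - ct_housekeeping_cal  # dCt of day 0 iPSCs
    dd_ct = d_ct_sample - d_ct_cal
    return 2 ** (-dd_ct)

# Hypothetical triplicate-averaged Ct values for CDX2 vs GAPDH
print(fold_change(ct_target=24.1, ct_housekeeping=18.3,
                  ct_target_cal=33.0, ct_housekeeping_cal=18.1))
```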
scRNAseq of days 6 and 13 progenitors

Surrogate wells of iPSCs (C17) differentiated toward endodermal lineages (as described above) from a single MF differentiation were isolated at days 6 and 13. Briefly, cells were dissociated and brought to a single-cell suspension with AccuMax, counted, and resuspended in the appropriate volume. Cell isolation, capture, and library preparation followed the 10x Genomics scRNA-Seq (V2) protocol. Libraries were quantified with a KAPA kit and sequenced on an Illumina NextSeq 500. The Cell Ranger software pipeline produced the FASTQ and counts matrix files. The day 6 library yielded 2215 cells at a depth of 53,297 reads/cell, with a mean of 4090 genes detected per cell. The day 13 library yielded 2763 cells at a depth of 53,471 reads/cell, with a mean of 3103 genes detected per cell. Seurat ver. 3.0 was used to further process the data. Data were merged and normalized using the regularized negative binomial regression method 76, with cell degradation (i.e., mitochondrial percentage) regressed out during data scaling. Dimensionality reduction methods (PCA and UMAP) were used to represent gene expression. The Louvain method was used for clustering. Differential expression tests were done with MAST 77. The dataset supporting the conclusions of this experiment is available in the GEO repository, accession GSE140405.

Bulk RNA sequencing by digital gene expression

In order to test differential gene expression, we performed 3′ tag digital gene expression profiling (DGE). Cells (bBU1) were differentiated to day 15, sorted for CD47 lo as described above, and plated in 3D Matrigel in CK + DCI and IM + CHIR. At day 42, RNA was isolated from HIOs as described above. In contrast to traditional bulk RNA-Seq, which generates sequencing libraries from whole transcripts, 3′ tag DGE covers only the terminal fragment of a transcript, complementary to 3′-end sequences 57. Restricting the sequencing coverage to a small part of the transcript reduces the number of reads required to profile the full transcriptome. RNA was extracted and amplified from all described samples as above. Subsequently, library preparation and sequencing were performed at the Broad Institute. Reads were aligned to the ENSEMBL human reference genome GRCh38.9 78 using STAR 79. We used the edgeR package 80 to import, filter, and normalize the count matrix, followed by the limma package 81 with voom 82 for linear modeling and differential expression testing, using empirical Bayes moderation to estimate gene-wise variability before significance testing based on the moderated t-statistic. We used a corrected p-value 83 of 0.05 as the threshold to call differentially expressed genes. Functional characterization was done using Enrichr 28, 29. The dataset supporting the conclusions of this experiment is available in the GEO repository, accession GSE128922.

Statistical analysis

Experimental data for flow cytometry and RT-PCR are reported as mean ± s.d. All statistical analysis was performed using GraphPad Prism software, with statistical significance determined by one-way ANOVA followed by Tukey's test (>2 groups, n = 3 per group except IM + CHIR, n = 2), Student's two-tailed unpaired t-test (2 groups, n = 3 per group), or a paired two-tailed t-test, where *p < 0.05, **p < 0.005, ****p < 0.0001.
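The "one-way ANOVA followed by Tukey's test" comparison named above can be reproduced outside Prism. The sketch below is an illustrative Python equivalent using scipy and statsmodels, with hypothetical fold-change values for three media conditions (n = 3 each); it is not the authors' analysis.

```python
# Sketch of one-way ANOVA + Tukey's HSD, as named under Statistical analysis.
# The fold-change values are hypothetical placeholders.
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

values = [5.1, 4.8, 5.5,   # IM + CK
          1.2, 1.6, 1.1,   # CK + DCI
          2.3, 2.0, 2.6]   # IM + CHIR
groups = ["IM+CK"] * 3 + ["CK+DCI"] * 3 + ["IM+CHIR"] * 3

f, p = stats.f_oneway(values[0:3], values[3:6], values[6:9])
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")

# All pairwise comparisons at alpha = 0.05
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```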
Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

The authors declare that all data supporting the findings of this study are available within the article and its supplementary material files, or from the corresponding author on reasonable request. The dataset supporting the conclusions of the bulk RNA-sequencing experiment (Fig. 1) has been deposited in the GEO repository under accession code GSE128922. The dataset supporting the conclusions of the scRNAseq experiment (Fig. 2) has been deposited in the GEO repository under accession code GSE140405. Further details of iPSC derivation, characterization, and culture are available for free download at .

Code availability

All data analysis was performed using publicly available methodologies. The bulk RNA sequencing was analyzed using the tools described in the Methods section above. The scRNA-seq data was analyzed using the Cell Ranger and Seurat ver. 3.0 pipelines.
Boston researchers have developed a new way to generate groups of intestinal cells that can be used, among other applications, to make disease models in the lab for testing treatments for diseases affecting the gastrointestinal system. Using human induced pluripotent stem cells, this novel approach combined a variety of techniques to develop three-dimensional groups of intestinal cells, called organoids, in vitro, expanding the possibilities for testing disease treatments in the lab using human cells. Published online in Nature Communications, this process provides a novel platform to improve drug screening and uncover new therapies to treat a variety of diseases impacting the intestine, such as inflammatory bowel disease, colon cancer, and cystic fibrosis.

Researchers at the Center for Regenerative Medicine (CReM) of Boston University and Boston Medical Center used donated human induced pluripotent stem cells (hiPSCs), which are created by reprogramming adult cells into a primitive state. For this study, these cells were pushed to differentiate into intestinal cells using specific growth factors in order to create organoids in a gel. This new protocol allowed the cells to develop without mesenchyme, which in other protocols typically provides support for the growing intestinal epithelial cells. By removing the mesenchyme, the researchers could study exclusively the epithelial cells, which make up the intestinal tract. In addition, using CRISPR technology, the researchers were able to create a novel iPSC line that glows green when differentiated into intestinal cells, allowing them to follow the process of intestinal differentiation in vitro.

"Generating organoids in our lab allows us to create more accurate disease models, which are used to test treatments and therapies targeted to a specific genetic defect or tissue—and it's all possible without harming the patient," said Gustavo Mostoslavsky, MD, Ph.D., co-director of CReM and faculty in the gastroenterology section at Boston Medical Center. "This approach allows us to determine what treatments could be most effective, and which are ineffective, against a disease."

Using this new protocol, the researchers generated intestinal organoids from iPSCs containing a mutation that causes cystic fibrosis, which typically affects several organs, including the gastrointestinal tract. Using CRISPR technology, the researchers corrected the mutation in the intestinal organoids. The intestinal organoids with the mutation did not respond to a drug, while the genetically corrected cells did, demonstrating their future potential for disease modeling and therapeutic screening applications.

The protocol developed in this study provides strong evidence for continuing to use human iPSCs to study development at the cellular level, tissue engineering, and disease modeling in order to advance the understanding—and possibilities—of regenerative medicine.

"I hope that this study helps move forward our collective understanding about how diseases impact the gastrointestinal tract at the cellular level," said Mostoslavsky, who also is associate professor of medicine and microbiology at Boston University School of Medicine. "The continual development of novel techniques in creating highly differentiated cells that can be used to develop disease models in a lab setting will pave the way for the development of more targeted approaches to treat many different diseases."
10.1038/s41467-019-13916-6
Biology
Remote areas are not safe havens for biodiversity
Giovanni Strona et al, Ecological dependencies make remote reef fish communities most vulnerable to coral loss, Nature Communications (2021). DOI: 10.1038/s41467-021-27440-z Giovanni Strona et al, Global tropical reef fish richness could decline by around half if corals are lost, Proceedings of the Royal Society B: Biological Sciences (2021). DOI: 10.1098/rspb.2021.0274 Journal information: Nature Communications , Proceedings of the Royal Society B
http://dx.doi.org/10.1038/s41467-021-27440-z
https://phys.org/news/2021-12-remote-areas-safe-havens-biodiversity.html
Abstract

Ecosystems face both local hazards, such as over-exploitation, and global hazards, such as climate change. Since the impact of local hazards attenuates with distance from humans, local extinction risk should decrease with remoteness, making faraway areas safe havens for biodiversity. However, isolation and reduced anthropogenic disturbance may increase ecological specialization in remote communities, and hence their vulnerability to secondary effects of diversity loss propagating through networks of interacting species. We show this to be true for reef fish communities across the globe. An increase in fish-coral dependency with the distance of coral reefs from human settlements, paired with the far-reaching impacts of global hazards, increases the risk of fish species loss, counteracting the benefits of remoteness. Hotspots of fish risk from fish-coral dependency are distinct from those caused by direct human impacts, increasing the number of risk hotspots by ~30% globally. These findings might apply to other ecosystems on Earth and depict a world where no place, no matter how remote, is safe for biodiversity, calling for a reconsideration of global conservation priorities.

Introduction

The effects of human activities on our planet are so pervasive 1 that many denote the current epoch as the Anthropocene 2. In these challenging times for biodiversity, species face extinction 3, 4, and ecosystems deteriorate under the synergistic influence of global hazards (such as climate change) and local human stressors (such as overexploitation) 5, 6. Since global hazards act globally, while local ones are tied to proximity to human activities, their combined effect is expected to decrease with the remoteness of the local ecosystem (Fig. 1a). Therefore, pristine and isolated ecosystems—sometimes referred to as "wilderness areas"—are considered sanctuaries that have the potential to preserve nature during the current and future biodiversity crises 7.

Fig. 1: Theoretical and empirical relationships between remoteness vs local/global hazards and ecosystem vulnerability from ecological dependencies. a Theoretical expectation of a decrease in local and local + global hazards with remoteness, and a counteracting increase in ecosystem vulnerability due to ecological dependencies. b Comparison between reef remoteness, measured as travel time (in log e transformed hours) from a reef locality to the closest major city 21, and local hazards (cumulative local impacts on reef localities for 2013, consisting of six impacts related to fishing activities, light pollution, shipping, nutrient pollution, organic chemical pollution, and direct human interactions on coastal and near-coastal habitats 19). c Comparison between reef remoteness and global hazards (cumulative global impacts on reef localities for 2013, consisting of warming, acidification, and sea level rise 19). d Comparison between reef remoteness and cumulative local + global impacts. e Comparison between reef remoteness and bleaching susceptibility quantified, for each reef locality, as the average bleaching alert level from 1985 to 2019. f Comparison between reef remoteness and fish-coral dependency (quantified as the fraction of fish diversity directly or indirectly connected to corals through a coral → fish → fish network path at 1761 reef localities at a resolution of 1° × 1°). For each relationship, we report the Spearman's rank correlation coefficient (r_s).
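The r_s values reported in Fig. 1b–f are Spearman rank correlations. As a minimal, non-authoritative illustration of that computation (the values below are hypothetical, not the paper's data):

```python
# Sketch of a Spearman rank correlation, as reported in Fig. 1b-f.
# Travel times and hazard scores below are hypothetical placeholders.
from scipy import stats

log_travel_time = [0.5, 1.2, 2.0, 2.8, 3.5, 4.1]  # remoteness proxy (log hours)
local_hazard = [8.2, 6.9, 5.1, 3.8, 2.2, 1.4]     # cumulative local-impact score

r_s, p = stats.spearmanr(log_travel_time, local_hazard)
print(f"r_s = {r_s:.2f}, p = {p:.3f}")  # expect a strong negative correlation
```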
However, local anthropogenic disturbances can favour generalist species over specialized ones 8, 9, 10, as corroborated by previous work showing a positive relationship between the degree of ecological specialization and time without disturbances in in-silico ecological networks 11. In addition, due to the reduced in-flow of individuals into communities, we might also expect a higher specialization of ecological interactions in isolated habitats 12. Specialized consumers can be more efficient in using their (few) resources when these are available but have, in principle, a higher co-extinction risk than generalist species 13, 14. Thus, while specialization increases ecological networks' robustness to species loss under stable environmental conditions, it also makes them more fragile to potential cascading effects of primary extinctions (triggered, for example, by warming) 11. Therefore, undisturbed and isolated communities should have many specialized interactions, increasing their vulnerability to global change (Fig. 1a). This ecological mechanism represents a component of risk that is distinct from, and adds to, the increased chance of local extinction that species experience in isolated habitats 15. Here we test whether a positive relationship between ecological specialization/vulnerability and remoteness exists in natural systems, and whether the resulting increased risk of species loss in remote areas should question the common reliance on remote areas as biodiversity strongholds. For these goals, we focused on one of the most biologically diverse and socio-economically significant ecosystems on the planet, coral reefs, which, despite international attention and global protection programmes, continue to deteriorate under the influence of local human impacts (such as physical destruction and pollution) and the increasing effects of climate change (such as coral bleaching) 16, 17, 18, 19. By assessing the local dependency of fish assemblages on corals across the world's oceans, we show that the increase in the frequency and strength of fish-coral associations with distance from human settlements, combined with the global reach of coral bleaching, obliterates the benefits of remoteness for reef fish local extinction risk.

Results and discussion

Exploring the risk-remoteness relationship in reef fish

We quantified remoteness as travel time to major cities 20, 21 (Fig. 2a). This measure captures both the local impact of direct anthropogenic disturbances (Fig. 1b) and geographical isolation (Supplementary Fig. 1), and is therefore well suited to test our hypotheses. Using a global dataset providing standardized measures of anthropogenic impacts on oceans 19, we quantified the cumulative risk of species loss for reef fish assemblages from local and global hazards. Local hazards stem from direct human activities (six impacts related to fishing activities plus light pollution, shipping, nutrient pollution, organic chemical pollution, and direct human impacts on coastal and near-coastal habitats). They decline with increasing remoteness from human settlements (Figs. 1b, 2b). Global hazards are related to global processes such as ocean warming, ocean acidification, and sea-level rise. They also decline with increasing remoteness, but much more weakly (Figs. 1c, 2c). These patterns indicate that the necessary conditions for the risk-remoteness relationship to occur are met (Fig. 1a).
Fig. 2: Global maps of reef remoteness, local and global hazards, bleaching susceptibility and fish-coral dependency. a Global remoteness of coral reefs, quantified as travel time (in log e transformed hours) from the target reef locality to the closest major city 21 . b log e transformed local hazards (cumulative local impacts on reef localities for 2013, consisting of: six impacts related to fishing activities, light pollution, shipping, nutrient pollution, organic chemical pollution and direct human interactions on coastal and near-coastal habitats 19 ). c log e transformed global hazards (cumulative global impacts on reef localities for 2013, consisting of: warming, acidification and sea level rise 19 ). d log e transformed local + global hazards; e global bleaching susceptibility, quantified as the average bleaching alert level from 1985 to 2019. f Fish-coral dependency, quantified as the proportion of fish species that are directly or indirectly connected to corals through an identified coral → fish → fish network path at 1761 reef localities at a resolution of 1° × 1°. Given that we were able to demonstrate the necessary conditions empirically, we then addressed our primary questions. Specifically, we explored (i) the relationship between reef remoteness and the strength of fish-coral ecological interactions; and (ii) the potential effect of such a relationship on the shape of the risk-remoteness relationship for reef fish. These explorations first required assessing the degree of fish-coral dependency globally. The fish species known from the literature to rely exclusively on corals for food or shelter represent only a fraction (~20%) of local coral reef fish diversity 22 , 23 , 24 , 25 . However, experimental evidence suggests that the loss of corals may affect more than half of fish diversity 26 , as also supported by recent statistical estimates 27 . This mismatch highlights that assessing fish assemblages’ vulnerability to coral loss requires considering the dense network of elusive, direct, and indirect links 28 that create interaction pathways from coral to fish species. To assess the influence of both direct and indirect coral-fish links on fish species persistence, we collected information on the global distribution and ecological traits of 9,143 fish species associated with coral reefs. We used these data and the analytical approaches of previous studies 29 , 30 , 31 , 32 to identify potential trophic and habitat-related associations between corals and fish, and between prey and predatory fish species. We constructed local-scale networks of potential coral → fish → fish interactions (on a spatial grid of 1° × 1° covering 1761 reef localities worldwide) by combining previously published information on fish dependency on corals, spatial co-occurrences of species (accounting for species niche and biogeographical history), and the ecological traits of fish species. Finally, we quantified the dependency of fish assemblages on corals as the proportion of fish species in each locality (i.e., 1° × 1° cell in our grid) with direct or indirect links to corals within the local ecological network (Fig. 2f ). This crucial step identifies indirect dependencies that would not be apparent from simply tallying coral-dependent fish species from the literature. We found that the dependency of fish assemblages on corals increases with coral reefs’ remoteness. These results support the remoteness-specialization hypothesis (Fig. 1f )
and provide an important confirmation that the co-evolutionary mechanisms affecting the emergence of specialization in ecological networks identified by theoretical work 11 , 12 also apply to real-world systems. Furthermore, the average percentage of fish species identified as dependent on corals by our network approach (38% ± 10% s.d.) matches a recent global-scale estimate obtained with a completely independent statistical model (41% ± 18% s.d.) 27 , corroborating the idea that a world without corals might have half as many fish species. We then decomposed the fish-coral dependency by distinguishing between fish directly associated with corals (i.e., having a minimum distance to corals in the network of one link) and fish indirectly linked to corals (i.e., having a minimum distance to corals of more than one link). We found that the relative importance of directly associated fish increases with remoteness (Fig. 3 ), which further strengthens the support for the hypothesis. Not only does the overall fish-coral dependency increase with remoteness from a quantitative perspective, but the relative contribution of direct dependencies becomes stronger. Since we expect the effects of coral loss to be stronger on directly coral-associated fish than on indirectly associated fish, this result reinforces the idea that remote communities will be substantially more affected than accessible ones as the impacts of global change propagate across ecological networks. An extensive set of sensitivity analyses confirms that these results are not affected by potential biases in the availability of information on fish ecology and distribution, nor are they driven by geographical variation in functional redundancy or species abundances (see “Methods” and Supplementary Fig. 2 ). Fig. 3: The relative contribution of direct fish-coral dependency increases with reef remoteness. We decomposed the total fish-coral dependency (i.e. the total fraction of fish species having at least one path to corals in the local coral → fish → fish networks) by distinguishing between fish species having a minimum distance of 1 step (i.e. network link) to corals, and fish species having a minimum distance to corals >1 step. While the fraction of fish with direct associations with corals increases with remoteness, that of indirectly associated fish decreases ( a ). Thus, as we move away from human influence, the relative contribution of direct fish-coral dependency increases from 26 to 68% on average ( b ). The plots summarize the results obtained in 1761 reef localities at a resolution of 1° × 1°. Solid lines represent average values, while shaded areas represent standard deviations. The Spearman’s rank correlation coefficients (r s ) were computed on the full set of results ( n = 1761), and not on the averaged values. Remoteness of coral reefs was quantified as travel time (in log e transformed hours) from the target reef locality to the closest major city 21 . Mapping fish risk hotspots The effect of global and local hazards and that of ecological dependencies show a striking spatial complementarity in determining global reef-fish risk. We mapped areas of high local + global hazards (falling in or above the 70th percentile) as well as areas of high combined fish-coral dependency and bleaching susceptibility.
The latter are reef localities in or above the 70th percentiles of both fish-coral dependency and bleaching susceptibility, and comprise 9.4% of reef localities (165 1° × 1° cells of our global reef map). Comparing the two maps reveals that only nine reef localities (0.5% of the areas highly threatened by local and global hazards) also have high fish-coral dependency and bleaching susceptibility. Thus, when we consider as hotspots of risk all localities from either of the two maps, the total number of reef fish assemblages at risk increases by 29%, from 535 to 691 reef localities (39.2% of reefs) (Fig. 4 ). Further, our study reveals that the fish communities on some of the most remote coral reefs are at relatively high risk of local species extinction (Figs. 2 and 4 ). Fig. 4: Spatial comparison between hotspots of risk from local and global hazards vs. hotspots of risk from fish-coral dependency combined with bleaching risk. a Magenta pixels are reef localities (at a resolution of 1° × 1°) falling above the 70th percentile of local + global hazards (based on 2013 cumulative human impacts on reef localities 19 as in Fig. 2d ); cyan pixels are reef localities falling simultaneously above the 70th percentile of fish-coral dependency (fraction of fish diversity per reef locality directly or indirectly connected to corals through the coral → fish → fish network, as in Fig. 2f ) and above the 70th percentile of bleaching susceptibility (quantified, for each reef locality, as the average bleaching alert level from 1985 to 2019 as in Fig. 2e ); dark blue pixels are reef localities falling in both of the previous categories. b Percentage of reef localities worldwide where the fish community is put at risk by either local + global hazards (magenta line) or by fish-coral dependency combined with bleaching susceptibility (cyan line) for increasing values of remoteness, quantified as travel time (in log e transformed hours) from the target reef locality to the closest major city 21 . c Frequency of reef risk hotspots from either local + global hazards (magenta line), fish-coral dependency combined with bleaching susceptibility (cyan line), or both (dark blue line), for increasing values of remoteness (frequency relative to the respective total number of risk hotspots; data were pooled to the first decimal digit of remoteness). d Percentage of reef localities worldwide where the fish community is put at risk by either local + global hazards (magenta line), fish-coral dependency combined with bleaching susceptibility (cyan line), at least one of these two sources of risk (dashed dark blue line), or both (continuous dark blue line), for different percentile thresholds used to identify hotspots. The thresholds were identified (and applied) independently for local + global hazards, fish-coral dependency and bleaching susceptibility. Thus, the effects of local and global hazards on reef fish assemblages and those of ecological dependencies combined with bleaching vulnerability show a remarkable complementarity. Many areas that are not hotspots of risk from global or local hazards are potential hotspots of risk due to ecological network fragility, and vice versa. This pattern is a strong warning that the ongoing biodiversity crisis is truly global and that distance from human influence does not guarantee safety. In turn, it highlights a profound need to account for ecological dependencies when assessing the risk global change poses to particular species.
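A minimal sketch of the hotspot bookkeeping described above, with random stand-ins for the real layers; only the percentile logic, not the data, is taken from the analysis:

```python
# Sketch of the 70th-percentile hotspot definition and the union count
# (hypothetical inputs in place of the real 1-degree layers).
import numpy as np

rng = np.random.default_rng(1)
n = 1761                                   # 1-degree reef cells
hazards = rng.random(n)                    # rescaled local + global hazards
dependency = rng.random(n)                 # fish-coral dependency
bleaching = rng.random(n)                  # bleaching susceptibility

hazard_hot = hazards >= np.percentile(hazards, 70)
eco_hot = (dependency >= np.percentile(dependency, 70)) & \
          (bleaching >= np.percentile(bleaching, 70))

union = hazard_hot | eco_hot
overlap = hazard_hot & eco_hot
print(f"hazard hotspots: {hazard_hot.sum()}, ecological hotspots: {eco_hot.sum()}")
print(f"overlap: {overlap.sum()}, union: {union.sum()} "
      f"(+{100 * (union.sum() / hazard_hot.sum() - 1):.0f}% vs hazards alone)")
```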
Accounting for ecological dependencies in risk assessment The very different nature of the risk sources makes exploring the potential effect of the remoteness-specialization relationship on global risk projections challenging. Here, the risk assessment framework provided by the IPCC’s fifth assessment report—which quantifies risk as the combination of vulnerability, exposure, and hazard 5 —might provide a formal layout to tackle the challenge. As a proof of concept, we devised an equation that quantifies risk by additively combining global and local hazards with the effect of ecological dependencies, as applied to our fish-coral case study. To include the effect of ecological dependencies, we had to identify a potential “trigger” capable of transforming the vulnerability stemming from fish-coral dependency into an additional component of local risk. An obvious trigger is local susceptibility to bleaching events 16 , 17 , 18 , which we identified based on bleaching alert level data from 1985 to 2019 (Fig. 2e ; see “Methods” for details). Bleaching is a global hazard (in that its cause does not originate from a single point source) that can have local effects. Bleaching susceptibility can indicate the probability of local coral mortality and loss. Combining bleaching susceptibility with the local estimate of fish-coral dependency (from the network analysis) therefore quantifies a local risk for fish communities stemming from the bottom-up effects of coral loss across coral-fish networks. Depending on the weights assigned to either the risk component stemming from global and local hazards or to the one stemming from ecological dependencies (i.e., the α and β terms in Eq. 4 ), we can identify different patterns for the risk-remoteness relationship. The two extremes correspond to the risk emerging from, alternatively, only local and global hazards, or only ecological dependencies triggered by local bleaching susceptibility (Fig. 5 ). However, under the parsimonious assumption that both sources of risk are equally important for fish species (for example, that a coral-dependent fish species would be as threatened by mass coral mortality as by overfishing), the risk-remoteness relationship becomes flat, providing a strong argument that distance from humans does not make a fish community any safer. Fig. 5: Fish-coral dependency modifies the risk-remoteness relationship. Coral reef remoteness was quantified as travel time (in log e transformed hours) from the target reef locality to the closest major city 21 . The blue dots represent risk quantified as the sum of threats from local + global hazards on reefs (as in Fig. 2d ), while magenta dots represent risk quantified as bleaching susceptibility × fish-coral dependency. Both components of risk (i.e., local + global hazards and bleaching susceptibility × fish-coral dependency) were rescaled between 0 and 1. The two rescaled risk components are then combined into a single risk assessment equation where risk = [ α (local + global hazards) + β (bleaching susceptibility × fish-coral dependency)]/2. The lines in the plot represent the slopes of the trend lines from different parametrizations of the risk equation. When equal weight is given to the two risk components, risk remains almost constant across remoteness values (trend line slope = −0.002, black dashed line).
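The weighting scheme in the caption above can be sketched in a few lines of Python; the data are synthetic, and only the rescaling and the risk equation itself follow the text (Eq. 4 in “Methods”):

```python
# Sketch of the weighted risk equation behind Fig. 5 (synthetic data):
# risk = [alpha * (local + global hazards) + beta * (bleaching x dependency)] / 2,
# with beta = 2 - alpha so that risk stays in [0, 1].
import numpy as np
from scipy.stats import linregress

def rescale(x):
    return (x - x.min()) / (x.max() - x.min())

rng = np.random.default_rng(2)
remoteness = rng.uniform(0, 8, 1761)
hazards = rescale(np.exp(-0.4 * remoteness) + rng.normal(0, 0.05, 1761))
eco_risk = rescale(0.1 * remoteness + rng.normal(0, 0.2, 1761))  # dependency x bleaching

for alpha in (2.0, 1.0, 0.0):
    beta = 2.0 - alpha
    risk = (alpha * hazards + beta * eco_risk) / 2.0
    slope = linregress(remoteness, risk).slope
    print(f"alpha = {alpha:.0f}: risk-vs-remoteness slope = {slope:+.3f}")
```

With equal weights (alpha = beta = 1), the two opposing trends cancel and the fitted slope approaches zero, which is the flattening the figure reports.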
The risk-remoteness relationship in global conservation With reef fish providing protein to half a billion people worldwide 33 , and given the critical importance of fish for addressing micronutrient deficiencies 34 , our results have profound societal implications: remote coral reefs will not be able to compensate for the losses of coral and fish species directly impacted by human activities, threatening the livelihoods of millions. Our study also reveals an essential macroecological and eco-evolutionary mechanism that might dramatically amplify risks from global change in natural systems. The risk patterns observed for reef fish communities suggest that our already disconcerting projections about biosphere fragility might be overly optimistic. Moreover, the results of our study temper any hopes that, by protecting wilderness areas, we safeguard biodiversity vaults that can withstand the past and ongoing environmental destruction and changes brought by the Anthropocene. Therefore, aggressively addressing global hazards while supporting local management and conservation at both intensely used and remote locations emerges as the only hope to reverse the current biodiversity crisis. Methods Fish distribution We rasterized a detailed reef distribution vector map 35 at 5 × 5 latitude/longitude degrees (by considering as reef area each cell in the raster intersecting a polygon in the original shapefile). We collected all the occurrences of fish species intersecting the rasterized reef area from both the Ocean Biogeographic Information System 36 and the Global Biodiversity Information Facility 37 . We used taxonomic and biogeographical (i.e., latitudinal/longitudinal extremes for a given species) information from FishBase 38 to exclude potentially incorrect occurrences (i.e., all the records falling outside the known species ranges). We also restricted the list to all the species for which FishBase provided relevant ecological information (as these data were needed to evaluate prey-predator species interactions and identify indirect links between fish species and coral, see below). The filtered list comprises 9143 fish species. For these species, we used occurrence data to generate species ranges. For this, we used the α-hull procedure 39 , but instead of pre-selecting an α parameter and using it for all species, we developed a procedure to obtain conservative species ranges while including most of the known occurrences. First, we selected a very small α (0.001) to obtain a hull including most of the occurrences. Then, we progressively incremented α in small amounts (0.005), computing, for each increment, the ratio between the relative reduction in the resulting hull area (with respect to the previous hull) and the relative reduction of occurrences included in the hull (with respect to the total number of available occurrences for the target species). We stopped increasing α when the ratio became <10. This procedure ensured that only isolated sites far from the core distribution of a species were excluded, while the range was stretched as much as possible around known occurrences. After delineating ranges for each species, we rasterized the reef vector map at a higher resolution (1 × 1 latitude/longitude degree) and used it as a reference layer to extract fish occurrences at each reef location. This resolution is finer than that used by other global studies on reef fish diversity and distribution 40 , 41 .
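A hedged sketch of this stopping rule follows; the third-party alphashape package is assumed here purely for illustration (any α-hull implementation returning a shapely polygon would serve), and the exact bookkeeping of the published scripts may differ:

```python
# Sketch of the incremental alpha selection described above.
# Assumes: pip-installable `alphashape` and `shapely`; `points` is a list
# of (lon, lat) tuples for one species.
import alphashape
from shapely.geometry import Point

def species_range(points, alpha0=0.001, step=0.005, ratio_cutoff=10.0):
    """Grow alpha until tightening the hull discards occurrences too fast."""
    n_total = len(points)
    alpha = alpha0
    hull = alphashape.alphashape(points, alpha)
    inside = sum(hull.contains(Point(p)) for p in points)
    for _ in range(1000):                              # hard cap keeps the sketch safe
        cand = alphashape.alphashape(points, alpha + step)
        d_area = (hull.area - cand.area) / hull.area   # relative area reduction
        cand_inside = sum(cand.contains(Point(p)) for p in points)
        d_occ = (inside - cand_inside) / n_total       # relative occurrence reduction
        if d_occ > 0 and d_area / d_occ < ratio_cutoff:
            break                                      # ratio fell below 10: stop
        hull, inside, alpha = cand, cand_inside, alpha + step
    return hull
```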
We took the 1° × 1° reef raster as the reference grid in all subsequent analyses and spatial interpolations, considering all the reef cells hosting at least five fish species ( n = 1761). Fish distribution validation To validate the fish distribution data, we compared them with a smaller independent dataset (GASPAR) providing fish occurrences for 196 globally distributed reef localities 42 , which we rasterized against the same reference grid used for our fish and coral distribution data. Because this dataset is based on comprehensive checklists, its information can be considered ascertained presence-absence data. Thus, we compared our list of fish occurrences (at one degree) in each cell where data from the GASPAR dataset were also available, computing the true skill statistic as TSS = [( a × d ) − ( b × c )]/[( a + c ) × ( b + d )], with a being predicted and observed occurrences; b being predicted, but not observed occurrences; c being observed, but not predicted occurrences; and d being neither observed nor predicted occurrences. We obtained a median TSS of 0.53, with a median sensitivity (the proportion of correctly predicted presences) of 0.60 and a median specificity (the proportion of correctly predicted absences) of 0.96, indicating that our mapped ranges were sufficiently conservative and rarely generated false presences. Finally, given that we were analysing coral reef fishes, we excluded a few grid cells where our methods returned no fish species. Environmental data We obtained environmental data (surface temperature, salinity, pH, and total chlorophyll as a proxy for productivity) at a spatial resolution of 5 arcmin from Bio-ORACLE v2.0 43 , and we upscaled these data on the reference reef grid (averaging the variable values in each 1 × 1 latitude/longitude degree cell). Human impact As a measure of human impact on reef localities, we used the 14 cumulative human impact layers (for 2013) 19 . For the purposes of our analysis, we categorized them into “local hazards” stemming from direct human impacts (specifically, six impact layers related to fishing activities plus light pollution, shipping, nutrient pollution, organic chemical pollution, and direct human interactions on coastal and near-coastal habitats, such as trampling); and “global hazards” related to planetary-wide processes (warming, acidification and sea level rise). The original dataset has a resolution of 1 km 2 and was therefore upscaled on the reference reef grid (averaging the variable values in each 1 × 1 latitude/longitude degree cell). Travel time to cities We quantified the “remoteness” of each reef locality in terms of travel time (based on the fastest possible local means of terrestrial and aquatic transportation, hence excluding air travel) to the closest human settlement. For this, we used the procedure described in Weiss et al. 21 , which consists of first combining information on land types and use, topography, the distribution of roads and railways, and the position of national borders to derive a friction surface raster map indicating the average speed at which humans can travel through each pixel; and then applying an algorithm to identify the least costly paths (i.e. those requiring the shortest travel time) from each pixel to a target locality (e.g. a city) 21 . The original publication 21 provides a global map of accessibility that does not include water localities, which is clearly problematic for reefs.
We therefore produced a new map of travel time (in hours) including also water pixels (at the same resolution as Weiss et al. 21 , i.e. 1 km 2 ) by using their friction map, the same layer of human urban centres (the ‘high-density centres’ variant of the Global Human Settlements 44 ) and the same cost distance algorithm (cumulative cost distance, which we computed using SAGA GIS 45 ). Then, we upscaled the high-resolution map on our grid of 1 × 1 degree reef localities (computing the mean accessibility for each 1 × 1 degree cell). Bleaching susceptibility We downloaded annual layers reporting the maximum bleaching alert level at the global scale and at a resolution of 50 km from 1985 to 2019 46 . Alert levels range from 0 (no stress) to 4 (mortality likely). We upscaled each layer on the reef reference grid (averaging alert level data) and computed an index of bleaching susceptibility as the average of the recorded alert levels in each coral reef pixel of the reference raster. Building ecological networks of fish → fish interactions We built networks of fish → fish interactions using a multi-step procedure. (1) We generated a model capable of predicting the probability of occurrence of a prey-predator interaction between two given fish species based on some of their functional and ecological traits. For this, we obtained information on fish body size, trophic level, minimum and maximum depth, and habitat preference for 17,722 fish species from FishBase 38 , OBIS 36 and GBIF 37 (from the latter two sources, we specifically derived complementary data on species depth occurrences, which we used to fill in gaps in FishBase). We combined this information with a large dataset of known prey-predator interactions assembled from the Global Biotic Interactions dataset, GLOBI 47 . After filtering GLOBI according to the set of species with available ecological information and removing replicated records, we obtained 11,188 individual prey-predator pairs (for a total of 2643 species). We then identified an identical number of absences (pairs of species not interacting, and hence not having a link in the network). GLOBI includes only observed interactions, and does not provide explicit information on non-interacting species. Although one could ideally generate a list of absences by sampling from all pairwise combinations of species not listed by GLOBI, this procedure might lead to the mislabelling of an actual prey-predator pair as a non-interacting pair simply because the species combination is missing from the database. To reduce this risk and generate “reliable” pseudo-absences (that is, truly representative of associations not possible in the real world), we used a stochastic approach where we sampled species pairs at random from all possible species combinations not present in GLOBI, with two additional constraints: the prey needed to be at least 30% larger than the predator and/or the predator needed to have a trophic level ≤3.0 (according to the FishBase trophic classification). (2) We then used a random forest classifier (a machine learning technique; we used the Python package Scikit-learn 48 ) where the dependent variable was the presence or (pseudo) absence of interactions, and the independent variables were prey and predator traits (prey body size, prey trophic level, prey minimum and maximum depth and eight dummy variables for habitat; and the same variables for the predator, for a total of 24 independent variables).
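A minimal scikit-learn sketch of this classifier setup is shown below; the file and column names are hypothetical, and the hold-out evaluation it performs is the one described next in the text:

```python
# Minimal sketch of the trait-based interaction classifier (hypothetical
# file/column names; the real feature set has 24 prey/predator variables).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

def tss(y_true, y_pred):
    """True skill statistic: sensitivity + specificity - 1,
    algebraically equal to (ad - bc) / ((a + c)(b + d))."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tp / (tp + fn) + tn / (tn + fp) - 1

pairs = pd.read_csv("interaction_pairs.csv")    # hypothetical: one row per species pair
X = pairs.drop(columns="interacts")             # the 24 prey/predator trait columns
y = pairs["interacts"]                          # 1 = GLOBI record, 0 = pseudo-absence

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
clf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
clf.fit(X_tr, y_tr)
print("hold-out TSS:", round(tss(y_te, clf.predict(X_te)), 2))

# Score the candidate piscivore x prey list (same trait columns assumed),
# keeping only links predicted with probability >= 0.9.
candidates = pd.read_csv("candidate_pairs.csv")  # hypothetical file
kept = candidates[clf.predict_proba(candidates)[:, 1] >= 0.9]
```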
We first explored the predictive ability of the model by training it on a random subsample (50%) of the dataset (including true presences and pseudo-absences), and then testing it on the remaining fraction. The model performed very well, being capable of predicting observed (true positives) and unobserved interactions (true negatives) in the testing set with exceptional precision and accuracy (TSS = 0.93; type I error rate = 0.05; type II error rate = 0.02). After this first exploration, we used the full dataset to train the model to be used on the actual data. The out-of-bag validation score of the final model based on the complete dataset was >0.97. The random forest predictor was used to assess the probability of a trophic interaction for a large list of potential interactions generated by combining all fish species from our reef fish occurrence dataset known to rely mainly or exclusively on fish for their survival (i.e. “true piscivores”, FishBase trophic level > 3.5) with all the fish in the dataset. The full list included 31,768,450 potential interactions, which we reduced to 6,721,450 interactions by keeping only the interacting pairs identified by the random forest classifier with a probability ≥0.9. (3) If an ecological dependency between two species is actually manifested, then the two species must co-occur at some locations; in other words, co-occurrence is a necessary prerequisite for an ecological dependency. Following this logic, we took a final, additional step to further filter and improve the fish → fish interaction list. In particular, we quantified the tendency for species to co-occur in the same locality as one potential proxy layer for species interactions, complementary to our other approaches. Various factors can affect the co-occurrence of two species. In a simplification, co-occurrence can emerge from stochasticity, shared environmental requirements, shared evolutionary history, and ecological dependencies. We attempted to disentangle the effect of the last factor from the first three. For each target species pair, we computed overlap in distribution as the raw number of reef localities where both target species were found. Then, we compared this number with the null expectation obtained by randomizing the distribution of species occurrences across reef localities. We designed a null model accounting for randomness, species niche and biogeographical history, and hence randomizing the occurrence of species only within areas where they could have possibly occurred according to environmental conditions and biogeographical factors (e.g., in the absence of hard or soft barriers). To implement the null model, we first excluded from the list of potential localities all the areas outside the biogeographical regions where the target species had been recorded, with regions identified according to Spalding et al. 49 . Then, within the remaining areas, we identified all the reef localities with climate envelopes favourable to target species survival. For this, we identified the minimum and maximum of major environmental drivers (mean annual surface temperature, salinity, pH) where the target species occurred, and then we identified all the localities with conditions within these limits. We generated, for each pairwise species comparison, one thousand randomized sets of species occurrences by randomly rearranging species occurrences within all suitable localities. We quantified co-occurrence between the species pair in each random scenario.
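The randomization just described can be sketched as follows, with boolean occurrence and suitability vectors as hypothetical inputs; the p-value comparison spelled out in the next paragraph is included for completeness:

```python
# Sketch of the co-occurrence null model: occurrences are reshuffled only
# within localities deemed suitable for each species (biogeography plus
# climate envelope), as described above.
import numpy as np

def cooccurrence_pvalue(occ_a, occ_b, suitable_a, suitable_b, n_rand=1000, seed=0):
    """occ_* and suitable_*: boolean vectors over the 1761 reef cells."""
    rng = np.random.default_rng(seed)
    observed = np.sum(occ_a & occ_b)
    null = np.empty(n_rand)
    for i in range(n_rand):
        rand_a = np.zeros_like(occ_a)
        rand_b = np.zeros_like(occ_b)
        # place the same number of presences at random suitable cells
        rand_a[rng.choice(np.flatnonzero(suitable_a), occ_a.sum(), replace=False)] = True
        rand_b[rng.choice(np.flatnonzero(suitable_b), occ_b.sum(), replace=False)] = True
        null[i] = np.sum(rand_a & rand_b)
    # p-value: fraction of null co-occurrences >= the observed one;
    # pairs with p < 0.05 are retained as candidate interactions
    return np.mean(null >= observed)
```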
Finally, we compared the observed co-occurrence with the random co-occurrences, computing a p -value as the fraction of null models with co-occurrence equal to or higher than the observed one. We kept only the pairs with a p -value < 0.05. This further reduced the fish → fish list to 1,365,863 interactions. We used this global network to build site-specific interaction networks for all 1° × 1° reef localities of our reference grid, filtering it according to the local fish species pool. Measuring fish-coral dependency We compiled from the literature 22 , 23 , 24 , 25 a list of fish species known to be associated with corals, in terms of habitat and/or trophic specialization. This list includes 44% of the fish species we used in our analysis (4040/9143). As above, we used the known associations (or lack thereof) in the dataset to identify coral dependency in the unassessed fish. For this, we trained two independent random forest classifiers (again using the Python package Scikit-learn 48 ), one to model generic habitat associations and the other to model corallivory. In both models, the dependent variable was the presence/absence of coral association, and the independent variables were the same ecological features used to predict fish → fish trophic interactions (i.e. body size, trophic level, minimum and maximum depth and eight dummy variables for habitat), plus an additional variable quantifying the fraction of documented coral-associated species in the family of the target fish. Both models showed high precision and accuracy (with a TSS of 0.57 for the habitat association model, and of 0.81 for the corallivory model). Combining the list of coral-dependent species from the literature ( n = 897) with our model predictions ( n = 356) yielded a total of 1253 fish species. We linked all the coral-dependent species in the local fish → fish networks to a symbolic “coral” node. Then, we quantified the overall dependency of fish assemblages on corals in each reef locality as the fraction of fish having at least one (unidirectional) path to corals across network links. We opted for this simple and intuitive measure after finding that it produced virtually identical results to several more complex measures of fish-coral dependency that we explored (such as weighted and unweighted network distance between individual fish species and coral genera, and dependency values estimated using co-extinction simulations 50 ). For each network, we also quantified, separately, the fraction of fish species directly associated with corals (i.e., having a minimum distance to corals in the network of one link) and indirectly associated with corals (i.e. having a minimum distance to corals of more than one link). Risk assessment framework Following the definitions from the IPCC’s fifth assessment report, we separate vulnerability (a combination of sensitivity and adaptive capacity) from exposure to an extrinsic forcing agent (‘hazard’). Then we quantify risk as the combination of vulnerability, exposure, and hazard 5 . Assuming, for illustrative purposes, a combined linear effect of local and global hazards on the risk experienced by a target system, we can model the latter ( R ) as: $$R = E \times (H_{\text{local}} \times V_{\text{local}} + H_{\text{global}} \times V_{\text{global}}),$$ (1) with E being exposure, and H local , H global , V local and V global being local and global hazards and their respective vulnerabilities.
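Before simplifying this equation, the “ecological dependency” term that will enter it can be made concrete. Below is a minimal networkx sketch (with toy species names) of the path-based dependency measure defined in the previous subsection, including the direct/indirect decomposition used in Fig. 3:

```python
# Sketch of the path-based dependency measure (toy network): edges point
# coral -> fish and prey -> predator, so a fish depends on corals iff it
# is reachable from the symbolic "coral" node.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("coral", "butterflyfish"),   # direct (habitat/trophic) association
    ("coral", "damselfish"),
    ("damselfish", "grouper"),    # grouper depends on corals indirectly
    ("wrasse", "jack"),           # neither wrasse nor jack reaches corals
])

dist = nx.single_source_shortest_path_length(g, "coral")
dependent = {sp for sp in dist if sp != "coral"}
direct = {sp for sp, d in dist.items() if d == 1}
indirect = dependent - direct

fish = set(g.nodes) - {"coral"}
print(f"dependency = {len(dependent) / len(fish):.2f}")  # fraction with a path to corals
print("direct:", sorted(direct), "| indirect:", sorted(indirect))
```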
If we then focus on average per-species risk, and assume no relationship between a system’s remoteness and its intrinsic vulnerability to local and global hazards, we can further simplify the equation by setting E , V local and V global to 1: $$R = H_{\text{local}} + H_{\text{global}}$$ (2) To account for the effect of the expected increase in ecological dependencies with remoteness 8 in the illustrative risk assessment model described by Eq. ( 2 ), we can add one term quantifying the combined effect of the vulnerabilities emerging from ecological dependencies and the exposure to relevant hazards capable of exploiting such vulnerabilities and triggering cascading effects through interaction links (“triggers”): $$R = [\alpha\,(H_{\text{local}} + H_{\text{global}}) + \beta\,(\text{ecological dependency} \times \text{triggers})]/2$$ (3) Here, α and β are weights that can be used to modulate the relative importance of the two risk components (impacts from humans and global change vs ecological dependencies). Assuming that both risk components are rescaled in [0,1], to keep R in [0,1] we need to set 0 ≤ α ≤ 2 and β = 2 − α . Applying the risk assessment framework to reef fish communities We modelled the local risk of a reef fish community (in each 1° × 1° grid cell of the reef reference raster) using two different approaches. First, we quantified the risk as originating from the sum of local and global hazards (Eq. ( 2 )), where local and global hazards refer to the human impact layers 19 , as described in the “Human impact” section above. Then, we re-assessed risk for each reef fish assemblage accounting also for the risk component possibly deriving from ecological (fish-coral) dependencies combined with a relevant hazard (e.g., death of coral species due to bleaching) capable of triggering cascading effects across species interaction links, by adapting Eq. ( 3 ): $$R = [\alpha\,(H_{\text{local}} + H_{\text{global}}) + \beta\,(\text{coral dependency} \times \text{coral bleaching susceptibility})]/2$$ (4) Fish-coral dependency and coral bleaching susceptibility were assessed as described in the sections above. To make the different components of risk comparable, prior to computing risk we rescaled both local + global hazards and fish-coral dependency × coral bleaching susceptibility between 0 and 1 across all reef localities. We did the same for the two sets of risk assessment values obtained using either Eq. ( 2 ) or Eq. ( 4 ) (to permit direct comparison between the shapes of the risk-remoteness relationships). Both equations ideally provide the average risk of a species in a given locality, that is, they assume exposure = 1. Also, they assume that the average local degree of vulnerability towards either local or global hazards is constant among localities; therefore, the respective vulnerability terms can be removed from the risk equations given that they are constants which would affect each locality the same. See the “Assumptions of the risk assessment equations” section below for additional discussion of these issues. Assumptions of the risk assessment equations In this study we demonstrated how the framework of environmental risk assessment could incorporate species dependencies to more thoroughly examine the relationship between risk and remoteness.
The proposed risk assessment equations are not intended to provide a definitive global risk assessment of reef fish assemblages. Instead, they serve to assess whether, and to what degree, the risk component stemming from ecological dependencies can affect the expected relationship between risk and remoteness. The exact form of the equations is not overly important. In our equations we assumed constant vulnerability of fish assemblages to local and global hazards. That is, we ignored hazard-specific vulnerabilities. Although fish on coral reefs are likely vulnerable to the various hazards to different extents, modelling this amount of complexity would be extremely difficult. Considering the multiplicity of hazards per locality, and their potential complex interactions, it would be extremely challenging to obtain precise and realistic values for each of them to test our assumptions. However, we were able to compile several proxies of potential vulnerability to some of the main hazards; in particular, we computed the average vulnerability to fishing for all fish species in each reef locality, using the vulnerability measure provided by FishBase and based on the method by Cheung et al. 51 . Based on the geographic distributions of the species, we determined the temperature, pH, and organic matter limits for each species, and then used these data as indicators of each species’ potential tolerance to changes in temperature, acidification and organic pollution. Based on species habitat preference as defined by FishBase, we determined the fraction of demersal, benthopelagic, and coral-associated species, as likely more affected by direct human disturbances (such as trampling), and the proportion of pelagic fish, as potentially affected by shipping. We then compared those vulnerability proxies with remoteness, finding no strong relationships which would need to be incorporated into the risk equations (Supplementary Fig. 3 ). Then, we explored whether our results held when exposure was taken into account (i.e., projecting the average per-species risk onto the full fish assemblages). Exposure is a typical parameter involved in environmental risk assessment. For this, we multiplied the risk by the (log e transformed) corresponding fish diversity. The observed patterns (Supplementary Fig. 4 ) were consistent with those relative to average species risk, which means that our conclusions scale up to fish assemblages. Again, the results of our study do not provide absolute estimates of risk for any of the fish species or coral reefs. However, with further research, we believe such estimates could be realistically obtained given sufficient species-specific data and more information about how the detrimental effects of each hazard are manifested. Sensitivity analyses We performed various analyses to check the robustness of our results and conclusions against potential biases stemming from data availability. In particular, we focused on potential relationships between the quality and quantity of information on species ecology and distribution, and remoteness. First, we checked for unequal distribution of sampling effort, under the hypothesis that remote localities could be less investigated than those close to human settlements. A comparison between the number of fish records available from OBIS 36 and GBIF 37 vs remoteness across all 1° × 1° reef localities revealed that this is not the case, with sampling effort remaining relatively high across all localities regardless of remoteness (Supplementary Fig. 2a , R 2 = 0.0008).
We then explored whether the availability and quality of the ecological information we used in our analyses decreased with remoteness. For this, we evaluated how the TSS values obtained from the comparison between the species ranges devised with our procedure and independent species distribution data from the GASPAR dataset 42 varied across reef localities with remoteness. We found no relationship (Supplementary Fig. 2b , R 2 = 0.0292). We also looked at the individual species TSS values obtained by comparing the distribution of a target species devised by our procedure with that according to the GASPAR dataset. Consistent with the previous result, we found no pattern linking the average of local species’ TSS values to remoteness (Supplementary Fig. 2c , R 2 = 0.0001). We also explored whether remoteness negatively affected the fraction of species (for which we had distributional data) that had to be discarded in each locality due to the lack of the ecological information needed in our analyses. Again, the analysis revealed no effect of remoteness on data availability (Supplementary Fig. 2d , R 2 = 0.0992). Another potential question arising from our conclusions is whether they would still be valid when species abundances are considered alongside species diversity. To explore this issue, we tested whether the relative abundance of coral-dependent fish changes with remoteness using all the data available from the Reef Life Survey (RLS) dataset 52 . Finding that coral-dependent fish become less abundant as remoteness increases would weaken our results, as the increasing species-level vulnerability stemming from coral dependency would be counterbalanced by the reduction in the overall number of individuals threatened by coral loss. This is not the case. On average, coral-associated fish are more abundant than the other species (with an average number of individuals per survey of 782 for coral-associated species vs 658 for non-associated species). More importantly, the local proportion of associated individuals is unaffected by remoteness (Supplementary Fig. 2e , R 2 = 0.0002). Finally, we tested whether our results could be driven or confounded by a potential relationship between functional redundancy and remoteness. We quantified functional redundancy in each locality as one minus the ratio between the number of unique functional entities and total species richness. We identified functional entities using the method and functional diversity datasets of Mouillot et al. 53 . We found no relationship between functional redundancy and remoteness (Supplementary Fig. 2f , R 2 = 0.0042). Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability All the data used in the analysis are freely available online from the sources listed in the Methods section, in particular: (1) the reef distribution map; (2) fish occurrence data from OBIS and GBIF (Actinopterygii, Elasmobranchii, Holocephali, Sarcopterygii); (3) fish ecology data from FishBase; (4) ocean impact layers; (5) the friction surface map needed to compute accessibility; (6) human settlement data; (7) bleaching alert data; (8) environmental layers; (9) marine eco-regions; (10) fish trophic interactions (GLOBI); (11) reef fish abundance data (Reef Life Survey); and (12) the GASPAR dataset.
The list of coral-associated fish compiled from the literature, the data used for the fish range validation, and the dataset of functional traits are provided together with all scripts used in the analyses in ref. 54 . Code availability All the scripts and data needed to replicate the analyses and reproduce the figures are available from ref. 54 .
An international research team led by Associate Professor Giovanni Strona from the University of Helsinki has identified a general macroecological mechanism that calls for a reconsideration of global conservation strategies. "To truly understand how global change is affecting natural communities and to identify effective strategies to mitigate the ongoing dramatic biodiversity loss, it is fundamental to account for the overarching complexity emerging from biotic interactions. As we show in our new research, doing this might reveal important counterintuitive mechanisms," Giovanni Strona says. The researchers combined a massive dataset of fish distribution and ecological traits for more than 9,000 fish species. Using artificial intelligence techniques, they generated thousands of networks mapping the interactions between corals and fish and those between fish prey and fish predators in all reef localities worldwide. They quantified, for each locality, the degree of fish dependency on corals. This analysis confirmed what Strona and colleagues showed in another paper published earlier this year: coral loss might detrimentally affect, on average, around 40 per cent of fish species in each coral reef area. The researchers also found that the dependency between fish and corals becomes stronger the further away they are from humans. This means that fish communities in remote reefs might be the most vulnerable to the cascading effects of coral mortality. Areas of critical vulnerability Next, the researchers asked whether the increased risk that stems from the potential cascading effects of coral mortality might counteract the benefits that remote fish communities experience because they are far away from direct impacts of human activities. "For this, we devised a novel risk assessment framework that is applicable to any ecosystem. It combines local anthropogenic impacts such as overfishing and pollution and global impacts like climate and environmental change with the risk deriving from ecological interactions," explains Mar Cabeza, head of the Global Change and Conservation Lab at the University of Helsinki. The framework revealed that taking into account ecological dependencies flattens the expected negative relationship between extinction risk for fish communities and remoteness. "For example, the hotspots of risk for fish communities from local human-derived impacts and global change are almost perfectly complementary to the hotspots of risk from fish-coral dependencies. This produces a global map of risk for fish communities where no place is safe, regardless of distance from humans," Giovanni Strona says. "The validity and relevance of these findings might extend far beyond reef fish, depicting a world where remote localities, rather than safe havens for biodiversity, might be, instead, areas of critical vulnerability," Mar Cabeza concludes. The research was published in Nature Communications.
10.1038/s41467-021-27440-z
Biology
Amino acid in fruit fly intestines found to regulate sleep
D-Serine made by serine racemase in Drosophila intestine plays a physiological role in sleep, Nature Communications (2019). DOI: 10.1038/s41467-019-09544-9 , www.nature.com/articles/s41467-019-09544-9 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-019-09544-9
https://phys.org/news/2019-05-amino-acid-fruit-intestines.html
Abstract Natural D-serine (D-Ser) was detected in animals more than two decades ago, but little is known about its physiological functions. Here we reveal sleep regulation by endogenous D-Ser. Sleep was decreased in mutants defective in D-Ser synthesis or its receptor, the N-methyl-D-aspartate receptor 1 (NMDAR1), but increased in mutants defective in D-Ser degradation. D-Ser but not L-Ser rescued the phenotype of mutants lacking serine racemase (SR), the key enzyme for D-Ser synthesis. Pharmacological and triple gene knockout experiments indicate that D-Ser functions upstream of NMDAR1. Expression of SR was detected in both the nervous system and the intestines. Strikingly, reintroduction of SR into specific intestinal epithelial cells rescued the sleep phenotype of sr mutants. Our results have established a novel physiological function for endogenous D-Ser and a surprising role for intestinal cells. Introduction Amino acids exist as stereoisomers, with all common amino acids except glycine having L- and D-enantiomers depending on the relative spatial arrangement surrounding the α-carbon. Though L-amino acids were traditionally thought to be the only natural form, D-amino acids have been found in biological organisms. Free D-serine (D-Ser) has been found in species ranging from bacteria to mammals 1 , 2 , 3 , 4 . D-Ser is an effective co-agonist of the N-methyl-D-aspartate subtype of glutamate receptor (NMDAR) 5 , 6 . D-Ser is synthesized from L-Ser by serine racemase (SR) 7 and degraded by D-amino acid oxidase (DAAO) 4 and SR 8 . The distribution of D-Ser and NMDAR, as determined by chemical measurement 9 and immunohistochemistry 10 , supports D-Ser as an endogenous co-agonist acting on the glycine modulatory site of the NR1 subunits of the NMDAR 11 , 12 . A role for endogenous D-Ser in synaptic transmission was confirmed by selective degradation of D-Ser with DAAO, which attenuated NMDAR function, and by its rescue with D-Ser 13 . It was proposed that the synaptic NMDAR is activated by D-Ser, whereas the extrasynaptic NMDAR is gated by glycine 14 . Sleep is important for animals and is regulated by both circadian and homeostatic processes 15 . While significant progress has been made in the molecular understanding of circadian rhythm, much less is known about the homeostatic regulation of sleep. For more than a decade, Drosophila has been used as a model for genetic studies of sleep 16 , 17 . Genes and brain regions regulating sleep have been identified 18 , 19 , 20 , 21 . Recently, NMDAR and D-Ser have been implicated in sleep regulation in both flies and mammals 22 , 23 , 24 . However, whether D-Ser regulates sleep remained unclear. Here, through a genetic screen followed by a thorough investigation of the synthases, the oxidases, and the receptor of D-Ser, combined with pharmacological and genetic epistasis experiments, we report evidence that sleep is regulated by D-Ser through NMDAR1. Furthermore, the synthases, the oxidases, and the receptor of D-Ser were all found to be expressed in the central nervous system and in the intestine. Strikingly, the intestinal but not the neuronal expression proved important for sleep regulation, indicating a novel role of the intestine in sleep regulation. Taken together, these results suggest that D-Ser made by intestinal SR promotes sleep through NMDAR1 in Drosophila .
Results Decreased sleep in shmt mutants and rescue by L-Ser or D-Ser In a screen of homozygous P-element insertion lines for mutations affecting sleep, we found that sleep duration was decreased when a P element was inserted into the CG3011 gene. Analysis of its sequence (Fig. 1a and Supplementary Fig. 1 ) indicates that CG3011 encodes serine hydroxymethyltransferase (SHMT), which participates in the synthesis of L-Ser 25 , 26 (Fig. 1b ). There are three isoforms of shmt in the fly; the original mutant uncovered by our screen contained a P element insertion in the 5′ non-coding region of isoform A (Fig. 1a ). To investigate the function of Drosophila SHMT, we generated mutations in the shmt gene using CRISPR-Cas9. Deletion of all three isoforms caused lethality, whereas a frameshift mutation introducing a STOP codon in the first coding exon of shmt , affecting only isoform A, resulted in viable shmt mutants ( shmt-es in Fig. 1a ). The mRNA level of isoform A shmt in shmt-es was significantly decreased compared with wild-type ( wt ) flies, as detected by quantitative polymerase chain reaction (qPCR) analyses (Fig. 1c ). The shmt-es mutants were backcrossed into an isogenic Canton-S (CS) line in our lab 27 and used in further analysis. Fig. 1 Sleep phenotypes of shmt mutants. a A schematic representation of a point mutation leading to a premature stop codon in shmt (thus shmt-early stop or shmt-es ). Also shown is the amino acid sequence of the shmt-es mutant line used here. A single gRNA generated an insertion and/or deletion (indel) in the shmt gene, introducing a frameshift and a stop codon (asterisk). b A diagram of the D-Ser synthesis pathway. c The mRNA level of isoform A shmt in shmt-es was significantly reduced. d Sleep profiles of shmt-es (red) ( n = 57) and wt (black) ( n = 236) flies, plotted in 30 min bins. White background indicates the light phase (ZT 0–12); shaded background indicates the dark phase (ZT 12–24). e Statistical analyses. Daytime and nighttime sleep durations were significantly reduced in shmt-es flies. In this and other figures, open bars denote daytime sleep and filled bars nighttime sleep. f Treatment with either L- or D-Ser rescued the nighttime sleep duration of shmt-es flies to the wt level. The number of flies used in the experiment is denoted under each bar. *** P < 0.001, n.s. P > 0.05; the Mann–Whitney test was used in ( c , e ), two-way ANOVA with Bonferroni posttests was used in ( f ) to compare the sleep durations between wt and shmt-es , and the Kruskal–Wallis test with Dunn’s posttest was used in ( f ) to compare the sleep durations of shmt-es under different drug treatments. Error bars represent s.e.m. Male flies were used. Sleep was measured in wt and shmt-es flies by video recording and analysis. When tested under the 12 h light/12 h dark (LD) condition, the durations of both nighttime sleep and daytime sleep were significantly decreased in shmt-es flies (Fig. 1d, e ). Because L-Ser is the substrate for D-Ser synthesis (Fig. 1b ) 7 , we tested whether the sleep phenotype of shmt mutants was attributable to L- or D-Ser by rescuing shmt mutants with either L-Ser or D-Ser. After eclosion, flies were raised on either sucrose or sucrose supplemented with L-Ser or D-Ser for 3 days before being transferred into recording tubes with the same media. Feeding either L-Ser or D-Ser rescued the sleep defect of shmt-es flies (Fig. 1f ). Thus, the sleep defect of shmt-es flies could be due to the lack of either L- or D-Ser.
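For illustration, the conversion of an activity trace into the sleep measures used throughout can be sketched as follows; this assumes the conventional Drosophila criterion of at least 5 min of immobility, and the paper's video-analysis pipeline may differ in detail:

```python
# Illustrative sketch only: turning a per-minute activity trace into sleep,
# using the conventional fly-sleep criterion (>= 5 min of immobility).
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
active = rng.random(1440) < 0.4            # hypothetical 24 h trace, 1-min bins

immobile = pd.Series((~active).astype(int))
bout_id = (immobile.diff() != 0).cumsum()  # label runs of identical state
bout_len = immobile.groupby(bout_id).transform("size") * immobile
asleep = bout_len >= 5                     # a minute counts as sleep if it sits
                                           # inside an immobility bout of >= 5 min

profile = asleep.groupby(np.arange(1440) // 30).sum()  # sleep min per 30-min bin
print(f"total sleep: {asleep.sum()} min; first bins: {profile.head().tolist()}")
```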
Decreased sleep and increased arousal in sr mutants SR is responsible for D-Ser production in vivo 28 , 29 , 30 . Drosophila SR is encoded by CG8129 (Supplementary Fig. 2 ) 31 . To investigate the function of D-Ser, we generated sr knock-out ( srko ) flies in which most of the coding region of sr was deleted (Fig. 2a ). Under the LD condition, the nighttime sleep duration was significantly reduced in srko flies (Fig. 2b, c ). We also generated four other sr mutants: two deletion mutants, sr-middle and sr-long (Supplementary Fig. 3a ), and two insertion mutants, SRKO-Gal4 and SRKO-Flp , with the coding region replaced by the yeast Gal4 or Flp gene (Supplementary Fig. 3b ). The duration of nighttime sleep but not that of daytime sleep was also reduced in these four mutants (Supplementary Fig. 3c-f ). Because the nighttime sleep duration was decreased in all five sr mutants as well as in the shmt-es mutants, we thereafter focused on the role of D-Ser in nighttime sleep, and not on daytime sleep, which was affected only in the shmt-es mutants and in none of the five sr mutants. Fig. 2 Sleep phenotype of sr mutants. a A schematic representation of the CG8129 gene with the red bar indicating the region deleted in srko mutants. Two transcription variants (NM_141629, NM_169273) generate two proteins of 469aa (NP_649886) and 316aa (NP_731340), respectively. They have an identical C-terminal part, while the longer variant has an additional 153aa at the N-terminal region. In srko , aa 114–469 in the longer form and aa 1–316 in the shorter form were deleted. b Sleep profiles of srko (red) ( n = 42) and wt (black) ( n = 69) flies, plotted in 30 min bins. c Statistical analyses. Nighttime sleep durations were significantly reduced in srko flies. d Drug treatment with D-Ser, but not L-Ser, rescued the nighttime sleep duration of srko flies to that of wt flies fed with mock. The number of flies used in the experiment is denoted under each bar. e Arousal rates of srko and wt flies under light stimuli. The arousal rate of srko flies was significantly increased. Numbers of flies that were aroused by the stimuli (open bars) and that remained asleep (filled bars) are plotted. Light stimuli were applied to wt and mutant flies as indicated. The arousal rate is denoted under each bar. f Drug treatment with D-Ser, but not L-Ser, rescued the arousal rate of srko flies to the wt level. *** P < 0.001, ** P < 0.01, * P < 0.05, n.s. P > 0.05. The Mann–Whitney test was used in ( c ), two-way ANOVA with Bonferroni posttests was used in ( d ) to compare the sleep durations between wt and srko under the same treatment, and the Kruskal–Wallis test with Dunn’s posttest was used in ( d ) for other statistical analyses. Fisher’s exact test was used in ( e , f ). Error bars represent s.e.m. Male flies were used. While sleep duration directly reflects a sleep defect, the stimulus-induced arousal rate reflects sleep intensity. Previous studies have shown that sleep duration and arousal can be regulated separately 32 , 33 . We tested the arousal response to light in wt and srko flies using a method similar to previous studies 34 . On the 4th night after being transferred to the recording tubes, sleeping flies were exposed to 1-s light pulses at zeitgeber time (ZT) 16, and the numbers of flies that were awakened and that remained asleep were counted. The arousal rate was significantly elevated in srko flies under this stimulus (Fig. 2e ).
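A minimal sketch of the Fisher's exact test used for such arousal comparisons, with made-up counts rather than the paper's data:

```python
# Illustrative Fisher's exact test on a 2x2 arousal table (hypothetical
# counts): rows = genotype, columns = (aroused, remained asleep).
from scipy.stats import fisher_exact

table = [[18, 22],   # srko: aroused, remained asleep
         [12, 57]]   # wt
odds_ratio, p = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.3f}")
```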
Latency to sleep was increased in srko flies, whereas circadian rhythm and sleep recovery after sleep deprivation were not significantly different between srko and wt flies (Supplementary Fig. 4 ). Taken together, these results indicate that sr is required for the regulation of nighttime sleep and the stimulus-induced arousal response. Role of D-Ser in sleep and arousal SR is the only known enzyme responsible for D-Ser synthesis in vivo (Fig. 1b ) 35 ; L-Ser and D-Ser can be interconverted by SR. The sleep defect in shmt-es mutant flies is consistent with a role for either D- or L-Ser, whereas the phenotypes in srko mutants suggest that D-Ser is important for sleep. To further distinguish between D- and L-Ser, they were separately applied to srko flies. As discussed earlier, both L-Ser and D-Ser could rescue the sleep defect of shmt-es flies (Fig. 1f ). However, only D-Ser, but not L-Ser, could rescue the sleep defect of srko flies (Fig. 2d ). No significant sleep change was observed in srko flies fed with L-Ser compared to mock. By contrast, the nighttime sleep duration of srko was rescued by D-Ser to the level of wt flies fed with mock. When we examined the arousal response, we found that the arousal rate of srko flies was also rescued to the wt level by D-Ser, but not by L-Ser (Fig. 2f ). These results suggest that D-Ser, but not L-Ser, is important for sleep and arousal. Increased sleep and decreased arousal in daao-dko mutants D-Ser is degraded by DAAOs (Fig. 3a ). There are two genes encoding DAAO in Drosophila : CG12338 and CG11236 36 . To investigate their functional significance, we generated deletion mutants for each gene (Fig. 3b ). Fig. 3 Sleep phenotype of daao mutants. a A diagram of the D-Ser degradation pathway. b Schematic representations of the CG12338 and CG11236 genes. The red bars indicate regions deleted in mutant flies. For CG12338 , two transcription variants (NM_001299363, NM_136759) generate the same protein of 335aa (NP_001286292, NP_610603), and most of the coding region except the first 97 base pairs (bp) was deleted in the CG12338 knock-out ( cg12338ko ) flies. For CG11236 , two transcription variants (NM_135231, NM_001258985) generate two proteins of 341aa (NP_609075) and 338aa (NP_001245914), respectively. The CG11236 knock-out ( cg11236ko ) retained only the first 94aa because of the deletion of bp 284 to 889 (NM_135231) or bp 284 to 880 (NM_001258985) from each transcript, which resulted in frameshifts. c Sleep profiles of cg12338ko (blue) ( n = 42), cg11236ko (magenta) ( n = 46), double knock-out ( daao-dko , red) ( n = 40), and wt (black) ( n = 200) flies, plotted in 30 min bins. d Statistical analyses. Both daytime and nighttime sleep durations were significantly increased in daao-dko flies; nighttime sleep duration was significantly decreased in cg11236ko flies. e The arousal rate was significantly reduced in daao-dko flies. Numbers of flies are plotted for wt (black) and daao-dko (red) flies. *** P < 0.001. The Mann–Whitney test was used in ( d ), and Fisher’s exact test was used in ( e ). Error bars represent s.e.m. Male flies were used. When a single daao gene was interrupted, there was little change in sleep durations: nighttime sleep duration was significantly reduced in cg11236ko flies, while the daytime sleep duration of cg11236ko and the daytime and nighttime sleep durations of cg12338ko were not different from those in the wt (Fig. 3c, d ).
To test whether this was due to functional redundancy between the two daao genes, we generated double knock-out ( daao-dko ) flies with both genes disrupted. We found that both daytime and nighttime sleep durations were significantly increased in daao-dko flies (Fig. 3c, d ), and the arousal rate of daao-dko flies was significantly decreased (Fig. 3e ). The sleep and arousal phenotypes of daao-dko flies, being opposite to those of srko flies, further support the conclusion that D-Ser promotes sleep and inhibits the arousal response in Drosophila . D-Ser regulation of sleep through NMDAR1 D-Ser is a co-agonist of the NMDA receptor (NMDAR) 13 , 14 . There are two NMDAR subunits in Drosophila , but only NMDAR1 contains the D-Ser binding site 37 . Pan-neuronal NMDAR1 knockdown by elav-Gal4 -driven RNA interference (RNAi) reduces sleep in Drosophila 22 , but this approach did not distinguish whether the sleep effect was caused by D-Ser or by other NMDAR1 co-agonists, such as glycine. A recent study found that NMDAR-mediated field excitatory post-synaptic potentials (NMDA-fEPSPs) and D-Ser levels fluctuate with sleep need in mice 24 , further raising the possibility that D-Ser regulates sleep through NMDAR1 in both flies and mammals. To investigate whether NMDAR1 regulates sleep, we generated nmdar1ko flies by replacing the first three coding exons of nmdar1 with 2A-Gal4-STOP right after the start codon (Fig. 4a ). Similar to srko flies, sleep duration was significantly decreased and arousal rate significantly increased in nmdar1ko flies (Fig. 4b–d ). Fig. 4 Regulation of sleep by D-Ser upstream of NMDAR1. a A schematic representation of the nmdar1 gene. The single transcription variant (NM_169059) generates a protein of 997aa (NP_730940). The sequence from aa 2 to 107 was deleted and replaced with T2A-Gal4-STOP-3P3-RFP in nmdar1ko flies. b Sleep profiles of nmdar1ko (red) ( n = 36) and wt (black) ( n = 69) flies, plotted in 30 min bins. c Statistical analyses. Daytime and nighttime sleep durations were significantly reduced in nmdar1ko flies. d Arousal rate of nmdar1ko flies (red) was significantly higher than that of wt (black). Numbers of flies that were aroused (filled bars) and of flies that remained asleep (open bars) are plotted for nmdar1ko and wt flies. e , f Neither L- nor D-Ser affected the sleep duration ( e ) or the arousal rate ( f ) of nmdar1ko flies. Numbers below each bar represent the number of flies tested in ( e ). g , h The sleep phenotype ( g ) and arousal phenotype ( h ) of daao-dko flies were masked by nmdar1ko in triple knockout flies. Nighttime sleep durations were significantly increased in daao-dko (green) ( n = 46) flies, and significantly decreased in nmdar1ko (blue) ( n = 32) and triple knockout (red) ( n = 34) flies, compared to wt (black) ( n = 47) flies ( g ). Arousal rate was significantly decreased in daao-dko (green) flies, and significantly increased in nmdar1ko (blue) and triple knockout (red) flies, compared to wt (black) flies ( h ). *** P < 0.001, ** P < 0.01, * P < 0.05, n.s. P > 0.05. Mann–Whitney test was used in ( c ), two-way ANOVA test with Bonferroni posttests was used in ( e ) to compare the sleep durations between wt and nmdar1ko under the same treatment, Kruskal–Wallis test with Dunn’s posttest was used in ( e ) for other statistical analyses and in ( g ). Fisher’s exact test was used in ( d ), ( f ), and ( h ). Error bars represent s.e.m.
Male flies were used. We carried out two experiments to investigate the relationship between D-Ser and NMDAR1 in sleep and arousal: pharmacological application of L-Ser and D-Ser to nmdar1ko flies, and generation of triple knock-out flies lacking nmdar1 and the two daao genes. Although D-Ser rescued the sleep and arousal defects in shmt-es and srko flies (Figs. 1f and 2d, f ), neither L-Ser nor D-Ser affected the sleep duration or arousal rate of nmdar1ko mutants (Fig. 4e, f ). These results support the idea that D-Ser lies downstream of SHMT and SR but upstream of NMDAR1 in sleep regulation. To test the epistatic relationship between the daao genes and the nmdar1 gene, triple knockout flies carrying all three mutations ( cg12338ko , cg11236ko , and nmdar1ko ) were generated by combining daao-dko and nmdar1ko . Triple knockout flies phenocopied nmdar1ko flies in sleep duration and arousal rate: sleep duration was increased and arousal rate decreased in daao-dko flies, whereas sleep duration was decreased and arousal rate increased in nmdar1ko and triple knockout flies (Fig. 4g, h ). No significant difference was detected between the sleep duration and arousal rate of nmdar1ko and triple knockout flies (Fig. 4g, h ). These results indicate that nmdar1 acts downstream of daao . Thus, both the pharmacological experiment and the genetic epistasis experiment support the conclusion that D-Ser functions through NMDAR1 to regulate sleep in Drosophila . Expression patterns of shmt , sr , daao , and nmdar1 To examine the expression of the genes involved in the synthesis, degradation, and function of D-Ser, we fused Gal4 in-frame to shmt , sr , CG12338 , CG11236 , and nmdar1 , generating the shmt-KIGal4 , SR-KIGal4 , CG12338-KIGal4 , CG11236-KIGal4 , and nmdar1-KIGal4 lines (Supplementary Table 1 ). UAS-mCD8::GFP was then driven by each of these lines to label the membranes of the corresponding expressing cells. We found that all five lines were expressed in the brain, the ventral nerve cord (VNC), and the gut (Fig. 5 , Supplementary Fig. 5 ). shmt-KIGal4 was expressed in glial cells and neurons in the brain and the VNC (Fig. 5a, b ), and in the midgut (Fig. 5c , Supplementary Fig. 5a ). SR-KIGal4 was expressed in four neurons in the subesophageal ganglion (SOG) of the brain (Fig. 5d ), in four pairs of neuronal tracts projecting to the prothoracic, mesothoracic, and metathoracic neuromeres (PN, MN, MtN) and the abdominal center (AC) of the VNC (Fig. 5e ), and in the midgut enterocytes (ECs) (Fig. 5f , Supplementary Fig. 5b ). Fig. 5 Expression patterns of shmt , sr , daao , and nmdar1 . Expression patterns of shmt-KIGal4 ( a – c ), SR-KIGal4 ( d – f ), CG12338-KIGal4 ( g – i ), CG11236-KIGal4 ( j – l ), and nmdar1-KIGal4 ( m – o ) labeled by mCD8::GFP in the brain ( a , d , g , j , m ), the ventral nerve cord (VNC) ( b , e , h , k , n ), and the gut ( c , f , i , l , o ). The tissues were immunostained with anti-GFP and anti-DLG in ( d , e ), with anti-GFP in ( c , f , i , l , o ), and with anti-GFP and nc82 in the other panels. Scale bars are 30 μm. CG12338-KIGal4 was expressed in the mushroom body (MB), the pars intercerebralis (PI), and the SOG of the brain (Fig. 5g ), in the MN, the MtN, and the AC of the VNC (Fig. 5h ), and in the midgut, the Malpighian tubules, and the neurons projecting to the hindgut (Fig. 5i , Supplementary Fig. 5c ). CG11236-KIGal4 was expressed similarly to SR-KIGal4 : in several neuronal tracts projecting to the SOG of the brain (Fig. 5j ),
in four pairs of neuronal tracts projecting to the PN, the MN, the MtN, and the AC of the VNC (Fig. 5k ), and in the midgut ECs (Fig. 5l , Supplementary Fig. 5d ). nmdar1-KIGal4 was expressed broadly in the brain, including the PI, the SOG, the fan-shaped body (FSB), and the superior neuropils (SNP) (Fig. 5m ), in the AC and the afferent neurons of the PN of the VNC (Fig. 5n ), and in neurons projecting to the proventriculus, the midgut regions R1 and R5, and the hindgut (Fig. 5o , Supplementary Fig. 5e ). Intestinal SR in sleep regulation Given that the synthases, the oxidases, and the receptor of D-Ser were all found to be expressed in the central nervous system and the gut, we next sought to identify the tissue in which D-Ser is required to promote sleep by reintroducing UAS-SR into different tissues in the srko background. The nighttime sleep duration of sr mutants was rescued by the reintroduction of UAS-SR into sr -expressing cells (Fig. 6a ) labeled by SRKO-Gal4, in which 2A-Gal4-STOP was fused to the start codon of sr (Supplementary Table 1 ). However, pan-neuronal expression of sr driven by Elav-Gal4 failed to rescue the nighttime sleep duration (Fig. 6b ), suggesting that neuronal expression of sr is not sufficient to promote sleep. Furthermore, we labeled non-neuronal sr -expressing cells by SRKO-Gal4, Elav-Gal80 , in which neuronal, but not intestinal, expression of SRKO-Gal4 was blocked by Elav-Gal80 (Fig. 6g–i ). Reintroduction of sr into non-neuronal sr -expressing cells also rescued the sleep defect of sr mutants (Fig. 6j ), suggesting that neuronal sr is not necessary for sleep promotion. Taken together, these results suggest that neuronal sr is neither sufficient nor necessary for sleep promotion; thus, sr functions elsewhere to regulate sleep. Fig. 6 Requirement of intestinal but not neural SR in sleep regulation. a Reintroduction of sr in sr -expressing cells rescued the nighttime sleep defect of sr mutants. Nighttime sleep durations of srko/SRKO-Gal4 (green), srko/srko,UAS-SR (blue), SRKO-Gal4/srko,UAS-SR (red), and wt (black) flies are plotted. b Reintroduction of sr pan-neuronally failed to rescue the sleep defect of sr mutants. Nighttime sleep durations of Elav-Gal4/Y; srko/srko (green), srko/srko,UAS-SR (blue), Elav-Gal4/Y; srko/srko,UAS-SR (red), and wt (black) flies are plotted. c – e sr -expressing cells and MyoIA -expressing cells overlap in the fly gut. The nuclei of MyoIA -expressing cells were labeled by StingerRed driven by MyoIA-Gal4 ( d ), and the sr -expressing cells were labeled by GFP driven by SRKI-LexA ( c ). f Expression patterns of LexAop-Flp, UAS-FRT-STOP-FRT-GFP, MyoIA-Gal4, SRKI-LexA flies in the gut. Cells co-expressing MyoIA and sr were labeled with GFP. g – i sr -expressing non-neuronal cells were labeled by SRKO-Gal4,Elav-Gal80 ( i ), whereas expression in neural cells was blocked ( g , h ). j Reintroduction of sr driven by SRKO-Gal4,Elav-Gal80 rescued the sleep defect of sr mutants. Nighttime sleep durations of srko/SRKO-Gal4,Elav-Gal80 (green), srko/srko,UAS-SR (blue), SRKO-Gal4,Elav-Gal80/srko,UAS-SR (red), and wt (black) flies are plotted. k – m Expression patterns of MyoIA-Gal4 in the brain ( k ), the VNC ( l ), and the gut ( m ) labeled by mCD8::GFP. n Expression of sr in MyoIA -expressing cells rescued the nighttime sleep defect of sr mutants. Nighttime sleep durations of srko/srko,MyoIA-Gal4 (green), srko/srko,UAS-SR (blue), srko,MyoIA-Gal4/srko,UAS-SR (red), and wt (black) flies are plotted.
o – q MyoIA -expressing non-neuronal cells were labeled by MyoIA-Gal4,Elav-Gal80 ( q ), whereas expression in neural cells was blocked ( o , p ). r Expression of sr driven by MyoIA-Gal4,Elav-Gal80 rescued the sleep defect of sr mutants. Nighttime sleep durations of srko/srko,MyoIA-Gal4,Elav-Gal80 (green), srko/srko,UAS-SR (blue), srko,MyoIA-Gal4,Elav-Gal80/srko,UAS-SR (red), and wt (black) flies are plotted. The tissues were immunostained with anti-GFP in ( c – f ), ( i , m ), and ( q ), and with anti-GFP and nc82 in ( g , h , k , l , o , p ). Scale bars are 500 μm in ( i , m , q ) and 30 μm in other panels. Numbers below each bar represent the number of flies tested. Kruskal–Wallis test with Dunn’s posttest, *** P < 0.001, n.s. P > 0.05. Error bars represent s.e.m. Male flies were used. Because sr was expressed in the midgut ECs (Fig. 5f , Supplementary Fig. 5b ), we used MyoIA-Gal4 , which is known to drive expression in midgut ECs 38 , to test the role of intestinal SR in sleep regulation. When mCD8::GFP was driven by MyoIA-Gal4 to label MyoIA -expressing cells, MyoIA-Gal4 was expressed in the midgut ECs (Fig. 6m ) as well as in neurons in the brain and the VNC (Fig. 6k, l ), whereas no expression of MyoIA-Gal4 was found in the genitalia or the internal surface of the abdominal cuticle, which is covered by the fat body and the oenocytes (Supplementary Fig. 6 ). Colocalization of MyoIA and sr was detected in the gut by simultaneously labeling sr -expressing cells with GFP and the nuclei of MyoIA -expressing cells with StingerRed (Fig. 6c–e ). We also intersected MyoIA and sr by expressing UAS-FRT-STOP-FRT-GFP in MyoIA -expressing cells and LexAop-Flp in sr -expressing cells. Thus, in cells co-expressing MyoIA and sr , the stop cassette between the UAS and GFP was removed by the Flp recombinase, labeling the cells with GFP (Fig. 6f ). Expression of sr in MyoIA -expressing cells rescued the sleep defect of sr mutants (Fig. 6n ). We also used Elav-Gal80 to block the neuronal expression of MyoIA-Gal4 (Fig. 6o–q ), and the nighttime sleep duration of sr mutants was rescued by expression of sr in non-neuronal MyoIA -expressing cells (Fig. 6r ). Moreover, sleep duration was significantly decreased and sleep latency significantly increased when sr was knocked down with RNAi specifically in the gut (Supplementary Fig. 7a, d ) or when daao cDNA driven by MyoIA-Gal4, Elav-Gal80 was overexpressed specifically in the gut (Supplementary Fig. 7b, c, e, f ). Taken together, these results support the conclusion that SR expressed in intestinal cells is important for regulating sleep in Drosophila . Discussion Our findings have revealed both a novel function for D-Ser and a novel role for intestinal cells. Results from mutations of five genes (two required for D-Ser synthesis, two for D-Ser degradation, and one encoding the D-Ser receptor subunit), taken together with those from the pharmacological application of L- and D-Ser, support the conclusion that D-Ser plays an important role through NMDAR1 in regulating sleep in Drosophila . Furthermore, results from genetic rescue experiments with neuronal and intestinal drivers indicate that intestinal SR regulates sleep. The evidence for D-Ser function in sleep is strong. Phenotypic analysis indicates that D-Ser is important for nighttime sleep and arousal. The nighttime sleep and arousal phenotypes of shmt and sr mutants are opposite to those of daao-dko mutants, consistent with roles of D-Ser in increasing sleep and decreasing arousal.
Further support was provided by the finding that D-Ser could rescue the sleep and arousal phenotypes in shmt and sr mutants, whereas L-Ser was unable to rescue the sleep and arousal phenotypes in sr mutants. In Drosophila , while the functional significance of D-Ser in sleep is clear, a role for L-Ser appears unlikely but cannot be completely ruled out at this point. While NMDAR1 can affect the circadian rhythm in mice 39 , 40 , it is surprising for a well-known excitatory receptor to promote sleep. Our results from nmdar1 knockout flies and sr knockout flies provide the strongest in vivo evidence for an essential role of NMDAR1 in promoting sleep. These results are consistent with, but cleaner than, the previous RNAi results in flies 22 . NMDAR1 has recently been implicated in regulating sleep in flies 22 , 23 : pan-neuronal knockdown of NMDAR1 or NMDAR2 through RNAi, or feeding flies the NMDAR antagonist MK801, reduced sleep duration 22 . So far, regional RNAi has failed to reveal specific regions where NMDAR1 regulates sleep 22 . NMDAR1 expression has been detected in the R2 ring of the ellipsoid body (EB), which is important for sleep homeostasis 23 , though it remains unknown whether NMDAR1 in the R2 ring regulates sleep. Roles for D-Ser in regulating mammalian sleep and arousal remain to be investigated. The saturation level of the glycine binding site in NMDAR1 correlates with sleep need in mice 24 . Total serine level increased to ~487% during slow wave sleep (SWS) in the ventrolateral posterior nuclei (VLPN) of the cat thalamus 41 . D-Ser reduces the sedative response induced by alcohol in flies and rodents 31 , 42 . A report that daao ablation promotes the sedative response in mice in a novel environment 43 was refuted by further analysis 44 . Because D-Ser was increased in only some, but not all, brain regions after the elimination of a single daao 45 , the function of D-Ser in mammalian sleep remains unclear. Both glial and neuronal distribution of D-Ser and SR have been reported in mammals 35 , 46 . Early studies detected D-Ser in glia 10 , 47 , 48 . D-Ser has been considered a major gliotransmitter 49 , 50 , 51 , and its release is triggered by non-NMDAR glutamate receptors 48 , 49 . However, SR and D-Ser were also found in neurons 47 , 52 , 53 . Both neuronal and glial release of D-Ser have been detected 54 . Studies using conditional SR knockout mice suggested that the majority of SR (65%) and of extracellular D-Ser is of neuronal origin 55 . Our present study in Drosophila using SR-KIGal4 identified only neuronal but no glial expression of SR. A striking finding here is that SR is not only expressed in the nervous system but is also expressed, and functions, in intestinal cells to regulate sleep in Drosophila . Through region-specific rescue, knock-down, and overexpression studies (Fig. 6 , Supplementary Fig. 6 ), we found that the expression of SR in cells labeled by MyoIA-Gal4, Elav-Gal80 is essential for sleep regulation. We examined the expression patterns only in the CNS, the gut, the genitalia, the fat body, and the oenocytes; therefore, although it is most likely that SR in intestinal cells functions to regulate sleep, functions in other organs or cells cannot be completely ruled out. This is the first time that a gene has been found to function in the intestines to regulate sleep in any animal species. Why and how intestinal SR regulates sleep remains elusive.
Sleep disorders have been found to be associated with gastrointestinal and metabolic pathology in humans 56 and animals 57 . The intestine is a tissue made up of a large variety of cells 58 that can both sense the environment and communicate with the central nervous system. In mammals and flies, crosstalk between enteroendocrine cells and neurons through neuropeptide signaling has been shown to regulate processes such as energy homeostasis and development 59 , 60 , and the gut microbiome has been implicated in the regulation of behaviors such as locomotion and anxiety 61 , 62 , 63 . We have now demonstrated an essential role for an endogenous gene in the intestine in sleep regulation. How D-Ser produced in the intestine functions through the NMDAR to regulate sleep, and whether other cells, such as glia, participate in the circuit, requires further study. Our work should stimulate further investigations of whether sr or other genes function in the gastrointestinal system to regulate sleep or other neuronal functions. Methods Fly lines and rearing conditions All fly stocks were reared on standard cornmeal food at 25 °C and 50% humidity on a 12:12 LD schedule unless otherwise noted. Elav-Gal4 , Elav-Gal80 , UAS-mCD8::GFP , UAS-StingerRed , and LexAop-GFP were from the Bloomington Stock Center. MyoIA-Gal4 was generously provided by R. Xi (National Institute of Biological Science, Beijing). UAS-srRNAi (v110407) and UAS-Dicer (v60009) were from the Vienna Drosophila RNAi Center. Generation of transgenic, knockout, and knockin flies We generated UAS-SR , UAS-CG12338 , and UAS-CG11236 flies by inserting the coding sequences of CG8129-RB , CG12338-RA , and CG11236-RA , respectively, into the PACU2 vector from the Jan lab at UCSF 64 , and then inserting the constructs into the attP2 site. The coding sequences were amplified from first-strand cDNA made with the PrimeScript II 1st Strand cDNA Synthesis Kit (Takara, 6210A) from total RNA of wt flies isolated with TRIzol reagent (Invitrogen). We generated fly mutants with CRISPR/Cas9. A pSP6-2sNLS-spcas9 plasmid and a pMD19-T gRNA scaffold vector were obtained from Dr. R. Jiao 65 . After the sequence of a single strand guide RNA (sgRNA) was designed, the corresponding DNA template was amplified from the pMD19-T gRNA scaffold vector and then transcribed in vitro (Promega, P1320) to obtain the sgRNA. The pSP6-2sNLS-spcas9 vector was linearized with the restriction enzyme XbaI (New England BioLabs, USA, R0145), transcribed in vitro (Ambion, USA, AM1340), and polyadenylated (New England BioLabs, USA, M0276) to obtain spCas9 mRNA. To generate the shmt-es mutant, we injected one sgRNA and spCas9 mRNA into Canton-S (CS) embryos to generate indels induced by site-directed cleavage in CG3011 . F2 indel flies were identified by PCR and confirmed by sequencing. Mutants with a premature stop codon in the coding region of CG3011 were selected for further studies. To generate srko , cg12338ko , and cg11236ko flies, we injected two sgRNAs and spCas9 mRNA into CS embryos. The target region between the two sgRNAs was deleted. F2 knock-out flies were identified by PCR and sequencing. To obtain Gal4/LexA lines, we injected two sgRNAs, spCas9 mRNA, and a donor plasmid into CS embryos. The Gal4/LexA cassette in the donor plasmid was integrated into specific sites of the genome through homologous recombination; sgRNA-directed cleavage by spCas9 was used to improve the efficiency of homologous recombination.
To construct the donor plasmid, two homologous arms were amplified by PCR from the fly genome using restriction enzyme-tailed PCR primers, and the products were inserted into appropriate restriction sites in the pBlueScriptII vector. The T2A-Gal4/LexA-loxP-3P3-RFP-loxP sequence (constructed by Bowen Deng in the lab) was inserted between the two homologous arms. T2A-Gal4/LexA was kept in-frame with the 5′ homologous arm. 3P3-RFP was used as a selection marker after injection. F2 flies with RFP observed in the eyes were selected and confirmed by PCR and sequencing (Supplementary Table 1 ). Sequences of the primers used for identification of the fly lines are presented in Supplementary Table 2 . Behavioral assays Sleep analysis was performed with a video-based recording system. Flies aged 5–8 days were placed in 65 mm × 5 mm tubes containing fly food. Infrared LED lights were used to provide constant illumination, and videos were recorded by a camera at 704 × 576 resolution. Videos were taken at 5 frames/s, and 1 frame/s was extracted for fly tracking. The position of each fly was tracked by a program based on OpenCV. Briefly, flies were extracted by subtracting the background, which was updated for each frame to compensate for environmental light shifts. The center of each fly was calculated, and speed was defined as the displacement of the center from the previous frame to the current frame. Sleep was defined as a bout of inactivity lasting more than 5 min, and sleep latency was defined as the time in minutes from the moment the light was turned off to the onset of sleep 16 , 17 . Arousal response was measured at ZT16 (4 h after lights off) under 1 s light stimulation (100–200 lux) 34 . The percentage of sleeping flies that were aroused by the light stimulus was calculated as the arousal rate. Sleep deprivation was performed by placing a silica gel holder with recording tubes horizontally into a holding box. The box was rotated clockwise or counterclockwise and bumped against plastic stoppers under the control of a servo motor. The box was rotated continuously nine times during each episode, and the setup was activated every 3 min for 12 h during the night. Sleep was completely deprived, as confirmed by DAM recording during deprivation 66 . Because sleep homeostasis is more significant in females than in males 67 , we used females to examine sleep homeostasis. For analysis of circadian rhythm, flies were treated and recorded under the same conditions as in the sleep assay, except that the experiments were performed in DD. Activity was measured for 5–8 days and calculated in ActogramJ 68 . The period length was calculated by the Chi-square method. Drug treatment L-Ser (S0035) and D-Ser (S0033) were from Tokyo Chemical Industry and were mixed into the 5% sucrose and 2% agar medium. Flies were maintained on food containing 2.9 g/L L-Ser or D-Ser (treatment groups) or no Ser (control group) for 3 days after eclosion before being transferred to recording tubes containing the same medium. Immunohistochemistry and confocal imaging Flies were anesthetized and dissected in cold phosphate-buffered saline (PBS).
Brains were fixed in 4% paraformaldehyde (weight/volume) for 1 h at room temperature (RT), washed in PBST (PBS containing 0.2% Triton X-100, vol/vol) for 10 min three times, blocked in PBSTS (PBS containing 2% Triton X-100, 10% normal goat serum, vol/vol) for 12 h at 4 °C, incubated with primary antibodies in the dilution buffer (PBS containing 0.25% Triton X-100, 1% normal goat serum, vol/vol) for 12 h at 4 °C, and washed with the washing buffer (PBS containing 1% Triton X-100, vol/vol, and 3% NaCl, weight/vol) for 10 min three times. Brains were then incubated with secondary antibodies in the dilution buffer for 12 h at 4 °C in darkness and washed three times in the washing buffer for 10 min each. When a third antibody was used, brains were further incubated with it at 4 °C overnight and washed. Finally, brains were mounted in Focusclear (Cell Explorer Labs, FC-101) and imaged on a Zeiss LSM710 confocal microscope. The intestines, after dissection, were fixed with 4% paraformaldehyde for 0.5 h, treated for 30 s in heptane and methanol (1:1, vol/vol), washed with 100% methanol for 5 min twice, washed with PBST for 10 min three times, and immunostained for GFP as described in the previous paragraph. Images were processed by Zeiss software and Imaris (Bitplane) for 3D reconstruction. Chicken anti-GFP (1:1000) (Abcam Cat# 13970; RRID:AB_300798) and mouse anti-Bruchpilot (1:40) (DSHB Cat# 2314866, nc82; RRID: AB_2314866) were used as primary antibodies, with AlexaFluor488 anti-chicken (1:500) (Life Technologies Cat# A11039; RRID:AB_2534096) and AlexaFluor633 anti-mouse (1:500) (Life Technologies Cat# A21052; RRID: AB_141459) being used as the respective secondary antibodies. Mouse 4F3 anti-DLG (1:50) (DSHB Cat# 4F3 anti-discs large; RRID: AB_528203) was used as a primary antibody, with biotin-conjugated goat anti-mouse (1:200) (Invitrogen Cat# B2763; RRID: AB_2536430) being the secondary antibody and AlexaFluor635 streptavidin (1:500) (Invitrogen Cat# S32364) being the third antibody. Quantitative PCR Total RNA was extracted from ~60 flies aged 5–8 days using TRIzol reagent (Invitrogen) and then reverse transcribed with the PrimeScript RT Master Mix kit (Takara, RR036A). Quantitative PCR analysis was then performed using the TransStart Top Green qPCR SuperMix kit (TransGen, AQ131-03) on the Applied Biosystems 7900HT Fast Real-Time PCR System. The sequences of primers used to detect shmt and actin42a (endogenous control) RNA are as follows:
shmt-F: 5′-CAGCCGTTTACAAAGACATGCA-3′
shmt-R: 5′-GAATGGCGTTGGTGATGGTT-3′
act42a-F: 5′-CTCCTACATATTTCCATAAAAGATCCAA-3′
act42a-R: 5′-GCCGACAATAGAAGGAAAAACTG-3′
Statistics All statistical analyses were carried out with Prism 5 (GraphPad Software). Fisher’s exact test was used to compare arousal rates. The Mann–Whitney test was used to compare two columns of data. The Kruskal–Wallis test followed by Dunn’s posttest was used to compare multiple columns of data. Two-way ANOVA followed by Bonferroni posttests was used to compare drug rescue effects. An additional Mann–Whitney test was used to compare mutants with wt flies under different treatments. Statistical significance is denoted by asterisks: * P < 0.05, ** P < 0.01, *** P < 0.001. Data availability The data that support the findings of this study are available upon reasonable request. Code availability The code used for fly tracking was written in C++, and further analyses were conducted in Matlab. All code is available upon reasonable request.
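To make the sleep-scoring conventions above concrete, the following Python sketch re-expresses the analysis logic of the Behavioral assays section: per-second inactivity is grouped into bouts, bouts longer than 5 min count as sleep, latency is the time from lights-off to the first sleep bout, and genotypes are compared with the Mann–Whitney test. This is an illustrative re-implementation, not the authors’ C++/Matlab pipeline, and the simulated activity traces are placeholders:

```python
# Illustrative re-implementation of the sleep scoring described above;
# the original pipeline used C++ (tracking) and Matlab (analysis).
import numpy as np
from scipy.stats import mannwhitneyu

def sleep_bouts(active, min_bout_s=300):
    """Return (start_s, duration_s) of inactivity bouts longer than min_bout_s.

    `active` is a per-second boolean array (True = fly moved that second),
    assumed to start at lights-off.
    """
    quiet = ~np.asarray(active, dtype=bool)
    padded = np.concatenate(([False], quiet, [False])).astype(int)
    edges = np.flatnonzero(np.diff(padded))   # boundaries of inactivity runs
    starts, ends = edges[::2], edges[1::2]
    keep = (ends - starts) > min_bout_s       # sleep = inactivity > 5 min
    return list(zip(starts[keep], (ends - starts)[keep]))

def night_sleep_min(active):
    return sum(d for _, d in sleep_bouts(active)) / 60.0

def sleep_latency_min(active):
    bouts = sleep_bouts(active)
    return bouts[0][0] / 60.0 if bouts else float("nan")

# Hypothetical per-fly traces for a 12-h night (placeholders, not data):
rng = np.random.default_rng(0)
wt = [night_sleep_min(rng.random(12 * 3600) < 0.002) for _ in range(20)]
ko = [night_sleep_min(rng.random(12 * 3600) < 0.004) for _ in range(20)]
print(mannwhitneyu(wt, ko))  # two-group comparison, as in Statistics
```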
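Similarly, the Quantitative PCR section names actin42a as the endogenous control but does not state the quantification formula; the standard 2^-ΔΔCt method is assumed in the sketch below, with hypothetical Ct values for illustration only:

```python
# Assumed relative quantification (2^-ddCt) of shmt against the
# actin42a endogenous control; Ct values here are hypothetical.
def relative_expression(ct_target, ct_control, ct_target_ref, ct_control_ref):
    """Fold change of the target gene versus a reference sample."""
    d_ct_sample = ct_target - ct_control            # normalize to actin42a
    d_ct_reference = ct_target_ref - ct_control_ref
    return 2.0 ** -(d_ct_sample - d_ct_reference)

# e.g., shmt in mutant flies relative to wt:
fold = relative_expression(ct_target=26.5, ct_control=18.0,
                           ct_target_ref=24.0, ct_control_ref=18.1)
print(f"shmt expression (mutant vs. wt): {fold:.2f}-fold")
```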
A team of researchers affiliated with several institutions in China has found that an amino acid made in fruit fly intestines plays a key role in regulating their sleep. In their paper published in the journal Nature Communications, the group describes their study of D-serine in Drosophila melanogaster and what they found. Scientists have known about D-serine for many years, but thought that it existed only in bacteria. Recently, however, researchers found that humans also produce the amino acid, as do fruit flies. But until now, it was not known what function it served. In this new effort, the researchers found that, at least in fruit flies, it helps regulate sleep. To learn more about the amino acid, the researchers edited the genes of fly specimens to halt its production and found that doing so resulted in the flies sleeping only half as much as normal flies. But they also found something else. Fruit flies actually produce D-serine in two places: in their intestines and in their brains. Logic would suggest that the amino acid produced in the brain would be the one associated with sleep, but the researchers found that the opposite was true. When they turned off the gene controlling production of the enzyme serine racemase, which synthesizes D-serine, in the intestines, the flies slept less; but when they did the same in the brain, they saw no change in sleep habits. The researchers report that they do not yet know how an amino acid produced in the intestines can affect sleep patterns, noting that sleep regulation is probably carried out by the central nervous system. Prior research has shown that sleep is a very old evolutionary development, which suggests its control is likely similar across species. They suggest that more research is needed to answer other questions surrounding D-serine: for instance, is it produced in other parts of the body? Does it play a role in regulating sleep in humans, and if so, how?
10.1038/s41467-019-09544-9
Medicine
Study identifies critical regulator of tumor-specific T cell differentiation
Andrew C. Scott et al. TOX is a critical regulator of tumour-specific T cell differentiation, Nature (2019). DOI: 10.1038/s41586-019-1324-y Journal information: Nature
http://dx.doi.org/10.1038/s41586-019-1324-y
https://medicalxpress.com/news/2019-06-critical-tumor-specific-cell-differentiation.html
Abstract Tumour-specific CD8 T cell dysfunction is a differentiation state that is distinct from the functional effector or memory T cell states 1 , 2 , 3 , 4 , 5 , 6 . Here we identify the nuclear factor TOX as a crucial regulator of the differentiation of tumour-specific T (TST) cells. We show that TOX is highly expressed in dysfunctional TST cells from tumours and in exhausted T cells during chronic viral infection. Expression of TOX is driven by chronic T cell receptor stimulation and NFAT activation. Ectopic expression of TOX in effector T cells in vitro induced a transcriptional program associated with T cell exhaustion. Conversely, deletion of Tox in TST cells in tumours abrogated the exhaustion program: Tox -deleted TST cells did not upregulate genes for inhibitory receptors (such as Pdcd1 , Entpd1 , Havcr2 , Cd244 and Tigit ), the chromatin of which remained largely inaccessible, and retained high expression of transcription factors such as TCF-1. Despite their normal, ‘non-exhausted’ immunophenotype, Tox -deleted TST cells remained dysfunctional, which suggests that the regulation of expression of inhibitory receptors is uncoupled from the loss of effector function. Notably, although Tox -deleted CD8 T cells differentiated normally to effector and memory states in response to acute infection, Tox -deleted TST cells failed to persist in tumours. We hypothesize that the TOX-induced exhaustion program serves to prevent the overstimulation of T cells and activation-induced cell death in settings of chronic antigen stimulation such as cancer. Main Using an inducible model of autochthonous liver cancer in which SV40 large T antigen (TAG) is the oncogenic driver and tumour-specific antigen 7 (Fig. 1a and Extended Data Fig. 1a ), we recently showed that CD8 + T cells expressing a restricted T cell receptor (TCR) specific for TAG (hereafter referred to as TCR TAG cells) differentiate to an epigenetically encoded dysfunctional state, exhibiting hallmarks of TST cell dysfunction including the expression of inhibitory receptors and loss of effector cytokines 3 , 5 . Numerous transcription factors were dysregulated in dysfunctional TCR TAG cells (such as NFAT, TCF-1, LEF1, IRF4 and BLIMP1) compared with functional effector or memory TCR TAG cells generated during acute infection with Listeria (using a recombinant Listeria monocytogenes strain that expressed TAG epitope I ( Lm TAG)) 5 . However, many of these transcription factors are also crucial for the development of normal effector and memory T cells 8 ; thus, we set out to identify transcription factors that were specifically expressed in dysfunctional TCR TAG cells. We analysed our RNA sequencing (RNA-seq) data 5 and found that the gene encoding the nuclear factor TOX was highly expressed in dysfunctional TCR TAG cells, but low in functional naive, effector and memory TCR TAG cells (Fig. 1b ). TOX is a nuclear DNA-binding factor and a member of the high-mobility group box superfamily that is thought to bind DNA in a sequence-independent but structure-dependent manner 9 . Although TOX is required during thymic development of CD4 + T lineage cells, natural killer and innate lymphoid cells 10 , 11 , 12 , and in regulating CD8 T cell-mediated autoimmunity 13 , its role in tumour-induced T cell dysfunction is unknown.
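The screen described above, which flags genes high in dysfunctional TCR TAG cells but low in functional naive, effector and memory cells, can be illustrated with a short sketch. This is not the authors’ pipeline; the RPKM values and thresholds below are hypothetical and chosen only to show the shape of the filter:

```python
# Hypothetical illustration of the expression filter; values and
# thresholds are invented, not taken from the paper's RNA-seq data.
import pandas as pd

rpkm = pd.DataFrame(
    {"naive": [1.0, 50.0], "effector": [2.0, 80.0],
     "memory": [1.5, 60.0], "dysfunctional": [40.0, 70.0]},
    index=["Tox", "UbiquitousGene"],
)

# Keep genes expressed in dysfunctional cells and at least 5-fold above
# their maximum across the functional states.
functional_max = rpkm[["naive", "effector", "memory"]].max(axis=1)
hits = rpkm[(rpkm["dysfunctional"] > 10.0)
            & (rpkm["dysfunctional"] > 5.0 * functional_max)]
print(hits.index.tolist())  # -> ['Tox']
```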
To assess TOX expression during CD8 T cell differentiation in acute infection and tumorigenesis, congenically marked naive TCR TAG cells were transferred into (i) wild-type C57BL/6 (B6) mice immunized with Lm TAG, or (ii) tamoxifen-inducible liver cancer mice (AST×Cre-ER T2 ; AST denotes albumin-floxStop-SV40 large T antigen) treated with tamoxifen (Fig. 1a and Extended Data Fig. 1a, b ). TOX was expressed at low levels early after Listeria infection but declined to baseline levels (by day 5 after infection) and remained low in memory T cells (Fig. 1c and Extended Data Figs. 1c , 2 ). By contrast, during tumour progression, TOX expression increased in TCR TAG cells and remained high (Fig. 1c and Extended Data Figs. 1c , 2 ). High expression of TOX correlated with high expression of several inhibitory receptors and low expression of TCF-1 (Fig. 1d and Extended Data Figs. 1d , 2b, c ). Moreover, TOX-expressing TCR TAG cells failed to produce the effector cytokines IFNγ and TNF after stimulation ex vivo with cognate peptide or phorbol myristate acetate (PMA) and ionomycin (Fig. 1e and Extended Data Fig. 1e–g ). Fig. 1: TOX is highly expressed in tumour-infiltrating CD8 T cells of mouse and human tumours. a , Experimental scheme for acute infection (green) and tumorigenesis (red). E 3 and E 7 , effector cells isolated 3 and 7 days after immunization, respectively; M, memory cells; T 7 and T 14–60 , T cells isolated from liver tumours at 7 and 14–60 days after transfer. b , Reads per kilobase of transcript per million mapped reads (RPKM) values of Tox . n = 3 (naive (N), memory); n = 6 (E 5–7 ); n = 14 (T 14–60 ) TCR TAG cells isolated from liver tumour lesions of AST×Cre-ER T2 mice at 14, 21, 28, 35 and more than 60 days after transfer 5 . c , Expression levels of TOX protein in TCR TAG cells during Listeria infection (green) or tumorigenesis (red), assessed by flow cytometry at indicated time points with n = 2–3 mice. MFI, mean fluorescence intensity; Tam, tamoxifen. d , Expression of TOX, TCF-1 and PD-1 in TCR TAG cells isolated from liver tumour lesions 35 days after transfer (T 35 ; red, n = 5); memory TCR TAG cells are shown as control (M; green). e , IFNγ and TNF production of memory TCR TAG cells (M; green, n = 2) and liver tumour-infiltrating TCR TAG cells (T; red, n = 3). Data are representative of more than five independent experiments. f – h , TOX expression in human tumour-infiltrating CD8 + T cells isolated from patients with melanoma ( n = 4) ( f ), breast cancer ( n = 4) ( g ), and lung cancer ( n = 6) ( h ). Each symbol represents an individual mouse (for b – e ) or individual patient (for f – h ). Data are mean ± s.e.m. * P ≤ 0.05, ** P ≤ 0.01, *** P ≤ 0.001, two-sided Student’s t -test. Persistent antigen encounter or TCR stimulation drives expression of inhibitory receptors and T cell exhaustion during chronic infections 14 and in tumours 3 , 15 . Therefore, we analysed the expression of TOX and inhibitory receptors in GP33 virus-specific CD8 T (TCR P14 ) cells during acute infection with lymphocytic choriomeningitis virus (LCMV) Armstrong and chronic infection with LCMV clone 13 (Extended Data Fig. 2 ). TOX was transiently expressed early during acute infection with LCMV Armstrong but declined to baseline by day 5 after infection. In chronic infection with LCMV clone 13, TOX expression progressively increased in TCR P14 cells, remained increased, and correlated with high expression of several inhibitory receptors (Extended Data Fig. 2 ).
We confirmed TOX expression in the mouse B16F10 (B16) melanoma model. B16 tumours overexpress two melanoma-associated proteins, TRP2 and PMEL, which are recognized by TRP2-specific (TCR TRP2 ) and PMEL-specific (TCR PMEL ) CD8 T cells, respectively 16 , 17 . Naive transgenic TCR TRP2 or TCR PMEL cells were adoptively transferred into B16 tumour-bearing mice, and again we found that dysfunctional, tumour-infiltrating TCR TRP2 and TCR PMEL cells expressed high levels of TOX and inhibitory receptors, and low levels of TCF-1 (Extended Data Fig. 3a–c ). Thus, persistent upregulation of TOX in T cells is induced in settings of chronic antigen stimulation such as chronic infection and cancer. Next, we examined the expression of TOX in human CD8 + tumour-infiltrating lymphocytes (TILs) and peripheral blood mononuclear cells (PBMCs) from patients with melanoma, breast, lung and ovarian cancer (Fig. 1f–h and Extended Data Fig. 3d–g ). CD45RO + PD-1 hi CD39 hi CD8 + TILs expressed high levels of TOX compared with CD45RO + PD-1 lo CD39 lo or CD45RA + TILs in the same tumour or CD45RO + PD-1 hi PBMCs from the same patient. PD-1 hi TILs expressed higher levels of TOX, CD39, TIM-3 and LAG-3 than PD-1 lo TILs from the same tumour (Extended Data Fig. 3g ). Thus, TOX is highly expressed in subsets of human TILs, and TOX expression in TILs correlates with other characterized markers of T cell exhaustion. To determine the role of tumour antigen stimulation versus the tumour immunosuppressive microenvironment in TOX induction, we co-transferred equal numbers of naive tumour-specific TCR TAG (Thy1.1) cells and non-tumour-specific TCR OT1 (Ly5.1) cells, which express a K b -restricted TCR specific for ovalbumin (OVA), into liver tumour-bearing AST×Alb-Cre (AST mice crossed with Alb-Cre mice) or wild-type B6 control mice (Fig. 2a ). One day later, recipient AST×Alb-Cre and B6 mice were immunized with Listeria co-expressing the TAG epitope I and OVA epitopes; TCR TAG and TCR OT1 cells expanded equally well and expressed similar levels of activation and proliferation markers CD44 and Ki67 (Extended Data Fig. 4a ). In B6 hosts, neither TCR TAG nor TCR OT1 cells upregulated TOX or inhibitory receptors, and both differentiated into functional memory T cells (Fig. 2b, c ). In tumour-bearing AST×Alb-Cre mice, TCR TAG cells upregulated TOX, PD-1, LAG-3, 2B4, CD38, CD39, TIM-3 and CD69, lost expression of TCF-1, and lost the ability to produce IFNγ and TNF or express CD107. By contrast, bystander TCR OT1 cells from the same liver tumours did not upregulate TOX or inhibitory receptors and remained functional (Fig. 2b, c and Extended Data Fig. 4a ). This finding is consistent with recent single-cell RNA-seq studies that describe distinct CD8 T cell populations in human tumours, including dysfunctional, tumour-reactive TOX hi T cells, and bystander cytotoxic T cells that are TOX low and lack hallmarks of chronic antigen stimulation 18 , 19 . Fig. 2: Chronic TCR stimulation drives TOX expression in tumour-specific CD8 T cells. a , Experimental scheme of TCR TAG (TAG) and TCR OT1 (OT1) T cell co-transfer. b , Top, expression profiles of TAG (red) and OT1 (black) isolated from the spleens of B6 mice (top; n = 6 (OT1), n = 4 (TAG)) or the livers of AST×Alb-Cre mice (bottom; n = 8 (OT1), n = 8 (TAG)), 3–4 weeks after transfer and immunization. Bottom, MFI values of TOX expression relative to naive T cells. Each symbol represents an individual mouse. Data are representative of three independent experiments.
c , Intracellular IFNγ and TNF production of TAG and OT1 isolated 3–4 weeks after transfer and immunization from spleens of B6 mice (left) or liver tumour lesions of AST×Cre mice (right). Data are representative of three independent experiments. d , MA plot of the RNA-seq dataset. Significant DEGs are shown in red. e , ATAC-seq signal profiles across the Tox and Tcf7 loci. Peaks uniquely lost or gained in TAG compared with OT1 are highlighted in red. Data are mean ± s.e.m. *** P ≤ 0.001, two-sided Student’s t -test. NS, not significant. RNA-seq and assay for transposase-accessible chromatin using sequencing (ATAC-seq) analyses of liver tumour-infiltrating TCR TAG and TCR OT1 cells revealed 2,347 differentially expressed genes (DEGs) and 19,071 differentially accessible peaks, including in Tox , Tcf7 and numerous inhibitory receptor-encoding genes (Fig. 2d , Extended Data Fig. 4b and Supplementary Table 1 ). Gene set enrichment analyses (GSEA) of the DEGs between TCR TAG and TCR OT1 cells revealed enrichment for gene sets of (i) T cell exhaustion during chronic viral infection 20 , and (ii) gene programs induced by a mutant, constitutively active form of NFAT1 in T cells resulting in anergy or exhaustion 21 (Extended Data Fig. 4c ). ATAC-seq revealed that DEGs had accompanying changes in chromatin accessibility: Tox , Pdcd1 (encoding PD-1), Entpd1 , Cd38 and Cd244 loci were more accessible in TCR TAG cells than in TCR OT1 cells, whereas the Tcf7 locus was less accessible (Fig. 2e , Extended Data Fig. 4d–f and Supplementary Table 2 ). Chromatin accessibility analysis of TILs from patients with melanoma and lung cancer 5 showed that PD-1 hi TILs uniquely gained several peaks of open chromatin in TOX and lost multiple peaks in TCF7 when compared with human naive CD45RA + CD8 + PBMCs, or central memory CD45RA − CD45RO + CD62L hi CD8 + PBMCs from healthy donors 5 (Extended Data Fig. 5a ). NFAT is a crucial regulator of T cell exhaustion and dysfunction 22 , and NFAT1-binding sites in genes encoding negative regulators and inhibitory receptors have increased chromatin accessibility in dysfunctional and exhausted T cells 4 , 5 , 21 , 23 , 24 . Thus, we compared published NFAT1 chromatin immunoprecipitation with high-throughput sequencing (ChIP–seq) data 21 with our published 5 and newly generated ATAC-seq datasets (Fig. 2 ) and found evidence that NFAT1 bound to regions within the Tox locus with significantly increased chromatin accessibility in dysfunctional TCR TAG cells (Extended Data Fig. 5b ). To inhibit NFAT, we treated AST×Cre-ER T2 mice adoptively transferred with TCR TAG cells with the calcineurin inhibitor FK506 as previously described 5 , 25 , 26 . We found that TCR TAG cells from FK506-treated mice had decreased expression of TOX and PD-1, and increased levels of TCF-1 (Extended Data Fig. 5c ), suggesting that NFAT regulates TOX expression. To determine whether ectopic expression of TOX in effector CD8 T cells in vitro was sufficient to induce exhaustion in the absence of chronic antigen and TCR stimulation, we transduced effector TCR TAG cells generated in vitro with retroviral vectors encoding full-length TOX fused to green fluorescent protein (GFP) or GFP alone (Fig. 3a ). After transduction, effector TCR TAG cells were cultured for 6 days with IL-2 (without any additional TCR stimulation) and sorted for GFP expression (Extended Data Fig. 6a ). RNA-seq analysis revealed 849 DEGs between TOX–GFP + and GFP + T cells (Fig. 3b , Extended Data Fig.
6b and Supplementary Table 3 ). GSEA revealed that the transcriptional program of TOX–GFP + TCR TAG cells was significantly enriched for genes associated with chronic infections and tumours, with reduced expression of several genes encoding transcription factors ( Tcf7 , Lef1 and Id3 ), and increased expression of genes encoding inhibitory receptors ( Pdcd1 , Cd244 , Havcr2 and Entpd1 ) and transcription factors such as Ahr , Nfil3 , Prdm1 and Id2 (Fig. 3b, c and Extended Data Fig. 6c–g ). Despite expressing numerous exhaustion-associated genes, TOX–GFP + TCR TAG cells remained highly functional and proliferative (Extended Data Fig. 6d–f ). Fig. 3: Ectopic expression of TOX is sufficient to induce a global molecular program characteristic of T cell exhaustion. a , Experimental scheme (see also Methods ). b , MA plot of RNA-seq dataset. Significant DEGs are coloured in red. c , Heat map of RNA-seq expression (row-normalized log 2 (counts per million) for DEGs; false discovery rate (FDR) < 0.10) in TOX–GFP + and GFP + TCR TAG cells. Next, we examined how genetic deletion of Tox affected CD8 T cell differentiation during acute infection or in tumours. TCR TAG mice were crossed to Tox flox/flox mice 10 and mice expressing Cre-recombinase under the distal Lck promoter to generate TOX-knockout TCR TAG mice (Fig. 4a and Extended Data Fig. 7a ). TCR TAG cells from TOX-knockout TCR TAG mice developed normally and similarly to littermate control mice (Extended Data Fig. 7b, c ). Naive TOX-knockout and wild-type (Thy1.1 + ) TCR TAG cells were adoptively transferred into B6 (Thy1.2 + ) mice, which were immunized 1 day later with Lm TAG. TOX-knockout and wild-type TCR TAG cells expanded equally well in response to Lm TAG immunization (Fig. 4b ), became CD44 hi and CD62L lo , formed similar numbers of KLRG1 lo CD127 hi memory precursors and KLRG1 hi CD127 lo short-lived effector cells 8 (Extended Data Fig. 7d ), differentiated into memory T cells (3–4 weeks after immunization), and produced similar amounts of IFNγ and TNF after ex vivo stimulation with peptide (Fig. 4c and Extended Data Fig. 7e ). Thus, TOX is not required for the differentiation of naive T cells into effector and memory T cells during acute infection. Fig. 4: Phenotypic, functional, transcriptional and epigenetic analysis of TOX-deficient T cells. a , Experimental scheme. b , c , Percentage of wild-type (WT; black) and knockout (KO; red) Thy1.1 + effector ( b ) or memory ( c ) TCR TAG cells isolated from spleens 7 days ( b ) or 3 weeks ( c ) after Lm TAG infection, respectively. For b , n = 8 (WT); n = 7 (KO); for c , n = 5 (WT); n = 5 (KO); two independent experiments. d , Left, wild-type and knockout TCR TAG cells isolated from malignant liver lesions 5–8 days after transfer into AST×Cre-ER T2 (Thy1.1 + Thy1.2 + ) mice. Middle, ratio of the percentage of wild-type and knockout T cells. Right, TOX expression of liver-infiltrating wild-type and knockout TCR TAG cells; naive TCR TAG cells are shown in grey as a control. e , Expression profiles of liver-infiltrating wild-type and knockout TCR TAG cells 8–10 days after adoptive transfer. Naive TCR TAG cells are shown in grey. Data are representative of more than five independent experiments ( n = 4 (PD-1/LAG-3); n = 2 (2B4); n = 6 (CD39/CD38)). f , Left, intracellular IFNγ and TNF production of wild-type ( n = 4) and TOX-knockout ( n = 4) TCR TAG cells isolated 10 days after transfer from liver lesions of AST×Cre mice.
Right, specific lysis of TAG-peptide-pulsed EL4 cells in chromium release assays by wild-type ( n = 6) and knockout ( n = 6) TCR TAG cells isolated and flow-sorted from liver tumour lesions. Results from two independent experiments. Memory (Mem) TCR TAG cells are shown as a control. g , Percentage of Ki67-positive wild-type and knockout TCR TAG cells from malignant liver lesions 6–8 days after transfer into AST×Cre mice. Data are from three independent experiments. h , Wild-type and knockout donor TCR TAG cells 19 days after transfer in liver tumours (WT, n = 5; KO, n = 5). Data are representative of two independent experiments. In b – h , each symbol represents an individual mouse. i , MA plot of RNA-seq data. Significant DEGs are in red. j , Chromatin accessibility of wild-type and knockout TCR TAG cells. Each row represents one peak (differentially accessible between wild-type and knockout; FDR < 0.05) displayed over a 2-kb window centred on the peak summit; regions were clustered with k -means clustering. Genes associated with peaks within individual clusters are highlighted. k , ATAC-seq signal profiles across the Tox and Tcf7 loci. Peaks uniquely lost or gained in knockout TCR TAG cells are highlighted in red or blue, respectively. Data are mean ± s.e.m. * P ≤ 0.05, ** P ≤ 0.01, *** P ≤ 0.001, two-sided Student’s t -test. Next, we adoptively transferred naive TOX-knockout and wild-type TCR TAG cells into AST×Cre mice. TOX-knockout and wild-type TCR TAG cells equivalently infiltrated the liver (Fig. 4d ), proliferated and upregulated CD44, CD69 and CD25 (Fig. 4e and Extended Data Fig. 7f ). Notably, by 8–10 days after transfer, TOX-knockout TCR TAG cells did not upregulate inhibitory receptors including PD-1, LAG-3, CD38, CD39 and 2B4, in contrast to wild-type TCR TAG cells (Fig. 4e and Extended Data Fig. 7f ). Nevertheless, TOX-knockout and wild-type TCR TAG cells showed comparable reductions in the production of IFNγ and TNF, the expression of CD107, granzyme B (GZMB), and the specific lysis of TAG-peptide-pulsed EL4 target cells (Fig. 4f and Extended Data Fig. 7g–i ). Thus, despite their normal, ‘non-exhausted’ phenotype (Fig. 4e ) and proliferative capacity (Fig. 4g ), TOX-knockout TCR TAG cells remained dysfunctional, revealing that the regulation of inhibitory receptors is uncoupled from T cell effector function. Notably, by 2–3 weeks after transfer, very few TOX-knockout TCR TAG cells could be found in liver tumour lesions, whereas TOX wild-type TCR TAG cells persisted throughout the course of tumour progression (Fig. 4h and Extended Data Fig. 8a ). Indeed, TOX-knockout TCR TAG cells had increased levels of active caspases 3 and 7, increased annexin V staining, and an enrichment of apoptosis genes, although the expression of pro- and anti-apoptotic proteins such as BIM, BCL-2 and BCL-xL was similar between knockout and wild-type TCR TAG cells (Extended Data Fig. 8b–e ). We performed RNA-seq and ATAC-seq analyses of TOX-knockout and wild-type TCR TAG cells isolated from liver tumours of AST×Cre mice 8–9 days after adoptive transfer and identified 679 DEGs and 12,166 differentially accessible chromatin regions, respectively (Fig. 4i, j , Extended Data Fig. 9 and Supplementary Tables 1, 2 ). TOX-knockout TCR TAG cells had low expression of genes encoding transcription factors and inhibitory receptors including Nfil3 , Prdm1 , Cish , Pdcd1 , Entpd1 , Tigit , Havcr2 and Cd38 , and high expression of the transcription factors Tcf7 , Lef1 and Id3 .
GSEA of DEGs between TOX-knockout and wild-type TCR TAG cells revealed strong enrichment for genes and pathways associated with T cell exhaustion during chronic infection and tumorigenesis (Extended Data Fig. 9b ). Transcriptional differences were associated with corresponding changes in chromatin accessibility patterns of the respective genes (Fig. 4j and Extended Data Fig. 9c–g ). For example, the loci of Tox , Pdcd1 , Cd38 and Entpd1 were less accessible in TOX-knockout TCR TAG cells than in TOX wild-type TCR TAG cells, whereas the loci of Tcf7 , Cd28 , Fyn and Il7r were more accessible (Fig. 4k and Extended Data Fig. 9e ). More accessible regions in TOX-knockout TCR TAG cells showed significant enrichment for Gene Ontology (GO) terms associated with (i) cytokine and chemokine receptor activity; (ii) chromatin binding and bending, regulatory region DNA binding; and (iii) β-catenin binding (Extended Data Fig. 9f ). We also found enrichment of apoptosis pathways in TOX-knockout TCR TAG cells and increased expression of genes associated with apoptosis such as Fas , Tnf , Gas2 and Tnfrsf25 (which encodes DR3) (Extended Data Figs. 8e , 9e ). In summary, TOX is specifically required for T cell differentiation in settings of chronic antigen stimulation (such as tumours and chronic infection). A key finding of our study is that the regulation of inhibitory receptor expression is uncoupled from the loss of effector function in dysfunctional TST cells. Supporting this point are the notable phenotypic and transcriptional similarities between dysfunctional TOX-knockout TCR TAG TILs (Fig. 4 ) and functional TOX-negative, bystander TCR OT1 TILs (Fig. 2 and Extended Data Fig. 10a, b ). TOX-deficient TST cells failed to persist in tumours, and we hypothesize that the TOX-induced gene regulation of inhibitory receptors and other exhaustion-associated molecules serves as a physiological negative feedback mechanism to prevent overstimulation of antigen-specific T cells and activation-induced cell death in settings of chronic antigen stimulation such as chronic infection and cancer (Extended Data Fig. 10c ). Methods Mice AST (Albumin-floxStop-SV40 large T antigen (TAG)) mice were previously described 3 , 5 , 7 . TCR TAG transgenic mice (B6.Cg-Tg(TcraY1,TcrbY1)416Tev/J) 27 , Cre-ER T2 (B6.129-Gt(ROSA)26Sor tm1(cre/ERT2)Tyj /J), Alb-Cre (B6.Cg-Tg(Alb-cre)21Mgn/J), TCR OT1 (C57BL/6-Tg(TcraTcrb)1100Mjb/J), Ly5.1 (B6.SJL-Ptprc a Pepc b /BoyJ), B6.Cg-Tg(Lck-icre)3779Nik/J (dLck-Cre) and C57BL/6J Thy1.1 mice were purchased from The Jackson Laboratory. Tox flox/flox mice 10 were previously described, and obtained from M. Glickman, with permission from J. Kaye. Tox flox/flox mice were crossed to TCR TAG and dLck-Cre 28 mice to obtain TCR TAG Tox −/− (knockout) mice. TCR TRP2 mice were obtained from N. Restifo, with permission from A. Hurwitz. TCR TRP2 and TCR TAG mice were crossed to Thy1.1 mice to generate TCR TRP2 and TCR TAG Thy1.1 mice, respectively. TCR OT1 mice were crossed to Ly5.1 mice to generate TCR OT1 Ly5.1 mice. AST mice were crossed to Cre-ER T2 (Cre recombinase fused to tamoxifen-inducible oestrogen receptor) or Alb-Cre mice to obtain AST×Cre-ER T2 and AST×Alb-Cre mice, respectively. TCR PMEL and TCR P14 mice were purchased from The Jackson Laboratory. AST mice were also crossed to Thy1.1 mice to generate AST×Cre-ER T2 Thy1.1/Thy1.2 mice. All mice were bred and maintained in the animal facility at MSKCC.
Experiments were performed in compliance with the MSKCC Institutional Animal Care and Use Committee regulations. B16 tumour model Approximately 5 × 10 5 –1 × 10 6 B16 tumour cells were injected into C57BL/6J wild-type mice. Once tumours were established (1–2 weeks later), around 2 million naive TCR TRP2 or TCR PMEL (Thy1.1 + ) T cells were adoptively transferred and isolated from tumours at indicated time points. Tumour volumes did not exceed the permitted volumes specified by the MSKCC IACUC protocol. Adoptive transfer studies during acute Listeria infection and in AST×Cre-ER T2 tumour models Naive CD8 + splenocytes from TCR TAG Thy1.1 transgenic mice were adoptively transferred into AST×Alb-Cre mice, or AST×Cre-ER T2 mice and treated with 1 mg tamoxifen 1–2 days later. For TCR TAG and TCR OT1 co-transfer experiments, 3–4 × 10 4 TCR TAG Thy1.1 and TCR OT1 Ly5.1 CD8 + splenocytes were adoptively transferred into AST×Alb-Cre mice or B6 control mice; 1 day later, mice were infected with 5 × 10 6 colony-forming units (CFU) L. monocytogenes ( Lm ) TAG-I OVA (co-expressing TAG-I epitope and OVA epitope SIINFEKL). For the generation of effector and memory TCR TAG CD8 + T cells, 100,000 CD8 + splenocytes from TCR TAG Thy1.1 wild-type or knockout mice were adoptively transferred into congenic B6 mice; 1 day later, mice were infected with 5 × 10 6 CFU Lm TAG. Effector TCR TAG CD8 + T cells were isolated from the spleens of B6 host mice and analysed 5–7 days after Listeria infection; memory TCR TAG CD8 + T cells were isolated from spleens of B6 host mice and analysed at least 3 weeks after Listeria infection. For wild-type and knockout studies, CD8 + splenocytes from TCR TAG (wild-type) or TCR TAG TOX-knockout mice were adoptively transferred into AST×Cre-ER T2 (and 1–2 days later, mice were treated with 1 mg tamoxifen) or into AST×Alb-Cre mice. For these studies, we define knockout TCR TAG as TOX-deficient T cells. LCMV clone 13 and LCMV Armstrong infection model LCMV infection was done as previously described 29 . In brief, 10,000 TCR P14 cells were adoptively transferred intravenously into congenic 6–8-week-old C57BL/6 mice, and mice were infected 1 day later with LCMV Armstrong (2 × 10 5 plaque-forming units (PFU), intraperitoneally) or LCMV clone 13 (2 × 10 6 PFU, intravenously). In mice receiving LCMV clone 13, CD4 T cells were depleted with 200 μg anti-CD4 antibody (clone GK1.5) 2 days before T cell transfer 29 . Antibodies for flow cytometric analysis For mouse studies, the following antibodies were purchased: from BioLegend: 2B4 (m2B4), BCL-2 (BCL/10C4), CD101 (Moushi101), CD11c (N418), CD127 (A7R34), CD19 (6D5), CD25 (PC61.5), CD3 (145-2C11), CD38 (90), CD39 (Duha59), CD40 (3/23), CD44 (IM7), CD62L (MEL-14), CD69 (H1.2F3), CD70 (FR70), CD80 (16-10A1), CD86 (GL-1), CD90.1 (OX-7 and HIS51), CD90.2 (30-H12 and 53-2.1), CXCR5 (L138D7), Eomes (Dan11mag), GZMB (GB11), IFNγ (XMG1.2), IL-2 (JES6-5H4), KLRG1 (2F1), LAG-3 (C9B7W), MHC I-A/I-E (M5/114.15.2), PD-1 (RMP1-30), T-bet (4B10), TIM-3 (RMT3-23), TNF (MP6-XT22), and 7-amino-actinomycin (7-AAD); from BD Biosciences: annexin V, CD95 (Jo2), Ki67 (B56), Vb7 (TR310); BCL-xL (H-5; Santa Cruz Biotechnology); BIM (C34C5; Cell Signaling Technology), CD8 (53-6.7; eBioscience), CTLA-4 (UC10-410-11; Tonbo Biosciences), TCF-1 (C63D9; Cell Signaling Technology), TIGIT (GIGD7; eBioscience). 
For human studies, the following antibodies were purchased: CD39 (A1; BioLegend), CD45RA (HI100; BioLegend), CD45RO (UCHL1; BioLegend), CD8 (RPA-T8; BioLegend), LAG-3 (17B4; Enzo Life Sciences), PD-1 (EH12.1; BD Biosciences) and TIM-3 (F38-2E2; BioLegend). For flow cytometric detection and analysis of mouse and human TOX, anti-human/mouse TOX antibody clone REA473 was used (Miltenyi Biotec); antibody clone REA293 was used as the TOX isotype control (Miltenyi Biotec). Tamoxifen treatment Tamoxifen was purchased from Sigma-Aldrich. A tamoxifen stock solution (5 mg ml −1 ) was prepared by dissolving tamoxifen in 1 ml sterile corn oil at 50 °C for approximately 15 min and further diluting in corn oil. Tamoxifen (1 mg; 200 μl) was administered once intraperitoneally into AST×Cre-ER T2 mice. Flow cytometric analysis Flow cytometric analysis was performed using BD Fortessa FACS Cell Analyzers; cells were sorted using a BD FACS Aria (BD Biosciences) at the MSKCC Flow Core Facility. Flow data were analysed with FlowJo (Tree Star). Listeria infection The L. monocytogenes ( Lm ) Δ actA Δ inlB strain 30 expressing the TAG epitope I (206-SAINNYAQKL-215, SV40 large T antigen) together with the OVA SIINFEKL epitope was generated by Aduro Biotech as previously described 3 , 5 . The Lm strain was constructed using the previously described strategy 31 . Experimental vaccination stocks were prepared by growing bacteria to early stationary phase, washing in PBS, formulating at approximately 1 × 10 10 CFU ml −1 and storing at −80 °C. Mice were infected intraperitoneally with 5 × 10 6 CFU of Lm TAG. Cell isolation for subsequent analyses Spleens were mechanically disrupted with the back of a 3-ml syringe, filtered through a 70-μm strainer, and red blood cells were lysed with ammonium chloride potassium buffer. Cells were washed twice with cold RPMI 1640 medium supplemented with 2 mM glutamine, 100 U ml −1 penicillin/streptomycin and 5–10% FCS. Liver tumour and B16 tumour tissues were mechanically disrupted and dissociated with scissors (in 1–2 ml of cold complete RPMI). Dissociated tissue pieces were transferred into a 70-μm strainer (placed into a 60-mm dish with 1–2 ml of cold complete RPMI) and further dissociated with the back of a 3-ml syringe. The cell suspension was filtered through 70-μm strainers. Tumour homogenate was spun down at 400 g for 5 min at 4 °C. The pellet was resuspended in 15 ml of 3% FCS in HBSS, 500 μl (500 U) heparin and 8.5 ml Percoll, mixed by several inversions, and spun at 500 g for 10 min at 4 °C. The pellet was lysed with ammonium chloride potassium buffer and cells were further processed for downstream applications. Human samples PBMC and tumour samples were obtained from patients with cancer enrolled on a biospecimen procurement protocol approved by the MSKCC Institutional Review Board (IRB). Each patient signed an informed consent form and received a patient information form before participation. Human samples were analysed using an IRB-approved biospecimen utilization protocol. Breast cancer samples were selected from patients who had evidence of a dense mononuclear cell infiltrate on conventional haematoxylin and eosin (H&E) staining. For human ovarian tumour samples (Extended Data Fig. 3 ): tumour samples were obtained as per protocols approved by the IRB. All patients provided informed consent to an IRB-approved correlative research protocol before the collection of tissue (Memorial Sloan Kettering Cancer Center IRB 00144 and 06-107).
Human peripheral blood lymphocytes were obtained from the New York Blood Center or from patients where indicated. Human tumours were mechanically disrupted as described for solid mouse tumours, centrifuged on Percoll gradients and further assessed by flow cytometric analysis. FK506 studies Naive TCR TAG (Thy1.1 + ) cells were transferred into AST×Cre-ER T2 (Thy1.2 + ) mice, which were treated with tamoxifen 1 day later. On days 2–8, mice were treated with the calcineurin inhibitor FK506 (Prograf, 5 mg ml −1 ) (2.5 mg per kg per mouse intraperitoneally, once daily). Control mice were treated with PBS. All mice were analysed on day 10. TOX overexpression experiments Mouse Tox cDNA (accession number NM_145711.4) without the stop codon fused in-frame with the coding sequence of a monomeric form of green fluorescent protein (mGFP) was obtained from OriGene Technologies (MR208435L2). PCR cloning was used to amplify TOX–mGFP, which was then cloned into the pMIGR1 retroviral vector to generate pMIGR1 TOX–mGFP using the restriction enzymes EcoRI and PacI. pMIGR1 TOX–mGFP and control pMIGR1-GFP containing only mGFP were used for retroviral transduction of TCR TAG CD8 + T cells as follows: on day 1, the retroviral packaging cell line Plat-Eco (Cell Biolabs) was transfected using Effectene (Qiagen) following the manufacturer’s instructions. On day 2, splenocytes from TCR TAG mice were isolated and stimulated with soluble anti-CD3 and anti-CD28 antibodies. On day 3, activated splenocytes were resuspended in the viral supernatant containing 50 U ml −1 IL-2 and 5 µg ml −1 Polybrene (Santa-Cruz Biotechnology), transferred to 12-well plates, and spun at 1,000 g for 90 min. This process was repeated the next day. Transduced T cells were cultured for six additional days, replacing media and adding fresh IL-2 (100 U ml −1 ) every other day. T cells were collected and flow-sorted for high GFP expression for downstream transcriptome analysis. Intracellular cytokine and transcription factor staining Intracellular cytokine staining was performed using the Foxp3/Transcription Factor Staining Buffer Set (eBioscience) per manufacturer’s instructions. In brief, T cells were mixed with 2 × 10 6 congenically marked splenocytes and incubated with TAG epitope I peptide (0.5 μg ml −1 ) or OVA peptide (0.1 μg ml −1 ) for 4–5 h at 37 °C in the presence of GolgiPlug (brefeldin A). Where indicated, naive splenocytes or APCs were activated either in vivo (single intraperitoneal injection of 50 µg lipopolysaccharide (LPS; Sigma; L2630), 24 h before euthanization) 32 or in vitro (1-h pulse at 37 °C with 1 µg ml −1 LPS followed by extensive washing) 33 . Where indicated, cells were also stimulated with PMA (20 ng ml −1 ) and ionomycin (1 μg ml −1 ) for 4 h. After staining for cell-surface molecules, the cells were fixed, permeabilized and stained with antibodies to IFNγ, TNF and GZMB. Intracellular transcription factor staining was performed using the Foxp3/Transcription Factor Staining Buffer Set (eBioscience) as per the manufacturer’s instructions. Annexin V staining Apoptosis was assessed by flow cytometry using V450 Annexin V (BD Biosciences; 560506) and 7-AAD following the manufacturer’s instructions. Active caspase-3/7 analysis For the flow cytometric analysis of active caspase-3/7, cells were incubated with 500 nM CellEvent Caspase 3/7 Green Detection Reagent (Invitrogen; C10423) for 30 min at 37 °C. Chromium release assay Mouse EL4 lymphoma cells were loaded with 150 μCi of [ 51 Cr]sodium chromate for 2 h. 
TAG epitope I peptide (SAINNYAQKL) at a concentration of 1 μg ml −1 was added during the last 30 min of incubation. 51 Cr-labelled, TAG-I-pulsed EL4 cells were co-cultured with flow-sorted memory TCR TAG T cells, or with wild-type or TOX-knockout TCR TAG T cells isolated and flow-sorted from liver tumours of AST×Cre mice (6–8 days after transfer), at a 5:1 (effector:target) ratio for 16 h. Medium alone or 2% Triton-X was added to set spontaneous or total lysis, respectively. Specific killing was calculated using the following formula: percentage lysis = ((test counts per min − spontaneous counts per min)/(total counts per min − spontaneous counts per min)) × 100 (a worked example of this calculation appears below). Sample preparation for ATAC-seq and RNA-seq Replicate samples were isolated from spleens or livers and sorted as follows: (i) naive TCR TAG Thy1.1 + T cells were sorted by flow cytometry (CD8 + /CD44 lo ) from spleens of TCR TAG Thy1.1 transgenic mice. (ii) Wild-type and TOX-knockout TCR TAG T cells were sorted from livers of established AST×Cre mice 8–9 days after transfer. Cells were gated on CD8 + Thy1.1 + PD-1 hi/lo LAG-3 hi/lo CD39 hi/lo . A small aliquot of the sorted cell populations was used to confirm TOX expression (for wild-type) and TOX deficiency (for knockout). (iii) TCR OT1 and TCR TAG T cells were sorted from livers of established AST×Cre mice 20–21 days after transfer/Listeria infection. After flow-sorting, all samples for downstream ATAC-seq analysis were frozen in 10% DMSO in FCS and stored at −80 °C; samples for RNA-seq were directly sorted into TRIzol, frozen and stored at −80 °C. Transcriptome sequencing Samples for RNA-seq were sorted directly into TRIzol LS (Invitrogen). The volume was adjusted to 1 ml with PBS and samples were frozen and stored at −80 °C. RNA was extracted using the RNeasy mini kit (Qiagen) per the instructions provided by the manufacturer. After RiboGreen quantification and quality control on an Agilent BioAnalyzer, total RNA underwent amplification using the SMART-seq V4 (Clontech) ultralow input RNA kit for sequencing (12 cycles of amplification for 2–10 ng of total RNA). Subsequently, 10 ng of amplified cDNA was used to prepare Illumina HiSeq libraries with the Kapa DNA library preparation chemistry (Kapa Biosystems) using 8 cycles of PCR. Samples were barcoded and run on a HiSeq 4000 in a 50-bp/50-bp paired-end run, using the TruSeq SBS Kit v3 (Illumina). ATAC-seq Frozen aliquots of 25,000–50,000 cells were thawed, washed in cold PBS and lysed. Transposition was performed at 42 °C for 45 min. After purification of the DNA with the MinElute PCR purification kit (Qiagen), the material was amplified for five cycles. Additional PCR cycles were evaluated by quantitative PCR. The final product was cleaned with AMPure beads at a 1.5× ratio. Libraries were sequenced on a HiSeq 2500 1T in a 50-bp/50-bp paired-end run, using the TruSeq SBS Kit v.3 (Illumina). Bioinformatics methods The quality of the sequenced reads was assessed with FastQC and, for RNA-seq samples, QoRTs (ref. 34 and Babraham Bioinformatics v.0.11.7 (2010)). Unless stated otherwise, all plots involving high-throughput sequencing data were obtained with custom R scripts (see github.com/friedue/Scott2019 for the code; R: A Language and Environment for Statistical Computing (2014); and ref. 35 ). RNA-seq Sequencing reads were aligned with default parameters to the mouse reference genome (GRCm38) using STAR 36 .
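As referenced above, a worked example of the specific-lysis calculation from the chromium release assay; all counts are hypothetical and serve only to illustrate the arithmetic, not to reproduce data from this study.

```r
# Percentage specific lysis from a 51Cr-release assay, following the
# formula stated above. All counts (c.p.m.) below are hypothetical.
percent_lysis <- function(test_cpm, spontaneous_cpm, total_cpm) {
  100 * (test_cpm - spontaneous_cpm) / (total_cpm - spontaneous_cpm)
}

spontaneous <- 250   # medium alone
total       <- 4800  # 2% Triton-X (total lysis)
test        <- 2100  # effector:target co-culture

percent_lysis(test, spontaneous, total)  # ~40.7% specific lysis
```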
Gene expression estimates were obtained with featureCounts using composite gene models (union of the exons of all transcript isoforms per gene) from Gencode (version M17) 37 , 38 . DEGs DEGs were determined with DESeq2 (a minimal sketch of such a call appears below). The q -value cut-offs for the final lists of DEGs were as follows: (i) TOX–GFP versus GFP: 849 DEGs with q < 0.10; (ii) TAG versus OT1: 2,347 DEGs with q < 0.05; and (iii) wild-type versus knockout: 679 DEGs with q < 0.05. Pathway and GO term enrichment analyses Gene set enrichment analyses were done using GSEA 39 on RPKM values with gene-set permutation (the seed was set to 149). Heat maps Heat maps were created using log 2 (counts per million) of genes identified as differentially expressed by DESeq2 (adjusted P < 0.05 unless otherwise noted). Rows were centred and scaled using z -scores. ATAC-seq ATAC-seq data 5 were downloaded from GEO (accession GSE89308). These datasets were processed in the same manner as the newly generated datasets described in this study. Alignment and identification of open chromatin regions The data were processed following the recommendations of the ENCODE consortium (The ENCODE Consortium ATAC-seq Data Standards and Prototype Processing Pipeline). Reads were aligned to the mouse reference genome (version GRCm38) with BWA-backtrack 40 . Post-alignment filtering was done with samtools and Picard tools to remove unmapped reads, improperly paired reads, non-unique reads and duplicates (ref. 41 and Broad Institute Picard (2015)). To identify regions of open chromatin represented by enrichments of reads, peak calling was performed with MACS2 42 . For every replicate, the narrowPeak results of MACS2 were used after filtering for adjusted P < 0.01. Differentially accessible regions Regions where the chromatin accessibility changed between different conditions were identified with DiffBind ( DiffBind: Differential Binding Analysis of ChIP-Seq Peak Data (2011)) with the following options: minOverlap=4, bUseSummarizeOverlaps=T, minMembers=2, bFullLibrarySize=TRUE. A total of 12,166 differentially accessible peaks were identified between wild-type and knockout TCR TAG cells (see Fig. 4 ); 19,071 differentially accessible peaks were identified between TCR TAG and TCR OT1 cells (see Fig. 2 ). Coverage files Individual coverage files per replicate, normalized for differences in sequencing depths between the different samples, were generated with bamCoverage of the deepTools suite 42 using the following parameters: -bs 10 --normalizeUsing RPGC --effectiveGenomeSize 2150570000 --blackListFileName mm10.blacklist --ignoreForNormalization chrX chrY --ignoreDuplicates --minFragmentLength 40 -p 1. To create merged coverage files of replicates of the same condition, we used multiBigwigSummary to obtain the sequencing-depth-normalized coverage values for 10-bp bins along the entire genome; that is, for every condition, we obtained a table with the coverage values in every replicate within the same bin. Subsequently, we chose the mean value for every bin to represent the coverage in the resulting ‘merged’ file (see github.com/friedue/Scott2019 for the code that was used). Merged coverage files were used for display in IGV and for heat maps. Heat maps Heat maps displaying the sequencing-depth-normalized coverage from different ATAC-seq samples were generated with computeMatrix and plotHeatmap of the deepTools suite 43 .
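As referenced above, a minimal DESeq2 sketch of one pairwise DEG call (for example, wild-type versus knockout) under the stated q-value cut-off; the count matrix and sample labels are simulated placeholders rather than the study's data.

```r
# Minimal DESeq2 sketch for one pairwise comparison (for example,
# wild-type versus TOX-knockout TCR-TAG cells). The toy count matrix
# stands in for the featureCounts table; sample labels are placeholders.
library(DESeq2)

set.seed(1)
counts <- matrix(rnbinom(1000 * 4, mu = 100, size = 1), ncol = 4,
                 dimnames = list(paste0("gene", 1:1000),
                                 paste0("sample", 1:4)))
coldata <- data.frame(genotype = factor(c("WT", "WT", "KO", "KO")),
                      row.names = colnames(counts))

dds <- DESeqDataSetFromMatrix(countData = counts, colData = coldata,
                              design = ~ genotype)
dds <- DESeq(dds)
res <- results(dds, contrast = c("genotype", "WT", "KO"))

# q-value (adjusted P) cut-off as stated above (q < 0.05 for WT vs KO)
deg <- subset(as.data.frame(res), !is.na(padj) & padj < 0.05)
nrow(deg)
```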
Every row corresponds to a single region that was determined to be differentially accessible when comparing either TCR TAG (TAG) to TCR OT1 (OT1) T cells or wild-type to TOX-knockout TCR TAG T cells. The plots display the centre of each differentially accessible peak region ± 1 kb; the colour corresponds to the average normalized coverage across all replicates of the respective condition. Gene labels indicate genes that overlapped with a given differentially accessible region (anywhere along the gene). Combining RNA-seq and ATAC-seq data The relationship between RNA-seq and ATAC-seq was explored via ‘diamond’ plots for select genes detected as differentially expressed via DESeq2. Each gene was represented by a stack of diamond-shaped points coloured by the associated chromatin state of the gene (blue indicating closing and red indicating opening). The bottom-most point in each stack corresponds to the log 2 -transformed fold change in expression for that gene. NFAT1 ChIP–seq (publicly available) NFAT1 ChIP–seq samples were generated as previously described 21 from cells expressing endogenous NFAT1 (wild type) or lacking NFAT1 (knockout). Cells lacking endogenous NFAT1 were transduced with an empty GFP vector (mock) or with a vector containing a mutated form of NFAT (CA-RIT-RV). Each cell type was either left resting (none) or stimulated with PMA and ionomycin (P + I) for 1 h. We downloaded the sequencing results (fastq files generated by SOLiD sequencing technology) from the Sequence Read Archive (GEO series GSE64407); see Supplementary Table 4 for further details. SOLiD adapters had to be trimmed off, which we did with cutadapt 44 specifying --format=sra-fastq --minimum-length 15 --colorspace and the sample-specific adapter sequences via -g and -a (see for the sample-specific adapters). The trimmed reads were subsequently aligned to the mouse genome version GRCm38 with bowtie1 using the colorspace option 45 . Coverage tracks normalized for differences in sequencing depths were generated with bamCoverage of the deepTools suite (v.3.1.0) 42 using the following parameters: -bs 10 --normalizeUsing RPGC --effectiveGenomeSize 2150570000 --blackListFileName mm10.blacklist --ignoreForNormalization chrX chrY --ignoreDuplicates --minFragmentLength 40 -p 1. Blacklisted regions were downloaded from . Regions of statistically significant read enrichments in the ChIP samples compared with the corresponding input samples (peaks) were identified with MACS2 (2.1.1.20160309) 42 using ChIP and corresponding input files and the following parameters: -g 1.87e9 -p 0.01 --keep-dup all. For the final peak files, the narrowPeak outputs of MACS2 were used, keeping only peaks with adjusted P values below 0.01. Droplet digital PCR TOX–GFP-overexpressing and GFP-overexpressing TCR TAG T cells were sorted directly into TRIzol (Invitrogen). RNA was extracted with chloroform. Isopropanol and linear acrylamide were added, and the RNA was precipitated with 75% ethanol. Samples were resuspended in RNase-free water. Quantity was assessed by PicoGreen (ThermoFisher) and quality by BioAnalyzer (Agilent). Droplet generation was performed on a QX200 ddPCR system (Bio-Rad; 864001) using cDNA generated from 100 pg total RNA with the One-Step RT-ddPCR Advanced Kit for Probes (Bio-Rad; 1864021) according to the manufacturer’s protocol, with reverse transcription at 42 °C and annealing/extension at 55 °C. Each sample was evaluated in technical duplicates.
Reactions were partitioned into a median of approximately 30,000 droplets per well. Plates were read and analysed with the QuantaSoft software to assess the number of droplets positive for the gene of interest, the reference gene ( Gapdh ; dMmuCPE5195283), both, or neither. PrimePCR ddPCR Expression Probe Assays were ordered through Bio-Rad for the following genes of interest: Lag3 (dMmuCPE5122546), Id2 (dMmuCPE5094018), Prdm1 (dMmuCPE5113738), Prf1 (dMmuCPE5112024), and Gzmb (dMmuCPE5093986). Data reporting No statistical methods were used to predetermine sample size. The investigators were not blinded to allocation during experiments and outcome assessment, and experiments were not randomized. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability All data generated and supporting the findings of this study are available within the paper. The RNA-seq and ATAC-seq data have been deposited in the Gene Expression Omnibus (GEO) under accession number GSE126974 . Source Data are provided with the online version of the paper. Additional information and materials will be made available upon request.
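To make the droplet readout described above concrete, the following sketch applies the standard Poisson correction used for ddPCR absolute quantification; the droplet counts and the nominal droplet volume (approximately 0.85 nl for the QX200) are illustrative assumptions, not values reported in this study.

```r
# Poisson correction for ddPCR droplet counts (standard approach for
# QX200-style assays; all numbers below are hypothetical).
ddpcr_conc <- function(n_positive, n_total, droplet_vol_ul = 0.00085) {
  lambda <- -log((n_total - n_positive) / n_total)  # mean copies per droplet
  lambda / droplet_vol_ul                           # copies per microlitre
}

target    <- ddpcr_conc(n_positive = 4200,  n_total = 30000)  # gene of interest
reference <- ddpcr_conc(n_positive = 12000, n_total = 30000)  # Gapdh

target / reference  # target expression normalized to the reference gene
```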
Immune checkpoint therapy has revolutionized cancer treatment, leading to long-term remission for patients with advanced cancer. However, most cancer patients either do not respond or have only short-term responses to checkpoint therapy, which targets inhibitory receptors on T cells. A study published June 17 in Nature offers clues as to why blocking inhibitory receptors on tumor-infiltrating T cells may not always work. Mary Philip, MD, Ph.D., assistant professor of Medicine in the Division of Hematology and Oncology and a senior author of the study, together with Andrea Schietinger, Ph.D., of the Sloan Kettering Institute, found that the thymocyte selection-associated high-mobility group box protein, TOX, is expressed at high levels in dysfunctional tumor-infiltrating T cells in mice and humans. The investigators found that TOX controls the high expression of inhibitory receptors such as PD1 on dysfunctional tumor-infiltrating T cells. These inhibitory receptors act like brakes on T cells. The team deleted TOX from tumor-infiltrating T cells to see if that would restore their function. To their surprise, though the tumor-infiltrating T cells no longer expressed PD1 and other inhibitory receptors, the T cells were still dysfunctional and unable to eliminate cancers. Even more surprising, the T cells without TOX were unable to survive long term. The study demonstrates that control of the killing machinery in T cells is uncoupled from regulation of inhibitory receptors. "Taking off the brakes is not enough to restore the killing capacity of anti-tumor cells. In fact, T cells need the brakes to avoid getting over-activated and dying," Philip said. The study follows a previous investigation published in Nature on May 25, 2018, by Philip and colleagues on T cell dysfunction in liver cancer using mouse models. Philip was the lead author of that study. The overarching goal of Philip's research group is to decipher the mechanisms regulating T cell dysfunction in cancers and to design new strategies to override these mechanisms to improve cancer immunotherapy.
10.1038/s41586-019-1324-y
Biology
How chromosomes change their shape during cell differentiation
Hisashi Miura et al. Single-cell DNA replication profiling identifies spatiotemporal developmental dynamics of chromosome organization, Nature Genetics (2019). DOI: 10.1038/s41588-019-0474-z Journal information: Nature Genetics
http://dx.doi.org/10.1038/s41588-019-0474-z
https://phys.org/news/2019-10-chromosomes-cell-differentiation.html
Abstract In mammalian cells, chromosomes are partitioned into megabase-sized topologically associating domains (TADs). TADs can be in either A (active) or B (inactive) subnuclear compartments, which exhibit early and late replication timing (RT), respectively. Here, we show that A/B compartments change coordinately with RT changes genome wide during mouse embryonic stem cell (mESC) differentiation. While A to B compartment changes and early to late RT changes were temporally inseparable, B to A changes clearly preceded late to early RT changes and transcriptional activation. Compartments changed primarily by boundary shifting, altering the compartmentalization of TADs facing the A/B compartment interface, which was conserved during reprogramming and confirmed in individual cells by single-cell Repli-seq. Differentiating mESCs altered single-cell Repli-seq profiles gradually but uniformly, transiently resembling RT profiles of epiblast-derived stem cells (EpiSCs), suggesting that A/B compartments might also change gradually but uniformly toward a primed pluripotent state. These results provide insights into how megabase-scale chromosome organization changes in individual cells during differentiation. Main DNA replication has served as an excellent forum in which to investigate the principles of megabase (Mb)-scale organization of the genome 1 , 2 , 3 . Early- and late-replicating chromosomal bands correspond to Giemsa R- and G-bands, respectively 1 . 5-Bromodeoxyuridine (BrdU) pulse-labeling of mammalian cells has identified ~1 Mb units of DNA replication called replication foci 1 . Each focus completes replication within ~1 h and remains stable as a unit after multiple cell cycles 3 . RT-profiling technologies have mapped Mb-sized replication domains genome wide 4 , 5 , 6 . If we define replication domains as stretches of DNA that show uniform RT separated by timing transition regions (TTRs) 4 , replication-domain boundaries of a given cell type constitute approximately half of all potential boundaries and replication-domain organization changes dynamically during differentiation 4 , 5 , 6 , 7 . Aside from various chromosomal banding studies, evidence for Mb-scale structural units of chromosomes other than replication domains, replication foci and lamina-associated domains (LADs) was virtually nonexistent 1 , 8 . This situation changed with the advent of Hi-C, a genome-wide chromosome conformation capture (3C) technology 9 . Hi-C has shown that mammalian chromosomes can be subdivided into stable Mb-sized self-associating units called TADs 10 , 11 , which are separated into A and B subnuclear compartments 9 . Unlike TADs, A/B compartments change dynamically during differentiation 11 , 12 but their boundaries coincide with a subset of TAD boundaries 7 , 12 . Interestingly, A and B compartments align remarkably well with early- and late-replicating domains, respectively 6 , suggesting that Hi-C and RT analyses probe similar aspects of genome organization at the Mb scale. However, we still do not know how TADs, A/B compartments and replication domains relate to each other 3 . How and when do RT and A/B compartments change during differentiation? Are the changes coordinated to maintain the tight relationship observed in cultured cells? To address these questions and gain insights into the regulatory principles of three-dimensional (3D) genome organization, we performed Hi-C and RT analyses at one-day intervals during mESC differentiation and analyzed their relationship extensively.
Moreover, we took advantage of our latest single-cell RT-profiling technology, scRepli-seq 13 , to infer how they change at the single-cell level. Results A mESC neural differentiation system in a defined medium We modified a differentiation protocol developed by Hayashi et al. 14 and combined it with the SFEBq neural differentiation protocol 15 . In this protocol (Fig. 1a ), naïve mESCs grown in MEK and GSK3 inhibitors (2i) and leukemia inhibitory factor (LIF) (see Methods ) are first differentiated to epiblast-like cells (EpiLCs). On day 2, the monolayer EpiLCs are detached, aggregated as embryoid bodies (EBs) in Lipidure-coated 96-well plates and cultured until day 7. The only difference from the SFEBq method 15 is the use of EpiLCs instead of mESCs as the starting material for EB formation. Fig. 1: An mESC neural differentiation system in a defined medium. a , Neural differentiation of mESCs via EpiLCs. EB photographs were taken after transfer from 96-well plates to single plates. b , Immunofluorescence staining of representative cell colonies (days 0 and 2) and EB sections (days 3–7) during mESC differentiation with antibodies against Oct4, Nanog, Sox1 and Eomes (two independent experiments showed similar results). Nuclei were counterstained with DAPI. c , Comparison of genome-wide RT profiles from differentiation intermediates derived from CBMS1 mESCs (this study) and D3 mESCs, as well as ES-Gsc gfp Sox17 huCD25 mESCs (mesoderm and endoderm cells) 5 by hierarchical clustering. d , Comparison of fold changes in gene expression values between the two mESC differentiation protocols described in c , by RNA-seq (CBMS1) or expression microarrays (D3). Pearson’s R values are shown. See also Supplementary Fig. 3b . e , Percentage of outlier cells as assayed by the scRepli-seq technology. See also Fig. 4e . N , number of cells analyzed. f , A comparison of RT and A/B compartments (Hi-C PC1) during differentiation of CBMS1 mESCs at one-day intervals. The ∆RT and ∆PC1 plots present RT and Hi-C PC1 differentials, respectively, from day 0 to day 7. Regions 1 and 2 are representative early to late/A to B and late to early/B to A switching regions, respectively. By RNA-seq, naïve CBMS1 mESCs 16 correctly adopted an EpiLC fate on day 2, sharply downregulating early inner cell mass markers ( Prdm14 , Zfp42 , Tbx3 , Tcl1 , Esrrb , Nanog , Klf2 , Klf4 , Klf5 ), upregulating epiblast markers ( Fgf5 , Wnt3 , Dnmt3b ) and either maintaining ( Pou5f1/Oct4 , Fgf4 ) or downregulating ( Sox2 ) pluripotency markers, as expected 14 (Supplementary Fig. 1a and Supplementary Table 1 ). By day 7, ectodermal and pluripotency markers were up- and downregulated, respectively (Supplementary Fig. 1a,b ), while many mesoderm and endoderm markers were expressed at low levels (Supplementary Fig. 1b ). During the day 4–5 transition, Pou5f1/Oct4 showed sharp downregulation, while ectodermal markers showed sharp upregulation (Supplementary Fig. 1c ). As expected, activation of the epiblast marker Fgf5 was transient (Supplementary Fig. 1c ). Thus, differentiation seems to be relatively synchronous. Next, we performed immunofluorescence. While Oct4 expression was strong during days 0–2, Nanog expression was strong on day 0 but decreased on day 2. We observed increased Sox1 expression after day 5, which became more uniform on days 6–7 (Fig. 1b and Supplementary Fig. 2 ). Eomes-expressing cells transiently appeared on day 4 (Fig. 1b , Supplementary Fig.
2 and Supplementary Table 2 ), suggesting transient acquisition of late epiblast fate. After day 3, Oct4 and Nanog expression appeared weak, partly due to sample preparation differences (days 0–2, on chamber slides; days 3–7, EB sections), but did not disappear; in fact, their weak expression was maintained in small patches of Sox1-negative cells on days 6–7 (Supplementary Fig. 2a ). Overall, however, these results are consistent with cells responding to differentiation cues in a timely and relatively uniform manner. Genome-wide RT profiles can be used to profile cell types 5 , 17 , 18 (Supplementary Fig. 3a ). RT data obtained during CBMS1 mESC differentiation were highly reproducible (Supplementary Table 3 ) and were compared with those of D3 mESCs, their neural differentiation intermediates (EBM3/6/9, EBs grown in MEDII medium for 3, 6 or 9 d) and mESC-derived mesoderm and endoderm cells (Fig. 1c ) 5 . CBMS1 mESCs and EpiLCs were most similar to D3 mESCs and early epiblast cells (EBM3), respectively (Fig. 1c ). Day 7 cells resembled neural precursor cells (NPCs, EBM9) and definitive ectoderm cells (EBM6), but were clearly distinct from mesoderm and endoderm cells (Fig. 1c ). This was corroborated by expression profiling (Fig. 1d and Supplementary Fig. 3b ). Lastly, we utilized our single-cell (sc)Repli-seq technology 13 to assess differentiation homogeneity. scRepli-seq measures copy number differences between replicated and unreplicated DNA and generates single-cell RT profiles. Cell-to-cell RT heterogeneity is confined and profiles are cell-type specific 13 , meaning that scRepli-seq can identify outlier cells (details discussed in ’RT changes gradually but uniformly in differentiating cells’). We found that outliers were <13% during differentiation (Fig. 1e ), suggesting that RT changes synchronously and homogeneously during differentiation. Taken together, we concluded that a reliable neurectoderm differentiation protocol was established that depends on defined culture conditions. Changes in RT and A/B compartments are coordinated We performed Hi-C and RT analysis at one-day intervals during CBMS1 mESC differentiation (days 0 and 2–7) (Fig. 1f ). For Hi-C, we followed the in situ Hi-C protocol 19 and A/B compartments were defined as the first principal component (PC1) obtained by principal component analysis (PCA) of the Hi-C map 9 (Supplementary Fig. 3c,d ). RT and A/B compartments were highly correlated (Fig. 1f : day 0, Pearson’s R = 0.85; day 7, R = 0.91; chr18) and reproducible (Supplementary Tables 3 and 4 ), as expected 6 . Moreover, their changes were coordinated during differentiation (Fig. 1f ). We analyzed this relationship for all 200-kilobase (kb) bins genome wide and confirmed the strong correlation between the differentials in RT and A/B compartments during mESC differentiation, and when mESCs were compared with mouse embryonic fibroblasts (MEFs) (Fig. 2a ), which suggests that the relationship is not lineage specific. We compared human ESCs (hESCs) and differentiated cells using publicly available RT 20 and Hi-C data 12 and observed a similar relationship (Supplementary Fig. 4a ). Fig. 2: Coordinated developmental changes in RT, A/B compartments, subnuclear positioning and nuclear lamina binding. a , Genome-wide comparison of ∆RT and ∆Hi-C PC1 in 200-kb windows. Pearson’s R values are based on top 10% early to late (E to L) ( n = 2,358) and late to early (L to E) ( n = 2,350) ∆RT values of bins. d0, day 0 mESCs. 
b , Compartment profiles of constitutively early (E to E) and late (L to L) 200-kb bins. See Supplementary Fig. 4b for early/late definition. c , Comparison of ∆radial positioning 5 and ∆Hi-C PC1 during differentiation. n = 8 loci (∆median), based on two independent DNA FISH ( n = 71–223) 5 . R , Pearson’s R . d , Summary of c and Supplementary Fig. 5a . e , Comparison of RT, Hi-C PC1 and NL (lamin B1) binding of day 0 and day 7 cells (RT, PC1) or mESC-derived NPCs (NL binding) 21 . NL binding data ( y axis) are reversed for easy comparison. f , Pearson’s R values for genome-wide comparison ( n = 11,795 bins) of Hi-C PC1 to RT and NL binding. g , Pearson’s R values for comparison of the top 10% A to B and B to A ∆PC1 data with ∆RT and ∆NL ( n = 2,338). h , A k -means clustering ( k = 30) of genome-wide RT profiles (200-kb bins). Pixels show average daily RT and PC1 values of each RT cluster. Thirty clusters were classified into six groups: two ‘constitutive’, two RT or compartment switching and two transiently RT-switching groups. i , j , For early to late/A to B-switching clusters 8–11 ( i ) and late to early/B to A clusters 18–21 ( j ), centroid RT (blue), average PC1 (red) and average difference from day 0 (orange; by RNA-seq) are plotted. Dashed lines, border of each value (red, PC1 = 0; blue, RT = 0). Furthermore, >90% of regions that remained early replicating stayed in the A compartment during differentiation, while A/B compartment switches were rare (Fig. 2b ). Likewise, >80% of constitutively late regions stayed in the B compartment (Fig. 2b ). This was confirmed by comparison using four-state hidden Markov models (HMM) (Supplementary Fig. 4b–e and Supplementary Table 4 ). Taken together, early to late and late to early RT changes correlated well with A to B and B to A compartment changes, respectively, and the relationship is evolutionarily conserved. Moreover, A/B compartment changes were rare in regions with constant RT. Compartment changes are linked to subnuclear repositioning RT changes are accompanied by changes in radial nuclear positioning 5 (Supplementary Fig. 5a ) and possibly A/B compartments. To test this, we compared RT, Hi-C and DNA fluorescence in situ hybridization (DNA FISH) data of eight genomic loci during mESC differentiation to neurectoderm: six loci that changed RT or radial positioning and two loci (Oct4, Nanog) that did not change 5 . As anticipated, A/B compartment changes correlated remarkably well with changes in radial positioning (Fig. 2c ). Thus, changes in RT, A/B compartments and subnuclear positioning are tightly coupled (Fig. 2d ). Radial movement suggested the involvement of the nuclear lamina (NL) 21 . We compared our Hi-C PC1 with NL association data during neural differentiation of mESCs 21 and found a good correlation, similar to that with RT (Fig. 2e,f ). Moreover, changes in A/B compartments correlated with changes in lamin B1 binding (Fig. 2g ). Thus, repositioning toward or away from the nuclear periphery correlates with increased or decreased NL association, respectively, providing an explanation for subnuclear repositioning associated with compartment changes. B to A compartment changes precede late to early RT changes To determine the temporal order of changes, we performed a k -means clustering analysis of RT profiles and compared this with compartment profiles. For each cluster, we calculated the mean RT values per day and defined clusters that traversed zero and showed >0.5 changes as RT-switching clusters.
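A minimal sketch of this cluster classification, assuming a simulated bins-by-days RT matrix and interpreting '>0.5 changes' as a centroid range greater than 0.5 RT units (our reading of the rule, for illustration only):

```r
# Sketch of the k-means classification described above: cluster per-bin
# RT profiles (rows = 200-kb bins, columns = days) into 30 clusters and
# flag clusters whose daily mean RT traverses zero and changes by >0.5.
# The `rt` matrix is simulated here; real input is the BrdU-IP RT data.
set.seed(1)
rt <- matrix(rnorm(7 * 10000, sd = 0.8), ncol = 7,
             dimnames = list(NULL, paste0("day", c(0, 2:7))))

km <- kmeans(rt, centers = 30, iter.max = 100, nstart = 5)

is_switching <- apply(km$centers, 1, function(centroid) {
  (min(centroid) < 0 && max(centroid) > 0) &&      # traverses zero
    ((max(centroid) - min(centroid)) > 0.5)        # >0.5 change
})
table(is_switching)
```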
As shown in Fig. 2h , 6% of the genome comprised early to late-switching clusters that showed coordinated A to B switches, while 7% comprised late to early-switching clusters that showed coordinated B to A switches. Meanwhile, 3% became transiently late replicating (transient L) and another 3% became transiently early replicating (transient E) without compartment changes (Fig. 2h and Supplementary Table 5 ). Thus, changes in RT and compartments are separable in certain contexts. Of the remaining 81% of the genome, 34% comprised constitutively early clusters within A compartments, while 47% comprised constitutively late clusters within B compartments (Fig. 2h ). Representative k -means clusters (Fig. 2i ) exhibited early to late changes coincident with A to B changes (cluster no. 8–11). In contrast, late to early changes either coincided with (cluster no. 18 and 20) or initiated 1–2 d later than B to A changes (cluster no. 19 and 21) (Fig. 2j ), suggesting that RT changes reflect compartment changes. Next, we assessed the relationship between early to late changes and gene expression (Fig. 2i ). Decrease in expression of early to late clusters precedes (cluster no. 9), roughly coincides with (clusters no. 10 and 11) or follows (cluster no. 8) compartment changes, indicating no specific temporal order. As for late to early clusters (Fig. 2j ), increase in expression follows B to A compartment changes (cluster no. 19–21) and coincides with late to early changes (cluster no. 19, 21). Coincidence of expression increase and late to early changes is observed in all three transient L clusters when they make late to early changes (cluster no. 12–14), and in the transient E cluster (cluster no. 15), suggesting that late to early changes generally coincide with transcriptional upregulation (Supplementary Fig. 5b ). Consistently, RT showed higher correlation with expression than with A/B compartments (Supplementary Fig. 5c ). Gene ontology analysis showed that early to late/A to B clusters were enriched for genes involved in chromatin regulation (cluster no. 8), centrosomes (cluster no. 9), peptidase activity (cluster no. 10) and metabolic processes (cluster no. 11) (Supplementary Fig. 5d ). Histone H1 clusters (no. 8) becoming late replicating is consistent with a previous report 17 . In contrast, the late to early/B to A clusters were enriched for genes involved in neural development (cluster no. 20), development and transcriptional regulation (cluster no. 18 and 19) and cell mobility (cluster no. 21), which is consistent with roles during neural differentiation (Supplementary Fig. 5d ). B to A changes preceding late to early changes and transcriptional upregulation are consistent with compartment switching creating a transcriptionally competent state for activation of neural genes. Day 5 A/B compartment organization resembles that of EpiSCs To investigate when A/B compartments change during differentiation, we performed a hierarchical clustering of compartment profiles (a minimal sketch of such a clustering appears below). MEFs served as the outgroup (Fig. 3a and Supplementary Table 4 ). Naïve CBMS1 mESCs grown in 2i and LIF clustered with mESCs grown in fetal bovine serum (FBS) and LIF 11 , 22 , 23 (Fig. 3a ), thus confirming recent reports 13 , 24 . Previous studies reported widespread differences in enhancer usage and expression between ESCs and EpiLCs 14 , 25 . However, day 2 EpiLCs clustered with, and were indistinguishable from, mESCs (Fig. 3a ), indicating that simple acquisition of an early epiblast fate is insufficient for A/B compartment changes.
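As referenced above, a minimal sketch of hierarchical clustering of compartment profiles; the PC1 matrix is simulated, and the correlation-based, average-linkage clustering is our assumption for illustration, not necessarily the exact procedure used in the study.

```r
# Sketch of hierarchical clustering of A/B compartment (Hi-C PC1)
# profiles across samples. `pc1` is simulated (200-kb bins x samples);
# the 1 - Pearson correlation distance is an illustrative assumption.
set.seed(1)
samples <- c(paste0("day", c(0, 2:7)), "EpiSC", "mESC_FBS", "MEF")
pc1 <- matrix(rnorm(11795 * length(samples)), ncol = length(samples),
              dimnames = list(NULL, samples))

d  <- as.dist(1 - cor(pc1, method = "pearson"))
hc <- hclust(d, method = "average")
plot(hc, main = "Clustering of A/B compartment profiles")
```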
Consistently, representative genes upregulated (for example, Dnmt3a/b , Fgf5 , Fgf15 , Oct6/Pou3f1 , Otx2 , Wnt3 , Wnt8a ) or downregulated (for example, Esrrb , Klf2 , Klf4 , Prdm14 , Tbx3 , Tcl1 , Zfp42 ) upon differentiation to EpiLCs mostly reside in the A compartment, with a few exceptions such as Fgf5 (B compartment; not shown). Together, these observations suggest that local enhancer reorganization in EpiLCs occurs mostly within the A compartment and does not readily translate to A/B compartment reorganization. However, compartments changed gradually after day 2, and days 5–7 formed a cluster that was distinct from days 0–4 (Fig. 3a ). Intriguingly, EpiSCs clustered with day 5–7 cells, suggesting that differentiating cells transiently acquired a compartment organization that resembled that of primed EpiSCs 26 . Fig. 3: Genome organization of day 5 cells during differentiation resembles that of the EpiSCs. a , A hierarchical clustering tree of A/B compartment (Hi-C PC1) profiles showing the relationship of CBMS1 mESCs (grown in 2i and LIF) and their differentiation intermediates (days 0–7) to mESCs from other laboratories, EpiSCs and MEFs. The mESCs, 46C (ref. 23 ), F123 (ref. 22 ) and J1 (ref. 11 ) were grown in FBS and LIF. MEFs 51 serve as an outgroup, while day 2 cells correspond to EpiLCs. Note the distinction between days 0–4 and days 5–7, with the former grouped together with ESCs and EpiLCs, while the latter resembled EpiSCs. b – d , PCA plots showing the relationship between differentiation intermediates of CBMS1 mESCs, EpiSCs and MEFs (total sample size n = 10) based on A/B compartment organization ( b ), RT ( c ) and RNA-seq ( d ). e , Heat maps showing average contact enrichment between pairs of 200-kb bins sorted by their Hi-C PC1 values, from the lowest (the most extreme B) to the highest (the most extreme A). The heat maps show a transition from higher interaction within A compartments (A–A interaction) in day 0 mESCs to higher interaction within B compartments (B–B interaction) in day 7 differentiated cells. Heat maps of EpiSCs and MEFs are also shown. f , Ratios of A–A/B–B interaction during CBMS1 mESC differentiation. E14 mESCs, EpiSCs and MEFs are also shown. EpiSCs resembled day 5 cells based on the ratio of A–A/B–B interaction ( f ). g , Differentials of ratios of A–A/B–B interaction between each day of differentiation. Note the large differential between days 4 and 5. E14 mESC Hi-C data were from Krijger et al. 52 . The PCA plot of A/B compartment profiles showed large-scale changes during days 4–6, with EpiSCs closest to day 5 (Fig. 3b ). A similar PCA plot of RT profiles also identified large-scale changes during days 4–6, with EpiSCs in between days 5 and 6 (Fig. 3c ), consistent with compartment changes preceding RT changes (Fig. 2j ). Compartment profiles of ESCs and EpiLCs were similar (Fig. 3b ) but their RT profiles were distinct (Fig. 3c ), perhaps due to the RT changes of ‘transient early’ and ‘transient late’ clusters that occurred without compartment changes (Fig. 2h ). On the PCA plot of RNA-seq profiles, EpiSCs are located between days 4 and 5 but are slightly off the differentiation trajectory (Fig. 3d ). Thus, our protocol does not bring mESCs exactly through an EpiSC state. However, cells transiently become EpiSC-like regarding RT and compartment profiles. Consistently, chromatin interaction within B compartments (B–B interaction), which strengthens upon mESC differentiation 27 , became closest to that of EpiSCs on day 5 (Fig.
3e,f ). Moreover, days 4–5 marked the transition from an A–A interaction-dominant to a B–B interaction-dominant phase, which resembled EpiSCs (Fig. 3f,g ). RT changes gradually but uniformly in differentiating cells Despite active investigation 28 , 29 , 30 , developmental regulation of 3D genome organization in single cells is still largely unknown. We took advantage of scRepli-seq 13 and analyzed single cells throughout the S phase (Fig. 4a, b ). Owing to limited cell-to-cell heterogeneity 13 , dimensionality reduction of scRepli-seq data by force-directed layouts of k -nearest-neighbor graphs using SPRING 31 highlighted a cell-type-specific RT trajectory (LOESS regression curve) of how cells move through the S phase (Fig. 4c ). Start and end points of trajectories corresponded to G1- and G2-phase cells, respectively, and were common to all cell types, as expected (Fig. 4c,d ). Remarkably, only 1.3% of mESCs and 7.9% of EpiSCs were outliers off the trajectories, underscoring their uniform scRepli-seq profiles. It follows, then, that scRepli-seq should allow us to monitor the degree of RT homogeneity/heterogeneity during differentiation. Fig. 4: RT changes gradually but uniformly in differentiating cells, as assayed by scRepli-seq. a , A cell-cycle profile of cells stained with propidium iodide during FACS analysis is shown, along with the various gates used to collect cells throughout the S phase shown in b . b , Binarized scRepli-seq profiles of 153 mESCs and 165 day-7 cells throughout the S phase. Population BrdU-IP RT and Hi-C PC1 data are shown for comparison. c , Visualization of individual scRepli-seq profiles by a force-directed layout algorithm using SPRING 31 . Dots represent single cells and colors represent days during differentiation or EpiSCs. The plot indicates that the y axis roughly represents the degree of differentiation. d , A plot identical to c but with different color coding. Here, colors represent percentage replication scores, which indicates that the x axis roughly represents cell-cycle time. Total number of cells in c and d is 884 (G1- and S-phase cells combined). e , Cell-type-specific RT trajectories of scRepli-seq profiles during differentiation and in EpiSCs. The dotted lines (days 0–7) and the thick line (EpiSCs) represent LOESS regression curves of each cell type. The earliest (<5% replication score) and latest (>95% replication score) S-phase cells are depicted as gray dots, while outlier cells are depicted as white dots. The number of outlier cells per total number of cells analyzed is shown within each graph. Here, cells with <5% and >95% replication scores are excluded from the total number of cells. See also Supplementary Table 4 . Figure 4e shows the scRepli-seq RT trajectories during CBMS1 mESC differentiation, which shifted unidirectionally along the y axis each day, resembled EpiSCs on day 5, then shifted away from EpiSCs on days 6–7. Importantly, outliers were rare throughout differentiation (Figs. 1e and 4e ). Thus, RT changed gradually but uniformly within a differentiating population, with cells homogeneously acquiring scRepli-seq profiles resembling EpiSCs on day 5. scRepli-seq can predict A/B compartments in single cells 13 , 32 . Comparison of day 0 and day 7 mid-S scRepli-seq profiles (Supplementary Table 5 ) with Hi-C PC1 confirmed their close relationship (Supplementary Fig. 6 and Supplementary Note 2 ).
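Percentage replication scores, used throughout Fig. 4 and the trajectory analysis above, have a simple definition: the fraction of bins called replicated in a given cell. A minimal sketch on a simulated binarized matrix (the real calls come from scRepli-seq copy-number profiles):

```r
# Percentage replication score: the fraction of bins called replicated
# (1 = replicated, 0 = unreplicated) in a single cell. The binarized
# matrix is simulated here (200 cells x 5,000 genomic bins).
set.seed(1)
cell_fraction <- runif(200, 0.05, 0.95)      # per-cell S-phase progression
binarized <- matrix(rbinom(200 * 5000, 1, cell_fraction),
                    nrow = 200)              # prob recycles per row (cell)

pct_replication <- rowMeans(binarized) * 100

# Cells at the extremes of S phase carry little RT information and were
# excluded (<5% or >95% replication scores) before outlier assessment.
informative <- pct_replication > 5 & pct_replication < 95
summary(pct_replication[informative])
```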
Interestingly, phi coefficient comparison showed that scRepli-seq profiles and Hi-C PC1 correlate better when the cell types are matched (Supplementary Fig. 6b and Supplementary Note 2 ). Day 0 and day 7 scRepli-seq profiles are distinguished by developmentally regulated RT regions that showed distinct patterns in ensemble RT and Hi-C PC1 data (Supplementary Fig. 6c–e and Supplementary Note 2 ). Taken together, scRepli-seq profiles can predict A/B compartments in a cell-type-specific manner. Compartment boundaries are newly formed near TAD boundaries A/B compartment boundaries and early/late RT boundaries coincide with TAD boundaries 7 , 12 . To address whether this holds for developmentally regulated boundaries, we first defined TAD boundaries using insulation scores 33 , which allow TAD boundary identification at 40-kb resolution even from low-depth Hi-C data 34 (Fig. 5a and Supplementary Table 5 ). We generated boundary scores as reported 35 and set the threshold to 0.1, which detected more TAD boundaries in nonmetaphase than in metaphase 36 (Supplementary Fig. 7a–c ). As reported 10 , 11 , 12 , day 0 and day 7 TAD boundaries and peak patterns were conserved (Fig. 5a and Supplementary Fig. 7d ). Fig. 5: Developmental regulation of A/B compartment boundaries and their relationship to TADs. a , Relationship between Hi-C PC1 profiles, HMM PC1 profiles (to precisely define compartment boundaries), Hi-C boundary scores and TAD boundaries (threshold score of 0.1). b , Theoretically, A/B compartments change either by boundary shifting or by isolation. Boundary shifting can be either from both sides or from one side. In the former case, relatively short stretches of A- or B-compartment domains become entirely B- or A-compartmentalized, respectively, while the latter represents unidirectional boundary shifting. c , Percentages of boundary shifting and isolation among A to B ( n = 239) and B to A ( n = 152) compartment-switching events, which were defined as A to B or B to A changes spanning >200 kb from day 0 to 7 (see Methods ). d , e , Representative A to B ( d ) and B to A ( e ) compartment (comp.) switches by boundary shifting. f , TAD numbers (rounded to the nearest integer) affected by A to B/B to A compartment changes (see Methods ). g , Size distribution of A to B/B to A compartment changes. h , Size distribution of A-TADs and B-TADs in CBMS1 mESCs. In all box plots, horizontal bars represent the 25th, 50th (median) and 75th percentiles, while the whisker ends represent upper/lower quartile ± 1.5 times the interquartile distance. Dots, outliers. n , sample size. i , Bar plots similar to f , showing the TAD numbers affected during mouse B cell reprogramming. j , k , PCA plots showing the relationship between differentiation intermediates of CBMS1 mESCs, EpiSCs and reprogramming intermediates (from B, Bα, D2, D4, D6, D8, to induced pluripotent stem cells (iPSCs); total sample size n = 15) 37 based on compartment organization ( j ) and RNA-seq ( k ). We used a two-state HMM to subdivide Hi-C PC1 values into A and B (Supplementary Table 5 ) to define compartment boundaries, and asked whether they coincide with TAD boundaries. As expected 7 , 12 , ‘shared’ compartment boundaries between day 0 and day 7 were significantly closer to TAD boundaries than control (Supplementary Fig. 7e ). However, ‘d0-specific’ and ‘d7-specific’ boundaries were also comparably close to TAD boundaries (Supplementary Fig. 7e ).
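A minimal sketch of this boundary-proximity comparison, measuring the distance from each compartment boundary to its nearest TAD boundary; the genomic positions are simulated stand-ins for the HMM- and insulation-score-derived boundary calls.

```r
# Distance from each A/B compartment boundary to the nearest TAD
# boundary on one chromosome; positions (bp) are simulated stand-ins.
set.seed(1)
tad_boundaries  <- sort(sample.int(150e6, 600))
comp_boundaries <- sort(sample.int(150e6, 80))

nearest_dist <- vapply(comp_boundaries,
                       function(b) min(abs(b - tad_boundaries)),
                       numeric(1))

summary(nearest_dist / 1e3)  # distances in kb; in practice these are
                             # compared against randomly placed controls
```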
RT boundaries were similarly close to TAD boundaries, although compartment boundaries were closer (Supplementary Fig. 7f ). Overall, both constitutive and developmentally regulated RT/compartment boundaries form close to TAD boundaries. A/B compartments changed primarily by boundary shifting Theoretically, compartment changes can occur in two ways (Fig. 5b ). First, changes can occur by boundary shifting or sliding, which can occur from one or both sides (Fig. 5b ). Alternatively, a new B-compartment domain can emerge within an A-compartment domain, or vice versa, by isolation (Fig. 5b ). We found that 87% and 94% of A to B and B to A compartment changes, respectively, were boundary shifts (Fig. 5c–e ). Isolation events (Supplementary Fig. 7g ) were rare (Fig. 5b and Supplementary Table 5 ). Compartment changes affect single TADs We counted the number of contiguous TADs affected per switching event. Fifty-six percent and 58% of A to B and B to A switches, respectively, affected single TADs, while only 14% and 24%, respectively, affected ≥2 TADs (Fig. 5f ). The median sizes of single TADs affected were significantly larger for B to A than for A to B changes (Fig. 5g ), and, interestingly, were identical to those of B-compartment TADs (B-TADs) and A-compartment TADs (A-TADs), respectively (Fig. 5h ). This explains why B to A changes are larger, and supports the hypothesis that compartment changes primarily affect single TADs. Compartments change in similar ways during reprogramming To explore compartment regulation in other contexts, we analyzed Hi-C data during mouse B cell reprogramming to induced pluripotent stem cells 37 . Compartments changed primarily by boundary shifting 37 , and, interestingly, they most frequently affected single TADs (Fig. 5i ). However, PCA plots of Hi-C and RNA-seq data indicated that the reprogramming and differentiation trajectories never overlapped, even during differentiation days 2–5, when the cells resemble the epiblast (Fig. 5j,k ). Thus, while all three germ layers originate from the epiblast during differentiation 38 , reprogramming does not necessarily proceed in the reverse order via the epiblast-like state 38 . Similarly, mESCs have been shown to differentiate via multiple paths to the same motor neuron state 39 . Single-TAD-level RT regulation in single cells Can single-TAD-level regulation be observed in single cells by scRepli-seq 13 ? To address this, day 0 and day 7 scRepli-seq profiles surrounding all compartment-switching events affecting single TADs were analyzed (Fig. 6a–h ). Heat maps in Fig. 6a–d show aggregated smoothed average RT values of three consecutive TADs, with the central TAD switching compartments. Here, averaged scRepli-seq profiles sorted by percentage replication scores (percentage of the genome replicated) are shown, with all TADs normalized to the same size. Heat maps of ‘both sides’ boundary-shifting events were consistent with single-TAD-level switching (Fig. 6a,b ). Plotting of compartment boundaries and averaged scRepli-seq RT boundaries corroborated single-TAD-level RT switching of ‘both sides’ events (Fig. 6a,b,e,f ), in which individual compartment domains corresponded to single TADs. For ‘one side’ boundary-shifting events, single-TAD-level regulation was not immediately apparent (Fig. 6c,d,g,h ), although some individual loci clearly showed single-TAD-level RT switching (Fig. 6i,j ). 
We speculate that the combined effect of aggregating averaged RT data of multiple triplet TADs and the slight misalignment between TAD boundaries and RT boundaries (that is, TAD boundaries align better with early borders of TTRs 7 ) probably obscured single-TAD-level regulation in the aggregated smoothed RT heat map of ‘one side’ events (Fig. 6c,d ). Fig. 6: RT profiles of compartment-switching TADs in single cells. a – d , Aggregated smoothed RT heat maps of triplet TADs based on 153 day 0 and 165 day 7 scRepli-seq data. In the heat maps, each horizontal line represents a single cell, sorted by percentage replication score. The x axis shows three consecutive TADs, with the central TAD switching compartments from day 0 to day 7. Three TADs were normalized to the same length (25 bins per TAD) to calculate RT (75 bins total, for triplet TADs). For the 75 bins in each cell, the average RT values of 75, 44, 38 and 43 triplet TADs categorized as ‘A to B, both sides’ ( a ), ‘B to A, both sides’ ( b ), ‘A to B, one side’ ( c ) and ‘B to A, one side’ ( d ), respectively, were calculated. Then, the average RT values of each cell (75 bins) were smoothed and color-coded to generate the aggregated smoothed RT heat map shown. Day 0 and day 7 heat maps have identical percentage replication score distributions for a fair comparison. See Methods for details. e – h , Plots similar to a – d showing aggregated smoothed RT and Hi-C PC1 line graphs of triplet TADs. For RT, early/late borders were modeled by LOESS regression to find percentage replication scores that are closest to 0.5 for each of the 75 bins. Similarly, for Hi-C, the PC1 lines trace the A/B compartment boundary (that is, PC1 = 0) of population Hi-C averages. i , j , scRepli-seq profiles of representative A to B ( i ) and B to A ( j ) regions that show ‘one side’ boundary shifting of single TADs (gray arrows). scRepli-seq profiles are ordered by their percentage replication score. TAD boundaries common to day 0 and day 7 are shown. RT, BrdU-IP RT. Xist cloud formation precedes RT change of the inactive X Thus far, we have excluded the X chromosome from the analysis for a fair male/female comparison. Here, we addressed the timing of early to late RT and compartment changes of the inactive X (Xi) upon X-chromosome inactivation (XCI) relative to autosomal RT and compartment changes. While the active X (Xa) and the Xi exhibit markedly distinct Hi-C profiles 40 , 41 , 42 , 43 , Xi-specific Hi-C is possible only when both haplotype-resolved analysis and 100% XCI skewing are available 42 . To predict the Xi compartment organization in a system without these assets, we analyzed the X-chromosome RT, which can identify a late-replicating Xi 44 in female cells despite representing the sum of two Xs 5 . In day 0 mESCs, X-chromosome RT exhibited many early-replicating peaks, reflecting the presence of two early-replicating Xa (Fig. 7a ). These peaks persisted until day 4 but became later replicating during days 5–7 (Fig. 7a and Supplementary Table 6 ). Unlike on autosomes (Figs. 3 and 4 ), X-chromosome RT became closest to EpiSCs on day 7 on a PCA plot (Fig. 7b ). As expected, female and male MEFs exhibited late-replicating and early-replicating X-chromosome profiles, respectively (Fig. 7a ), and the female MEFs resembled day 7 cells and EpiSCs (Fig. 7b ). Identifying A/B compartments on the murine Xi is difficult due to its chromosome-wide heterochromatinization 40 , 41 , 42 , 43 .
Nonetheless, chromosome-wide early to late RT changes predict that the Xi undergoes A to B compartment switching during days 5–7. These Xi changes appear to temporally follow the autosomal compartment changes, which occur slightly earlier (Figs. 2 – 4 ). Fig. 7: Xist cloud formation precedes RT change of the inactive X. a , X-chromosome RT profiles of CBMS1 female mESCs during differentiation, female EpiSCs and male or female MEFs. Female X-chromosome RT profiles represent the sum of two Xs. b , A PCA plot showing the relationship between X-chromosome RT profiles of differentiation intermediates of CBMS1 mESCs, EpiSCs and MEFs (total sample size n = 10). c , Xist RNA expression during CBMS1 mESC differentiation and in EpiSCs and MEFs as assayed by RNA-seq. Dots represent experiments. Bars represent the mean of three (day 0, days 3–6, MEFs, EpiSCs) or five (day 2, day 7) independent experiments. Error bars, ±1 s.d. from the mean. d , Xist RNA FISH during CBMS1 mESC differentiation. The percentages of nuclei with a Xist cloud are shown. e , Haplotype-resolved, binarized mid-S scRepli-seq profiles of day 0 mESCs, day 2 EpiLCs and day 7 cells. On day 7, the later-replicating X was defined as the Xi and the earlier-replicating counterpart as the Xa. f , A model of A/B compartment dynamics during differentiation. A tug-of-war situation exists between neighboring A and B compartments that frequently affects single TADs by boundary shifting, from one side or from both sides, which is accompanied by subnuclear repositioning. Cumulative boundary shifting might represent or reflect major cell-fate transitions, such as the naïve to primed pluripotency transition. In addition, chromatin interactions within the A compartments (A–A) are weakened upon mESC differentiation, while those within the B compartments (B–B) are strengthened. This transition roughly coincides with the acquisition of the EpiSC state. In contrast, the long noncoding RNA, Xist , which presumably plays a central role in XCI 45 , was sharply upregulated by day 2 and maintained thereafter (Fig. 7c ). Xist RNA had already coated the Xi in 65% of day 2 EpiLCs (Fig. 7d ), which clearly preceded the early to late RT change of Xi. Haplotype-resolved mid-S scRepli-seq profiles corroborated the absence of a late-replicating X up to day 2 and its presence by day 7 (Fig. 7e ). Given the close coordination of RT and compartment changes (Fig. 2 ), A to B compartment changes on the Xi may also be preceded by Xist cloud formation. Discussion In this study, we performed RT and Hi-C analyses at one-day intervals during mESC differentiation and analyzed the relationships between TADs, A/B compartments and replication domains in detail. Changes in compartments and RT were highly correlated and accompanied by subnuclear repositioning, with B to A compartment changes preceding late to early RT changes and transcriptional upregulation (Fig. 2 ). Population Hi-C showed that compartments change by boundary shifting, frequently affecting single TADs, which was conserved during reprogramming (Fig. 5 ) and consistent with single-TAD-level RT switching in single cells (Fig. 6 ). Furthermore, scRepli-seq showed gradual but uniform RT changes, suggesting an intriguing possibility that cells respond to differentiation cues and change RT/compartment organization uniformly within a population (Fig. 3 ). The scRepli-seq profiles could predict A/B compartments on the Xi or in single cells, for which obtaining compartment profiles is technically challenging (Fig.
7 and Supplementary Fig. 6). Upon differentiation of naïve mESCs, cells acquired an A/B compartment profile similar to that of primed EpiSCs (Figs. 3 and 4), suggesting that cumulative boundary shifting represents or reflects major cell-fate transitions (Fig. 7f). During mESC differentiation, compartment boundaries frequently shifted from one TAD boundary to the next, affecting single TADs. Single-TAD-level regulation was confirmed in single cells by scRepli-seq (Fig. 6), supporting the hypothesis that single TADs are units of RT or compartment regulation. However, because TAD boundaries align better with the early borders of TTRs 7 , which are long (TTRs are replicated by a single replication fork), detecting single-TAD-level RT regulation by scRepli-seq is challenging. Thus, the scRepli-seq analysis may have underestimated the rate of single-TAD-level regulation. Compartment changes were accompanied by changes in subnuclear positioning, RT and NL association, with B compartments being more peripheral, later replicating and closer to the NL. Theoretically, compartment changes could occur without actual movement of loci, for instance, by repositioning of the A/B compartment interface in the nuclear space. However, this does not fit our DNA FISH data, which show radial repositioning of individual loci (Fig. 2c). Because B to A compartment changes generally preceded late to early RT changes (Fig. 2), entry into an A compartment could occur first and causally affect RT, for instance, by facilitating the recruitment of replication initiation factors. In contrast, A to B compartment changes roughly coincided with early to late RT changes. It may be that B compartments are simply incompatible with early replication owing to their deficiency in replication initiation factors during early S phase. While we cannot currently manipulate compartments to test causality, the physical separation of A and B compartments during early G1 phase coincides with the establishment of an RT program 46,47 . B to A compartment changes also preceded transcriptional upregulation during differentiation (Fig. 2), consistent with B to A and late to early changes being a prerequisite for transcriptional activation. In contrast, A to B changes and transcriptional downregulation showed no specific temporal order, which is reasonable given that any gene can be turned off, even within the A compartment. Interestingly, the day 0–2 (mESC to EpiLC) transition was not accompanied by compartment changes, and yet gene expression changed extensively (Fig. 3). Thus, it appears that differentiation-induced transcriptional activation can occur either upon a B to A compartment switch or within the A compartment. Activation of A-compartment genes in EpiLCs is accompanied by extensive changes in local enhancer usage 25 , which could be related to the strengthening of chromatin interactions within the A compartment (Fig. 3). Our results are consistent with, and update, the replication domain model of Pope et al. 7 , suggesting a tug-of-war situation between neighboring A and B compartments for boundary positioning, which affects single TADs (Fig. 7f). One possibility is that developmentally regulated genes are somehow confined to genomic regions adjacent to A/B compartment boundaries, and their local chromatin activation or repression causes compartment changes. This could explain why compartment changes are confined to regions adjacent to A/B compartment boundaries.
Alternatively, physical proximity in 3D to euchromatin or heterochromatin at compartment boundaries might determine the propensity of genomic regions to switch compartments during development, which may in turn affect local chromatin activation or repression. We favor the latter possibility because the genomic regions flanking A/B compartment boundaries naturally fulfill this criterion and, perhaps as a result, show a higher compartment-switching tendency and larger cell-to-cell RT heterogeneity in mESCs, even before differentiation 13 . In fact, this explanation is consistent with the observation that the earliest- and latest-replicating portions of the genome do not change RT, while intermediate RT regions tend to change RT during development 5 . While hESCs changed RT and compartments upon differentiation, these changes were not tightly coordinated with expression changes 12,20 . In contrast, during mESC differentiation, changes in RT, compartments and expression were coordinated. While this could be due to species differences, the developmental stages examined could also be important. For instance, during the naïve to formative 48 (mESC to EpiLC) transition, compartments did not change, while expression changed significantly within the A compartment. In contrast, compartments changed considerably during the formative to primed transition (days 4–6), and these changes were coordinated with RT and expression changes. Thus, coordinated changes may be evident only when the changes are sufficiently large. Unlike that of TADs 49 , the significance of A/B compartments is still unclear. However, two well-studied stem cell types with rather indistinct epigenetic differences 26 , mESCs and mEpiSCs, could be clearly distinguished by their compartment organization, suggesting its potential significance. Differentiating mESCs transiently acquired compartment profiles resembling those of EpiSCs, as if they underwent a naïve to primed transition in compartment organization. Moreover, scRepli-seq profiles changed gradually but uniformly, raising the intriguing possibility that A/B compartments may also change gradually but uniformly in individual cells within a differentiating population, which has not been extensively discussed in the literature. It may be that the RT and compartment heterogeneity of cells within a differentiating population or tissue is smaller than one might think. We also predicted the Xi compartment organization by RT profiling. The highly uniform scRepli-seq profiles among cells, whether on the X chromosomes or autosomes, suggest that compartment profiles are conserved among cells. However, this conservation does not necessarily mean that all cells have identical patterns of chromatin interactions and subnuclear positions. For instance, a given B-TAD may associate with different sets of B-TADs in different cells, or could be near the nuclear periphery in one cell but near the nucleolar periphery in another 50 . In summary, our data provide insights into the regulatory principles of 3D genome organization in single cells during differentiation (Fig. 7f). Moreover, scRepli-seq may serve as a valuable means for cell-type profiling from a ‘3D genome’ standpoint. Methods Cell culture and mESC differentiation CBMS1 mESCs were grown in 2i and LIF medium as described 53 . CBMS1 mESCs were differentiated to EpiLCs for 2 d in the presence of Activin A, bFGF and knockout serum replacement and then switched to EB aggregation culture in Nunclon Sphera 96U-well plates (ThermoFisher Scientific, catalog no.
174925), starting from 2,000 EpiLCs per well, exactly as previously described 53 , except for the use of GMEM + 15% knockout serum replacement 53 without any additional factors during the aggregation culture. This process is identical to the SFEBq neural method of mESC differentiation (serum-free floating culture of EB-like aggregates with quick reaggregation) 15 , except that we started from EpiLCs instead of mESCs. This resulted in efficient formation of neurectoderm, based on RNA-seq. For FACS, cells were fixed in 75% ethanol as described 5 ; day-7 EBs were first dissociated into single-cell suspensions with trypsin 54 . EpiSCs (female) were cultured as described 55 . Male and female MEFs were isolated from E12.5 embryos of C57BL/6 mice and were cultured in DMEM + 10% FBS and penicillin/streptomycin. Immunostaining and immunohistochemistry Primary antibodies used were as follows: anti-Sox1 (goat polyclonal, R&D, catalog no. AF3369), anti-Oct3/4 (mouse monoclonal, C-10, Santa Cruz, catalog no. sc-5279), anti-Nanog (rat monoclonal, ThermoFisher Scientific, catalog no. 14-5761-80) and anti-Eomes (rat monoclonal, Abcam, catalog no. ab23345). Secondary antibodies used were as follows: Alexa488 donkey anti-goat IgG (Jackson ImmunoResearch, catalog no. 705-545-003), Alexa488 donkey anti-rabbit IgG (Jackson ImmunoResearch, catalog no. 711-545-152), Cy3 donkey anti-mouse IgG (Jackson ImmunoResearch, catalog no. 715-165-151), Cy3 donkey anti-rat IgG (Jackson ImmunoResearch, catalog no. 712-165-153) and Cy3 donkey anti-goat IgG (Jackson ImmunoResearch, catalog no. 705-165-147). CBMS1 mESCs were grown for 3–5 nights on a chamber slide (Matsunami) coated overnight with poly-l-ornithine (0.01% poly-l-ornithine solution; Sigma, catalog no. P3655) and then overnight with 300 ng ml−1 laminin in DMEM/F12:NBM (1:1), followed by two PBS washes. EpiLCs were grown for 3–5 nights on a chamber slide coated with 16.67 ng ml−1 fibronectin (ThermoFisher Scientific, catalog no. 33016015) in PBS. Cells were fixed in 3% paraformaldehyde/PBS (pH 7.4) for 10 min at room temperature. After three washes with PBS, cells were permeabilized with 0.5% Triton X-100 in PBS for 5 min, washed with PBS and blocked with blocking solution 1 (BS1; 1% BSA/PBS) for 1 h. After washing with PBS, slides were incubated with primary antibodies diluted in HIKARI Solution-A (Nacalai Tesque, catalog no. 237354; Sox1, 1:200; Oct3/4, 1:500; Nanog, 1:500; Eomes, 1:400) overnight at 4 °C. After washing with PBS-T (0.05% Tween-20/PBS), slides were incubated with fluorescently labeled secondary antibodies (1:500) in BS1 for 1 h at room temperature in a light-protected container. After washing in PBS-T, cells were stained with 0.5 µg ml−1 DAPI in PBS-T. After washing with PBS-T, slides were mounted in PermaFluor (ThermoFisher Scientific, catalog no. TA-030-FM) and images were captured using a Nikon inverted fluorescence microscope (Eclipse Ti-E). EBs were fixed in 1% paraformaldehyde in phosphate buffer (0.1 M, pH 7.4) at 4 °C overnight with gentle rotation. After fixation, EBs were washed in phosphate buffer with 10 mM glycine three times at 4 °C for 5 min each. After washing with PBS, EBs were incubated in 25% sucrose in phosphate buffer overnight at 4 °C with gentle rotation. After removal of the sucrose solution, EBs were collected with glass Pasteur pipettes and embedded in OCT compound blocks (Tissue-Tek, catalog no. SFJ:4583), which were frozen on a metal block cooled with liquid nitrogen and stored at −80 °C.
Sections of 10-μm thickness were prepared using a cryostat, placed onto glass slides and stored at −80 °C. For use, slides were returned to room temperature for 30 min and dried for at least 5 min with a dryer without heat. Then, slides were washed with PBS-T and incubated in 1× HistoVT One (Nacalai Tesque, catalog no. 06380-76) antigen retrieval solution for 30 min at 50 °C. After washing with PBS-T, samples were permeabilized and blocked in blocking solution 2 (BS2; 0.2% Triton X-100/1% BSA/PBS-T) for 1 h at room temperature. Then, slides were incubated with primary antibodies, washed, incubated with secondary antibodies, washed, stained with DAPI and mounted. The entire set of images using the same primary antibodies was captured with the same exposure time. To count the number of cells expressing Sox1, Oct3/4, Eomes and Nanog (days 3–7), cell debris at the center of EBs 56 , which emerged after day 4 (Supplementary Fig. 2b; identified by DAPI), was excluded from the analysis. Then, we manually selected all cell nuclei within each EB section, and the median signal intensity of each nucleus was computed using Fiji software 57 . We empirically set the threshold to 256 to define ‘positive’ cells (see Supplementary Fig. 2c). Sample preparation for RT profiling by BrdU-IP (Repli-chip) We followed our routine BrdU-immunoprecipitation (IP)-based protocol, as described 18 . For FACS, we used a Sony SH800 cell sorter (ultrapurity mode), fractionating early and late S-phase populations. The BrdU-IP protocol has been described in detail elsewhere 18 . We used a Bioruptor UCD-250 (Sonic Bio) for genomic DNA sonication (high-output mode), with on/off pulse times of 30 s/30 s for 6 min. After BrdU-IP, immunoprecipitated DNA samples were subjected to whole-genome amplification with a GenomePlex kit (Sigma, WGA2) for comparative genomic hybridization (CGH) microarray analysis 18 . In this study, we used the SurePrint G3 Mouse CGH 4 × 180K Array from Agilent (G4839A), labeling early- and late-replicating DNA samples after whole-genome amplification with Cy3 and Cy5 or vice versa, followed by overnight hybridization, washing and slide scanning, according to the manufacturer’s instructions. Sample preparation for RT profiling of single cells (scRepli-seq) Single cells (25%, 50% (mid-S) or 75% through S phase, or in G1 phase) were sorted with a Sony SH800 cell sorter (single-cell mode) (Fig. 4a). Sample preparations were performed as described 13 . In total, 884 single cells (805 S phase and 79 G1 phase) were analyzed. For detailed statistics, see Supplementary Table 3. Hi-C and library preparation Hi-C experiments were performed as previously described 58 , using 1–2 × 10 6 fixed cells. Hi-C libraries were subjected to paired-end sequencing (80, 125 or 150 base pair (bp) read length) using HiSeq 1500 or HiSeq X Ten. RNA extraction and library preparation Cells were lysed in TRI Reagent (Molecular Research Center, catalog no. TR 118) to extract total RNA. For RNA-seq, library preparation was performed using 500 ng of total RNA following the standard protocol of the TruSeq Stranded mRNA Sample Prep Kit (Illumina). RNA-seq libraries were sequenced as 80-bp single-end reads on a HiSeq 1500. Sequential RNA/DNA FISH For Xist RNA FISH, the pXist cDNA-SS12.9 plasmid 59 (a gift from T. Sado) was used as a probe template.
For X-chromosome territory DNA FISH, the following nine bacterial artificial chromosomes (BACs), decorating the entire X chromosome at ~20-Mb intervals, were used: RP23-413B4, RP23-6F23, RP23-480P14, RP23-11F10, RP23-36D11, RP23-392N24, RP23-316A19, RP23-371D3 and RP23-180G4. Briefly, BACs/plasmids were individually labeled with fluorescent dUTP (Green-dUTP (Enzo Life Sciences, catalog no. 02N32-050) or Red-dUTP (Enzo Life Sciences, catalog no. 02N34-050)) by nick translation (Abbott Molecular, catalog no. 07J00-001 (32-801300)). Labeled DNA probes, mouse Cot-1 (ThermoFisher Scientific, catalog no. 18440-016) and salmon sperm DNA (ThermoFisher Scientific, catalog no. 15632-011) were ethanol-precipitated, resuspended in hybridization buffer (10% dextran sulfate, 2× SSC, 1% Tween-20, 50% formamide) and denatured at 80 °C for 10 min before hybridization. For sequential RNA/DNA FISH, Xist RNA FISH was performed first. Fixed cells were dropped onto glass slides as described 4 and dried, and the slides were washed with 2× SSC and dehydrated in a series of 5-min washes with 70%, 90% and 100% ethanol at room temperature. After an overnight hybridization at 37 °C, slides were washed with 2× SSC three times and counterstained with DAPI before mounting with Vectashield (Vector Laboratories, catalog no. H1000). RNA FISH signals and their xy coordinates were recorded on a DeltaVision microscope (Olympus IX71) equipped with an Olympus PlanApo ×60/1.42 numerical aperture oil objective. For DNA FISH after Xist RNA FISH, coverslips were removed after image acquisition on the DeltaVision. Slides were washed with 2× SSC three times at 45 °C and incubated in 10 µg ml−1 RNaseA in 2× SSC for 1 h at 37 °C. Slides were washed once with 2× SSC and dehydrated by sequential 5-min washes with 70%, 90% and 100% ethanol at room temperature, before being air-dried at 58 °C for 1 h. Slides were then denatured in 70% formamide in 2× SSC at 80 °C for 3 min, dehydrated by sequential washes with cold 70%, 90% and 100% ethanol and air-dried until hybridization. After an overnight hybridization at 37 °C, slides were washed with 2× SSC three times and counterstained with DAPI before mounting with Vectashield. We recorded DNA FISH signals on the DeltaVision at the same xy coordinates as the RNA FISH. Images were deconvolved using algorithms in the SoftWorx package (Applied Precision) and analyzed using Fiji software 57 . For counting nuclei with a Xist cloud, we limited the analysis to nuclei with two X chromosomes observed by DNA FISH to exclude occasional XO nuclei that had lost one X chromosome. Computations associated with RT profiling Microarray-based RT (Repli-chip) data were processed as described 18 . After obtaining raw genome-wide RT data (log 2 (early/late)), quantile normalization was performed using the limma package 60 . Sex chromosomes were excluded from analyses except when the X chromosomes were specifically analyzed. To compare our data with published RT data 5 , we calculated the mean RT values of 200-kb bins using raw RT data (excluding the sex chromosomes) and performed clustering analyses. For haplotype-unresolved scRepli-seq, read mapping and binarization (that is, making replicated or unreplicated calls) were performed as described previously in 80-kb bins 13 . Sex chromosomes were excluded from analyses except when the X chromosomes were specifically analyzed. For binarization, different options were applied to each cell depending on its percentage S-phase value, as defined by the FACS gate positions in Fig.
4a (2-HMM option for <50% S phase and G1 cells: most.frequent.state = ‘1-somy’; 2-HMM option for ≥50% S phase cells: most.frequent.state = ‘2-somy’). To calculate the correlation between Hi-C PC1 and binarized scRepli-seq profiles, we selected mid-S cells with 45–65% replication scores, analyzed the data in 40-kb bins, and used 28 and 15 datasets for day 0 mESCs and day 7 differentiated cells, respectively. To analyze the X chromosomes, haplotype-resolved scRepli-seq was performed as described 13 in 400-kb bins. We selected mid-S cells with 40–70% replication scores (days 0, 2 and 7) and two X chromosomes that were not called as ‘outlier cells’. The log 2 ((mappability-corrected mid-S reads)/median) scores were computed using G1 control cells. The late-replicating Xi was identified on a tSNE plot (Rtsne R package) using the log 2 scores of the X chromosomes, as described 13 . Visualization of scRepli-seq profiles To visualize the high-dimensional scRepli-seq data, we used force-directed layouts of k-nearest-neighbor graphs generated with the SPRING algorithm 31 . A matrix of binarized calls (0, no data; 1, unreplicated; 2, replicated), with each row and column representing 80-kb genomic bins and samples, respectively, was used as input data (both S-phase and G1-phase cells). First, we filtered out the rows with ‘no data (0)’ in more than 50 single cells. PCA was performed using this filtered matrix, and then k-nearest-neighbor graphs (k = 5) were computed with the ‘Euclidean’ distance metric using all principal components (as many components as the total number of cells). After this, relationships among cells were visualized by SPRING using 5,000 iterations. Identification of RT trajectories from scRepli-seq data The S-phase RT trajectory (excluding G1 cells) was modeled by LOESS regression (span = 0.25) using the (x, y) coordinates of the SPRING plot. The x and y axes reflected the percentage replication scores and the degree of differentiation, respectively. We calculated the median ± 1.5 × IQR (interquartile range) of the differences in y-axis values from the LOESS curve for each sample set. S-phase cells with >5% and <95% replication scores were used for further analysis, and cells outside the median ± 1.5 × IQR range were defined as ‘outliers’ (Supplementary Table 3). After outlier removal, we performed another round of outlier filtering by a method described by Dileep and Gilbert 32 . By this approach, two and four cells were removed from day 0 and day 7, respectively. Computations associated with Hi-C data processing and A/B compartment calculation Read pairs were individually mapped to the mouse genome (mm9) or human genome (hg19) using the hiclib pipeline 61 with the iterative mapping method, or the juicer pipeline 62 . After read mapping, each side of the read pairs was processed through the hiclib pipeline: the bam files in the case of hiclib's iterative mapping, and the output file (‘merged_sort.txt’) in the case of the juicer pipeline, via an in-house script. For E14 mESCs 52 , the ‘GSM2026260_E14_ESC_1_summary.txt.gz’ file was applied to the hiclib pipeline using an in-house script. First, uniquely mapped read pairs (MAPQ ≥ 30 for the juicer pipeline) were assigned to HindIII or DpnII fragments.
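As an aside, the trajectory outlier-filtering step described above is simple enough to sketch. The fragment below is a hypothetical Python rendering (statsmodels' lowess standing in for the R LOESS regression actually used; variable and function names are illustrative, not the authors' code):

```python
# Hypothetical sketch of the scRepli-seq trajectory outlier filter:
# fit a LOESS-like curve through the SPRING (x, y) coordinates of
# S-phase cells and flag cells whose vertical residual falls outside
# median +/- 1.5 * IQR, as described in the text.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def flag_trajectory_outliers(x, y, frac=0.25):
    """x, y: SPRING-plot coordinates of S-phase cells (1D arrays)."""
    fitted = lowess(y, x, frac=frac, return_sorted=False)  # fitted y per cell
    resid = y - fitted
    q1, q3 = np.percentile(resid, [25, 75])
    med, iqr = np.median(resid), q3 - q1
    lo, hi = med - 1.5 * iqr, med + 1.5 * iqr
    return (resid < lo) | (resid > hi)  # True marks an 'outlier' cell
```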
To filter out self-ligated and nonligated (dangling-end) read pairs, we performed three types of filtering to remove: (1) reads starting within 5 bp of the restriction sites (only applied to reads mapped with the iterative mapping method), (2) reads from extremely large or small restriction fragments (>100 kb and <100 bp, respectively) and (3) reads from restriction fragments with extremely high or low counts (top 0.5% of all counts and zero counts, respectively). Duplicated read pairs were removed. For Hi-C data during reprogramming 37 , we used 8% of the filtered read pairs, selected by random sampling (8% matched our read depth). A summary of the mapped read pairs in this study, including total mapped read pairs and filtered read pairs, is provided in Supplementary Table 3. Biological replicates were merged and Hi-C contact heat maps were generated at 40- or 200-kb bins. To correct biases in the contact heat maps, iterative correction was performed as described 58 . The bias-corrected contact heat maps were used to generate the A/B compartment (Hi-C PC1) profiles (in 200-kb bins or in sliding 200-kb windows at 40-kb intervals) for each chromosome by the hiclib pipeline, with a small modification as described 58 . After obtaining the A/B compartment profiles, quantile normalization was performed using the limma package 60 . For comparative analysis of RT and A/B compartment profiles with LAD profiles 21 , we calculated the mean values of the LAD profiles in 200-kb bins genome-wide. Computation of the A/B compartment strength We quantified the A/B compartment strength as described 37 . Briefly, 200-kb iteratively corrected heat maps were converted to .hic format and the first eigenvalues (equivalent to A/B compartments) were generated from the .hic file by the juicer pipeline 62 . Average contact enrichments (log 2 (obs/exp)) between pairs of 200-kb bins, arranged by one-percentile groups of first eigenvalues, were computed and shown as a heat map. The mean contact enrichments of pairs within the top or bottom 20-percentile groups of first eigenvalues were defined as the A–A or B–B compartment strength, and their ratio was computed as log 2 (A–A/B–B). Segmentation of RT and A/B compartment profiles by HMM For segmentation of RT/compartment profiles, a four-state HMM was applied to 200-kb resolution data using the RHmm package. First, k-means (k = 4) analysis using all chromosome data (excluding the sex chromosomes) of each sample was performed to determine the initial parameters of each state. Then, the HMM was trained with the ‘HMMFit’ command of the RHmm package using the Baum–Welch algorithm. To calculate the optimal hidden-state sequence, the Viterbi algorithm (‘viterbi’ command of the RHmm package) was applied to each trained HMM. We subdivided the genome into four RT groups from early to late S (early-I, early-II, late-II and late-I) and into four A/B compartment (Hi-C PC1) groups, from A to B (A-I, A-II, B-II and B-I), according to the mean values of the distribution parameters of the four HMM states. To apply a two-state HMM to RT (in 40-kb bins) and A/B compartment (in sliding 200-kb windows at 40-kb intervals) data, k-means (k = 2) analysis using each chromosome's data (excluding the sex chromosomes) of each sample was performed for the initialization, and the HMM was trained by the Baum–Welch algorithm. The Viterbi algorithm was again used to calculate the optimal hidden-state sequence. We assigned early and late, or A and B, according to the mean values of the distribution parameters of the two HMM states.
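The two-state segmentation just described can be sketched in Python, with hmmlearn standing in for the RHmm R package that was actually used (k-means initialization, Baum–Welch training, Viterbi decoding; names are illustrative):

```python
# Minimal sketch of two-state HMM binarization of an RT or Hi-C PC1
# track into early/late (or A/B), mirroring the described workflow:
# k-means initialization, Baum-Welch (EM) training, Viterbi decoding.
import numpy as np
from sklearn.cluster import KMeans
from hmmlearn.hmm import GaussianHMM

def binarize_track(values):
    """values: 1D array of RT or PC1 values for one chromosome (no NaNs)."""
    X = values.reshape(-1, 1)
    km = KMeans(n_clusters=2, n_init=10).fit(X)        # initial state means
    hmm = GaussianHMM(n_components=2, covariance_type="diag",
                      init_params="stc", n_iter=100)    # keep k-means means
    hmm.means_ = km.cluster_centers_
    hmm.fit(X)                                          # Baum-Welch training
    states = hmm.predict(X)                             # Viterbi decoding
    high = int(np.argmax(hmm.means_.ravel()))           # higher-mean state
    return np.where(states == high, "early/A", "late/B")
```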
This binarization of the genome into A and B compartments allowed us to precisely define A/B compartment boundary positions; boundaries located within 200 kb of each other in day 0 mESCs and day 7 cells were defined as ‘shared’ boundaries common to day 0 and day 7 cells, while the rest were defined as developmentally regulated boundaries (either ‘d0-specific’ or ‘d7-specific’). TAD boundary scores and boundary calling To call TAD boundaries, the insulation score method 33 and the delta vector method 33,35 were applied. The insulation scores were calculated using the ‘matrix2insulation.pl’ script (options: -is 200000 -ids 0 -im ‘mean’ -bmoe 0 -nt 0). The script was run on a bias-corrected 40-kb resolution Hi-C matrix. The delta vector method has been described 33,35 . Briefly, this method estimates the local minima in the insulation scores. First, the delta vector was calculated at a 200-kb window size (5 bins) from the insulation score, as described 33,35 . Then, the boundary scores were calculated by subtracting the first derivative from the second derivative of the Savitzky–Golay-filtered delta vector (5 bins, 2 degrees), and the local maximum bin of the boundary score in each continuous positive region was defined as a putative TAD boundary. To call TAD boundaries, we set the threshold to 0.1 (see Supplementary Note 1). Day 0 and day 7 TAD boundaries located within 80 kb of each other were defined as shared boundaries common to day 0 and day 7 cells. Analysis of the overlap of A/B compartments, TADs and their boundaries Day 0 and day 7 TAD boundaries were combined into a single list. Then, cumulative probabilities of the overlap between this combined TAD boundary list and the A/B compartment boundaries of a given cell type from the two-state HMM results were computed based on their nearest distance. Randomly permuted TAD boundaries were generated using the ‘shuffle’ command of Bedtools v.2.17.0 (ref. 63). To determine the number of TADs affected by compartment changes, a list of compartment-switching regions from the two-state HMM results was generated. By analyzing the overlap of compartment-switching regions and TADs, we counted the numbers of TADs that were affected by compartment changes, with the affected TAD numbers rounded to the nearest integer. That is, we counted the number of TADs as one if >50% of a single TAD switched compartment. Similarly, the count would be two if one TAD switched and another flanking TAD also switched compartment for >50% of its sequence. For TAD size measurement, we defined A-TADs and B-TADs as TADs with >50% of their sequence embedded in the A and the B compartment, respectively. We excluded TADs with >50% ‘NA’ bins. Generating aggregated smoothed RT heat maps of compartment-switching TADs from scRepli-seq Aggregated smoothed RT heat maps of triplet TADs were derived from binarized scRepli-seq data. The central TAD switches compartments from day 0 to day 7. To normalize TAD lengths, we downsized each TAD into 25 bins. For one-side compartment switches, we inverted the data from left-to-right boundary-shifting regions so that they could be averaged together with right-to-left boundary-shifting regions. We then calculated the RT values of 75 bins (per three TADs) for the 75, 44, 38 and 43 triplet TADs that were categorized as ‘A to B, both sides’ (Fig. 6a), ‘B to A, both sides’ (Fig. 6b), ‘A to B, one side’ (Fig. 6c) and ‘B to A, one side’ (Fig. 6d), respectively, based on Fig. 5c,f.
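The per-TAD length normalization and triplet assembly can be illustrated with a short hypothetical sketch (np.interp is one simple way to downsize a TAD's binarized calls to 25 bins; this is not the authors' code):

```python
# Illustrative sketch: rescale each TAD's binarized RT calls to 25 bins,
# concatenate a triplet into a 75-bin profile, and flip left-to-right
# 'one side' events so they can be averaged with right-to-left ones.
import numpy as np

BINS_PER_TAD = 25

def rescale_tad(calls):
    """calls: 1D array of per-bin calls (1 = replicated, 0 = unreplicated)."""
    src = np.linspace(0.0, 1.0, len(calls))
    dst = np.linspace(0.0, 1.0, BINS_PER_TAD)
    return np.interp(dst, src, calls.astype(float))

def triplet_profile(left, center, right, flip=False):
    prof = np.concatenate([rescale_tad(t) for t in (left, center, right)])
    return prof[::-1] if flip else prof  # 75 bins per cell and triplet
```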
We did this in each cell and calculated the average RT of each category for day 0 and day 7 cells. We first generated ‘original’ aggregated day 0 and day 7 RT matrices, sorted by the percentage replication scores of the cells. Then, we merged these two original matrices to generate a single matrix ordered by the percentage replication scores of day 0 and day 7 cells combined. From this single matrix, we generated two intermediate matrices for day 0 and day 7 by assigning NA values to the day 7 cells and the day 0 cells, respectively. We treated these intermediate matrices as image matrices and applied a kernel smoother to each matrix using the image.smooth function (option: theta = 0.3) of the fields R package. From each of these ‘smoothed’ matrices, we obtained the aggregated smoothed RT heat maps of scRepli-seq profiles shown in Fig. 6a–d. For RT, early and late RT borders were modeled by LOESS regression (span = 0.25) to find percentage replication scores that are closest to 0.5 (that is, 0.5 ± 0.05) for each bin (75 bins total). For Hi-C PC1, the format is almost identical to RT, except that the PC1 values come from population Hi-C data, taking averages of a set of triplet TADs for each line. RNA-seq analysis Before mapping, adapter-sequence trimming and removal of low-quality base reads were performed by cutadapt v.1.4.1 (ref. 64) and the FASTX-Toolkit v.0.0.14. After these procedures, fastq files were aligned to the mouse genome (UCSC mm9) by HISAT2 v.2.0.4 (ref. 65). Mapped reads were quantified against the annotated UCSC transcriptome for mm9 to calculate FPKM (fragments per kilobase per million mapped fragments) values using the Cuffdiff program of the Cufflinks package v.2.2.1 (ref. 66). The log FC (fold change) values for d0 versus d2 and d0 versus d7 were calculated using the edgeR package in R 67 . To compare CBMS1 and D3 mESCs, differentially expressed genes were selected by analyzing D3 mESC expression microarray data ( GSE17980 ) using GEO2R (adjusted P value < 0.05). Using this selected gene list, we computed the Pearson correlation coefficient between the CBMS1 and D3 mESC log FC data. Adjusted P values represent P values obtained from the moderated t statistics after correction for multiple hypothesis testing according to Benjamini and Hochberg. To compare gene expression changes during mESC differentiation, we calculated vsd counts by variance-stabilizing transformation using the DESeq2 package 68 in R; genes with significant expression-level changes at any time point were identified by the nbinomLRT test of DESeq2 (false discovery rate, FDR < 0.01) together with a >2-fold change between at least two time points (average vsd values). FDR is the false discovery rate analog of the P values obtained from the likelihood ratio test (nbinomLRT) after correction for multiple hypothesis testing according to Benjamini and Hochberg. Clustering analysis The k-means clustering was performed using Cluster 3.0 (Euclidean distance as the similarity metric) 69 for the time-course RT data (day 0 to day 7, k = 30). For this, the converted four-state HMM RT data from early to late S (early-I, early-II, late-II and late-I states were assigned +5, +1, −1 and −2, respectively; see Supplementary Table 4) in 200-kb bins were used.
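A toy sketch of this conversion step, mapping the four HMM states to the numeric scores stated above before k-means clustering (hypothetical Python; the real analysis used Cluster 3.0):

```python
# Toy sketch: convert four-state HMM calls per 200-kb bin into the numeric
# scores stated in the text, producing a bins x time-points matrix that can
# be fed to k-means (k = 30) clustering.
import numpy as np

STATE_SCORE = {"early-I": 5, "early-II": 1, "late-II": -1, "late-I": -2}

def hmm_states_to_matrix(states_by_day):
    """states_by_day: {day: [state label per 200-kb bin]}, equal lengths."""
    days = sorted(states_by_day)
    cols = [np.array([STATE_SCORE[s] for s in states_by_day[d]]) for d in days]
    return np.column_stack(cols)  # one row per bin, one column per day
```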
The mean RT value of each cluster was calculated and we categorized the clusters as follows: constitutive early (mean RT of >0 for all time points), constitutive late (mean RT of <0 for all time points), early to late (mean RT of >0 on day 0 and then <0 at some later time point, with >0.5 RT difference between any two days during differentiation), late to early (mean RT of <0 on day 0 and then >0 at some later time point, with >0.5 RT difference between any two days), transient early (mean RT of <0 on day 0, transiently >0 and then back to <0, with >0.5 RT difference between any two days) and transient late (mean RT of >0 on day 0, transiently <0 and then back to >0, with >0.5 RT difference between any two days). For each cluster, mean A/B compartment (that is, Hi-C PC1) values for all time points were also calculated. Hierarchical clustering was done using Cluster 3.0 (Euclidean distance as the similarity metric) 69 and the SciPy Python library 70 . Heat maps and dendrograms were generated by Java TreeView 71 and the matplotlib Python library 72 . For PCA, we used the ‘prcomp’ command (option: ‘scale = T’) in R 73 on 200-kb bin A/B compartment datasets, RT profiles and mean FPKM values of our RNA-seq data (filtering out genes with zero FPKM in all samples and excluding the sex chromosomes). To compare our RNA-seq data with published RNA-seq data during cellular reprogramming 37 , and to minimize the technical variability between the two RNA-seq datasets, zFPKM transformation 74 was performed using the average FPKM values of the biological replicates, and we then performed PCA (option: ‘scale = F’) using the zFPKM values. For X-chromosome RT data, PCA was performed using all of the microarray probes on the X chromosome (SurePrint G3 Mouse CGH 4 × 180K Array from Agilent (G4839A)). Gene ontology analysis Gene ontology analyses were performed with the Molecular Signatures Database (MSigDB v.6.2) 75 using the C5 collection gene sets. We used statistically significant (FDR < 0.01) terms. FDR (q-values) represent the false discovery rate analog of hypergeometric P values after correction for multiple hypothesis testing according to Benjamini and Hochberg. Statistical analysis Statistical analyses were performed in R 73 . Statistical tests of differentially expressed genes in microarray and RNA-seq data are described in each section. Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability All RT datasets (BrdU-IP (Repli-chip) and scRepli-seq), Hi-C datasets and RNA-seq datasets are deposited in the NCBI Gene Expression Omnibus (GEO) database under accession code GSE113985. Code availability Custom codes used in this study are available at .
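The cluster categorization rules above translate into a compact decision function; the sketch below is a simplified reading of those rules (hypothetical Python, not the authors' code):

```python
# Simplified sketch of the RT cluster categories defined above, applied to
# the per-time-point mean RT of one cluster (day 0 ... day 7).
import numpy as np

def categorize_cluster(mean_rt, min_diff=0.5):
    """mean_rt: 1D array of mean RT values across time points."""
    if (mean_rt > 0).all():
        return "constitutive early"
    if (mean_rt < 0).all():
        return "constitutive late"
    if mean_rt.max() - mean_rt.min() <= min_diff:
        return "unclassified"              # sign change too small to call
    first, last = mean_rt[0], mean_rt[-1]
    if first > 0 and (mean_rt < 0).any():  # starts early, dips to late
        return "transient late" if last > 0 else "early to late"
    if first < 0 and (mean_rt > 0).any():  # starts late, rises to early
        return "transient early" if last < 0 else "late to early"
    return "unclassified"
```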
The human genome is made up of 46 chromosomes, each roughly 100 to 200 million base pairs long, base pairs being the building blocks of the DNA double helix. Even during interphase, the period between cell divisions, chromosomes remain tightly packed inside the cell nucleus. On each chromosome, a regular structural unit called the nucleosome corresponds to a 146-base-pair-long strand of DNA wrapped around eight histone protein molecules. Until recently, no other regular structures beyond the nucleosome were known. Thanks to the emerging genomics-based technology Hi-C (high-throughput chromosome conformation capture), researchers now know that there are regular structural units at the megabase scale, that is, spanning millions of base pairs. It is now generally accepted that mammalian chromosomes are composed of megabase-sized globular units called topologically associating domains (TADs), which are separated by boundaries, presumably in a beads-on-a-string manner. Further, multiple TADs assemble to form what are called A and B subnuclear compartments: TADs containing many active genes form A compartments, while TADs with few or no active genes form B compartments. It is generally believed that TADs are stable units of chromosomes and that their boundary positions do not change between cell types. By contrast, the organization of A/B compartments differs between cell types, meaning that the boundaries between them change during differentiation. Until now, however, no one had observed changes in A/B compartments as they occurred. Scientists from the RIKEN Center for Biosystems Dynamics Research have now observed A/B compartment changes in detail during the differentiation of mouse embryonic stem cells (mESCs). They discovered many genomic regions that switched compartments, either from A to B or vice versa, which, interestingly, correlated well with the genomic regions that switched their replication timing (the temporal order of genomic DNA replication) from early to late or vice versa, respectively. A to B compartment changes were accompanied by movements from the nuclear interior to the periphery and by gene repression, while B to A compartment changes were accompanied by movements from the nuclear periphery to the interior and by gene activation. These results strongly suggest that A/B compartment changes represent physical movements of portions of chromosomes within the 3-D nuclear space, accompanied by changes in gene expression and replication timing. Regarding the temporal relationship between these physical movements and changes in gene expression and replication timing, the research team found that genomic regions that switched from the B to the A compartment clearly did so one to two days before gene activation and before the late to early replication timing change. This raised the intriguing possibility that compartment changes might be a prerequisite for gene activation and replication timing changes. The team went on to characterize the features of genomic regions that changed A/B compartments. Compartments were found to change primarily by the shifting of A/B compartment boundaries, while the emergence of new compartments (for example, an A compartment arising within a stretch of B compartment, or vice versa) was rare. Because compartment boundaries corresponded to a subset of TAD boundaries, the researchers looked at how many TADs changed compartments and discovered that the majority of the changes affected single TADs.
Importantly, this single-TAD-level switching of compartments was confirmed in single cells by a method called single-cell Repli-seq, recently developed by the research team to analyze DNA replication regulation genome-wide in single cells (note that replication timing correlates very well with A/B compartments). The team also found that A/B compartment profiles changed gradually but uniformly within a differentiating cell population, with the cells transiently resembling the epiblast-derived stem cell (EpiSC) state, a more developmentally advanced form of stem cell than ESCs. Taken together, the team's findings suggest that A/B compartments change primarily by the relocation of single TADs facing the A/B compartment interface to the opposite compartment. "It is possible," says Ichiro Hiratani, the leader of the group, "that the accumulation of these compartment switching events may reflect or represent changes in differentiation states such as from ESCs to EpiSCs." In this way, this study, published in Nature Genetics, explains how chromosomes undergo structural changes during cell differentiation. According to Hiratani, "Our study was the first to clearly demonstrate that changes in chromosome conformation preceded changes in DNA-based transactions such as gene expression and DNA replication timing. Intriguingly, chromosome conformation changes were regulated at the level of single TADs. We are eager to explore the basis of such single-TAD-level regulation of chromosomes and entertain the possibility of predicting DNA transactions based on preceding changes in chromosome structures."
10.1038/s41588-019-0474-z
Biology
How the clownfish earned its stripes: Color pattern evolution in coral reef fishes
Pauline Salis et al, Ontogenetic and phylogenetic simplification during white stripe evolution in clownfishes, BMC Biology (2018). DOI: 10.1186/s12915-018-0559-7 Journal information: BMC Biology
http://dx.doi.org/10.1186/s12915-018-0559-7
https://phys.org/news/2018-09-clownfish-stripes-pattern-evolution-coral.html
Abstract Background Biologists have long been fascinated by the striking diversity of complex color patterns in tropical reef fishes. However, the origins and evolution of this diversity are still poorly understood. Disentangling the evolution of simple color patterns offers the opportunity to dissect both the ultimate and proximate causes underlying color diversity. Results Here, we study clownfishes, a tribe of 30 species within the Pomacentridae that displays a relatively simple color pattern made of zero to three vertical white stripes on a dark body background. Mapping the number of white stripes onto the evolutionary tree of clownfishes reveals that their color pattern diversification results from successive caudal to rostral losses of stripes. Moreover, we demonstrate that stripes always appear in a stereotyped rostral to caudal sequence during the larval to juvenile transition. Drug treatments (TAE 684) during this period lead to a dose-dependent loss of stripes, demonstrating that white stripes are made of iridophores and that these cells initiate stripe formation. Surprisingly, juveniles of several species (e.g., Amphiprion frenatus ) have supplementary stripes when compared to their respective adults. These stripes disappear caudo-rostrally during the juvenile phase, leading to the definitive color pattern. Remarkably, the reduction of stripe number over ontogeny matches the sequence of stripe losses during evolution, showing that color pattern diversification among clownfish lineages results from changes in developmental processes. Finally, we reveal that the diversity of striped patterns plays a key role in species recognition. Conclusions Overall, our findings illustrate how developmental, ecological, and social processes have shaped the diversification of color patterns during the radiation of an emblematic coral reef fish lineage. Background Understanding the diversification of phenotypes requires integrating developmental and evolutionary analyses in an ecological context [1]. Having a well-defined phylogenetic context is essential for recognizing the pattern of trait evolution as well as for detecting events of parallel or convergent evolution. In addition, studying how phenotypic traits differ across natural environments, as well as their adaptive value, helps reveal the factors shaping the emergence of diversity. Lastly, the study of trait development helps to identify the molecular mechanisms behind phenotypic diversification as well as the constraints that bias their evolutionary trajectories. Pigmentation, in particular color patterns, provides an incredible number of cases that allow the exploration of the interplay between ecology, evolution, and development at the basis of trait diversification [2,3,4,5,6]. Among vertebrates, coral reef fishes provide classical examples of complex and hugely varied color patterns, and therefore they offer a unique opportunity to better understand, in an integrated manner, the origin of those traits [7]. Most coral reef fish species display spots, stripes, repeated lines, eyespots, grids, etc. This diversity in color patterns serves species recognition [8,9], camouflage [10,11], mimicry [12], and/or warning [13]. For example, the eyespots of the damselfish Pomacentrus amboinensis have been suggested to serve as a subordinate signal directed at dominant males [14].
To date, work on coral reef fishes has mainly focused on the link between color patterns, ecology, and behavior, that is, the ultimate role of these patterns [15]. However, the development underlying these patterns and their evolution, that is, their proximate mechanism, is still largely unknown [15, 16]. It is now well known that phenotypic diversification between lineages may be achieved by changes in developmental processes [1, 17]. There are a number of possible developmental mechanisms that explain how specific changes in signaling pathways can induce phenotypic changes between lineages, and a main goal of Evo/Devo is to better understand these processes. Within this framework, various studies devoted to the pigmentation of zebrafish have pinpointed changes in developmental mechanisms leading to color variation among related fish species [18,19,20]. However, the incredibly diverse color patterns of coral reef fishes have never been explored from such an Evo/Devo perspective. Despite this, there is some evidence that developmental processes may indeed sustain the diversification of color patterns in some species. For example, the polymorphic damselfish Chrysiptera leucopoma may retain its juvenile color (a bright yellow body with a dorsal blue line) or shift to the adult phenotype (a dark brown body) depending on habitat type and/or population densities [21]. However, in this example, no study of the underlying developmental mechanisms has been performed. Clownfishes ( Amphiprion and the monotypic Premnas ) are iconic coral reef fishes [22]. This tribe (Amphiprionini; [23]) within the Pomacentridae is composed of 30 species that display a relatively simple color pattern made of zero to three white vertical stripes that are clearly visible on a yellow to red, brown, or even black body background [22]. Their life cycle includes a relatively short dispersive planktonic larval phase in the open ocean [24], followed by the settlement of juveniles into sea anemones, where they live in a social group composed of a dominant breeding pair and a varying number of sexually immature subordinates [22]. The functional role of striped patterns in clownfishes is still unknown but could be associated with predator defense, foraging mode, macro-habitat type, species recognition, etc., as observed in various teleosts [15, 25]. The relatively simple color pattern of Amphiprion offers a good opportunity to better delineate the patterns and processes underlying such ornamental diversity. The clownfish evolutionary radiation has recently received much attention, providing a suitable phylogenetic framework for testing new evolutionary hypotheses on the rise of color diversity in coral reef fishes [26]. In this study, we focus on the vertical white stripes present in most species of Amphiprion . We first map their occurrence and pattern on the clownfish evolutionary tree and reconstruct the ancestral state in terms of white stripe presence/absence. Our results provide evidence that the diversification of clownfish color pattern results from successive caudal to rostral losses of stripes during evolution. Using specific drugs (e.g., TAE 684, an inhibitor of tyrosine kinase receptors expressed in zebrafish iridophores), we reveal that the white stripes are formed by iridophores and are essential for the patterning of the neighboring black stripes.
Then, using an ontogenetic approach, we show that the juvenile either has the same number of stripes as the adult or has supplementary stripes that disappear caudo-rostrally later on. The reduction of stripe number over ontogeny fully matches the sequence of stripe losses across evolution, demonstrating that the diversification in color pattern among clownfish lineages results from changes in developmental processes. Finally, we determine the links between the number of stripes and other external morphological traits, and we provide some evidence that the various striped patterns have evolved for species recognition. This approach allows us to consider the relationships among striped patterns, fish morphology, and ecology, and to suggest that both developmental and ecological processes have shaped the diversity of color patterns in clownfishes. Results Successive caudo-rostral loss of stripes during evolution Clownfishes can be classified into four categories according to their striped pattern at the adult stage: species without vertical stripes (group A) or species having one white vertical stripe (on the head; group B), two vertical stripes (on the head and the trunk; group C), or three vertical stripes (head, trunk, and caudal peduncle; group D) (Fig. 1 and Additional file 1: Table S1). Interestingly, there is no species with a single stripe on the trunk or on the peduncle (Fig. 1). A white stripe on the trunk is always associated with a head stripe, and a white stripe on the peduncle is always accompanied by stripes on the head and the trunk. Fig. 1 Adult color patterns of clownfish species. Pictures of adult clownfishes classified depending on their color patterns. a No vertical stripe, b one vertical stripe on the head, c two vertical stripes (one on the head, the other on the body), d three vertical stripes (one on the head, one on the body trunk, and the last one on the peduncle), e fishes showing stripe polymorphism To understand the evolution of color pattern in clownfishes, we performed a stochastic mapping of striped patterns on the most complete time-calibrated phylogeny of Amphiprionini [27]. The analysis strongly suggests that the common ancestor of extant clownfishes exhibited three vertical white stripes (90–100% posterior probability; Fig. 2), independently of the color pattern polymorphism of some species (Additional file 2: Figure S1). The state reconstruction for every internal node of the phylogeny illustrates successive losses of vertical stripes from the caudal to the rostral region (Fig. 2). The stochastic mapping, which was performed with a model assuming that all transition rates are free to vary, shows that some transitions between stripe morphs never occur. The evolutionary loss or gain of two white stripes appears very unlikely, but the gain of one stripe, i.e., reversion, is possible (e.g., Amphiprion chrysogaster ; Fig. 2 and Additional file 3: Table S2). Thus, we tested these hypotheses by comparing the fit of four evolutionary models varying in their matrix of transition rates between stripe patterns using the Multiple State Speciation Extinction (MuSSE) method [28, 29]. In these four models, rates of speciation and extinction were constrained to be equal among stripe morphs in order to reduce the number of model parameters. The best-fitting models according to the Akaike Information Criterion are the most constrained models iii and iv (Table 1 and Additional file 4: Table S3).
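For readers unfamiliar with this model-selection step, the comparison boils down to computing AIC = 2k − 2 ln L for each fitted model and ranking. The sketch below uses placeholder log-likelihoods and parameter counts, and the model labels only paraphrase the constraints described in the text (the actual fits used the MuSSE method in R):

```python
# Illustrative AIC ranking of candidate transition-rate models; the
# log-likelihoods (ll) and parameter counts (k) below are placeholders,
# not values from the study.
def aic(ll, k):
    return 2 * k - 2 * ll

models = {  # hypothetical fitted values
    "i:   all transition rates free":    (-60.2, 12),
    "ii:  intermediate constraints":     (-61.0, 8),
    "iii: stepwise transitions only":    (-61.5, 5),
    "iv:  stepwise and symmetric rates": (-62.1, 3),
}
for name, (ll, k) in sorted(models.items(), key=lambda m: aic(*m[1])):
    print(f"{name}  AIC = {aic(ll, k):.1f}")
```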
Both models assume that the simultaneous loss or acquisition of two stripes is not allowed. Moreover, in the great majority of the scenarios tested (Additional file 4: Table S3), model fitting strongly suggests that the transition rates among stripe morphs are symmetric, i.e., the rate of the shift from stripe morph A to B equals the rate of the shift from stripe morph B to A. The estimation of transition rates from the best-supported model (model iv) suggests that the appearance/disappearance of the third stripe on the caudal peduncle occurred more slowly (mean ± SD = 0.052 ± 0.001) than that of the second stripe on the trunk (0.103 ± 0.001) or the first on the head (0.123 ± 0.001). Fig. 2 Successive caudo-rostral loss of stripes during evolution. Phylogenetic tree of clownfishes from Litsios et al. 2014 [26] with a summary map of white stripe number histories generated through stochastic character mapping. This trait mapping shows that the diversification of the white striped pattern is a history of loss from an ancestral clownfish having three stripes and that these losses occurred in a progressive and sequential fashion from caudal to rostral. Circles at the tips of the tree indicate each species' striped pattern and circles at every internal node give the probabilities of the ancestral striped pattern Table 1 Model fitting of the four striped pattern evolutionary models This evolutionary analysis therefore highlights that the diversification of the white stripe pattern in clownfishes is a history of loss from an ancestral lineage having three stripes and that these losses always occurred in a progressive and sequential fashion from caudal to rostral regions. For example, in all two-striped species, the peduncle stripe has been lost and the head and trunk stripes are retained. All one-striped species have retained the head stripe and lost the peduncle and trunk stripes. Ontogeny of stripe formation reveals a rostro-caudal stereotyped pattern The fact that stripes always appear at the same locations in clownfishes and that the losses of stripes during evolution occurred in a sequential manner from caudal to rostral suggests that this loss may be constrained by a developmental mechanism. We thus tested whether a variation in the number of stripes could occur during clownfish ontogeny by studying the development of Amphiprion ocellaris and Amphiprion frenatus , which display three stripes (i.e., similar to the ancestral state) or a single head stripe at the adult stage, respectively. At 8 days post hatching (dph), larvae of both species do not harbor any vertical stripes (Fig. 3 a-a′ and d-d′). At 10–11 dph, A. ocellaris larvae acquire the head and trunk stripes simultaneously (Fig. 3b). Surprisingly, the same is true for A. frenatus (Fig. 3 e-e′). In A. ocellaris , the third stripe on the caudal peduncle is formed at 14 dph (Fig. 3c). Strikingly, we also observed the development of a third stripe on the caudal peduncle of some A. frenatus during the same larval period, as in A. ocellaris (Fig. 3f). In our husbandry conditions, larvae of A. frenatus reach the juvenile stage with either two or three stripes (Fig. 3h): the anterior one on the head, the medial one on the trunk, and the posterior one on the caudal peduncle, whereas A. ocellaris reaches the juvenile stage with three stripes (Fig. 3g). The loss of the trunk stripe occurs only after several months, and the extra stripe is therefore a prominent feature of juvenile A. frenatus . Fig.
3 Ontogeny of stripe formation reveals a rostro-caudal stereotyped pattern. A. ocellaris (a–c′ and g) and A. frenatus (d–f′ and h) color pattern ontogenesis at 8 dph (a-a′, d-d′, A. ocellaris : n = 10; A. frenatus : n = 3), 11 dph (b-b′, e-e′, A. ocellaris : n = 10; A. frenatus : n = 3), 14 dph (c-c′, f-f′, A. ocellaris : n = 10; A. frenatus : n = 3) and 6 months post hatching (g and h, n = 5). Higher magnification of the medial white stripe ontogenesis (a′, b′, c′, d′, e′, f′). Note that the white stripes appear in the same rostral to caudal sequence in both species. Scale bars correspond to 1 mm Our data provide strong evidence for two distinct and inverse phenomena: (i) an evolutionary pattern of stripe loss, with a caudo-rostral progression, observed in adult fish, and (ii) a developmental pattern of rostral to caudal stripe gain during the larval to juvenile transition, exemplified in A. ocellaris and A. frenatus , followed by a sequential caudo-rostral loss during the juvenile stage in some species such as A. frenatus . Taken together, these results emphasize that clownfish color pattern evolution is constrained by developmental processes, which may also explain why there is no species with a single stripe on the tail or trunk. In order to better understand white stripe ontogenesis, we examined the cellular process by which the stripes are formed. In teleost fishes, different types of pigment cells (or chromatophores) are described according to their ultrastructure and their pigment type [30]. Xanthophores (orange cells), iridophores (white and iridescent cells), leucophores (white cells), and melanophores (black cells) are the four main characterized cell types. In A. ocellaris , we observed at least three types of cells: xanthophores, melanophores, and white cells (Fig. 4f). From the juvenile to the adult stage, the orange skin is composed of xanthophores and round melanophores. The white stripe comprises stellate melanophores and white cells, whereas the black stripes are formed of densely packed melanophores (Fig. 4a). To determine whether leucophores or iridophores are involved in white stripe formation in A. ocellaris , we tested whether the drug TAE684 (TAE), an inhibitor of leukocyte tyrosine kinase (Ltk) and anaplastic lymphoma kinase (Alk) known to decrease iridophore number in zebrafish [31], could disrupt white stripe formation. Treatments with TAE at 0.6 μM or 0.3 μM during the metamorphosis of A. ocellaris larvae (i.e., from 5 until 18 dph) induced a dose-dependent effect on the formation of white stripes (Fig. 4a–d). While the controls develop the head, trunk, and peduncle white stripes (Fig. 4a), fish treated with 0.6 μM TAE develop a complete but transparent head stripe, and the body stripe is incomplete. This suggests that there is a decrease in the number of iridophores (Fig. 4d, e). Fish treated with 0.3 μM show an intermediate phenotype, with 50% of the fish having the same phenotype as fish treated with 0.6 μM (Fig. 4c, e) and 50% having complete but transparent head and trunk vertical stripes (Fig. 4b, e). Similarly, a dose-dependent effect on the iridescence of the eye was observed (Fig. 4, right panel, a–d). Indeed, whereas the eyes of control fish and fish treated with 0.3 μM TAE are iridescent (Fig. 4a–c), those of fish treated with 0.6 μM TAE are blackish (Fig. 4d).
Together, these results demonstrate that the white cells correspond to iridophores and that Ltk and/or Alk is required for the formation of iridophores and of the color pattern during metamorphosis of A. ocellaris . Fig. 4 Cellular mechanism of color pattern ontogenesis in A. ocellaris . a–d Dose-dependent modifications of the color pattern (left and middle panels) and of the iridescence of the eye (right panel) after 13 days of TAE684 drug treatment of A. ocellaris at 18 dph at 0.6 μM (d) and 0.3 μM (b, c) compared to DMSO (control, a). e Cumulative histogram of fishes having fully formed stripes: one stripe (head; red), two stripes (head and trunk; green), or three stripes (blue) in control (n = 6), TAE 0.3 μM (n = 16) and TAE 0.6 μM (n = 3). f Stereomicroscope pictures showing the three types of chromatophores within the trunk of a juvenile A. ocellaris . (g, h, n = 4) Live imaging pictures of the same A. ocellaris individual at 10 dph (g) and 11 dph (h) show that during medial white stripe formation, the distance increases between the melanophores marked with red dots and those marked with blue dots TAE treatments during the metamorphosis of A. ocellaris larvae allow us to understand how iridophores contribute to color pattern development. White vertical stripes are surrounded by a thin stripe of melanophores over an orange body. Interestingly, we observe that when no white vertical stripe is formed over the body (Fig. 4d), melanophores are dispersed with xanthophores over the flank and do not form any stripe. This links to the observation that, during formation of the white stripe in control fish, iridophores initially appear at the future stripe location and push black melanophores to the periphery to form the stripe pattern (Fig. 4g, h). This suggests that, under normal conditions, melanophores are expelled from the white stripe to form the black stripes. In addition, we observed that xanthophores are not found at their proper location after exposure to TAE. These data not only reveal that the cellular substrate underlying the white color is likely based on iridophores but also show that specific interactions between different chromatophore types play an important role in stripe formation in A. ocellaris . Stripe loss during ontogeny occurs multiple times in Amphiprion The observation that A. frenatus juveniles have more stripes than adults prompted us to further document the evolution of ontogenetic trajectories of color pattern in clownfishes. For this, we compiled ontogenetic information on 26 Amphiprion species with a multi-tiered approach utilizing the primary literature, online databases, and field observations made by various experts (Additional file 1: Table S1). We observed that a minimum of nine species show extra stripes during the juvenile phase when compared to adults. This is illustrated in Fig. 5 for four species: A. frenatus , A. melanopus , A. rubrocinctus , and A. ephippium . This contrasts with other cases (e.g., A. nigripes or A. sandaracinos ; Fig. 5e, f) for which the number of stripes is invariant over ontogeny. We therefore categorized the species showing a loss of stripes during ontogeny (group 1) and the species without a loss of stripes during their development (group 2) (Additional file 1: Table S1), and we studied the evolution of these two trajectories during the evolution of clownfishes (Fig. 5g).
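As background to these stochastic-mapping analyses, state changes along a branch are governed by a continuous-time Markov (Mk) model, whose transition probabilities over a branch of length t are given by the matrix exponential of the rate matrix Q. The sketch below builds Q for the symmetric, stepwise model iv using the point estimates quoted earlier (0.123, 0.103 and 0.052); the branch length and the use of scipy are illustrative, and the actual analyses used R phylogenetics packages:

```python
# Background sketch: transition probabilities between 0-3 stripe states
# under a symmetric, stepwise rate model (model iv). Rates are the point
# estimates quoted in the text; the branch length is arbitrary.
import numpy as np
from scipy.linalg import expm

r_head, r_trunk, r_peduncle = 0.123, 0.103, 0.052  # 0<->1, 1<->2, 2<->3
Q = np.array([
    [-r_head,             r_head,                     0.0,         0.0],
    [ r_head, -(r_head + r_trunk),                r_trunk,         0.0],
    [    0.0,            r_trunk, -(r_trunk + r_peduncle),  r_peduncle],
    [    0.0,                0.0,              r_peduncle, -r_peduncle],
])
P = expm(Q * 5.0)  # P[i, j]: probability of i -> j stripes over t = 5
print(np.round(P, 3))
```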
Ancestral state reconstruction supports (64% posterior probability) the hypothesis that the last common ancestor of extant clownfishes did not lose white stripes during ontogeny. Moreover, stochastic mapping reveals a minimum of five major transitions in the occurrence of white stripe loss during ontogeny: (i) one in the frenatus clade, (ii) one in A. chrysopterus, (iii) one in A. latifasciatus, (iv) one in A. allardi, and (v) one in the clade grouping A. mccullochi and A. akindynos (Fig. 5g , red circles). This reveals a convergence in the loss of stripes during ontogeny, probably triggered by selective factors (ecological, behavioral, etc.) and/or shared molecular and cellular mechanisms. Fig. 5 Stripe loss during ontogeny occurs multiple times in Amphiprion. a – f Pictures of juveniles (large picture) and adults (small picture, top right) of A. frenatus ( a ), A. melanopus ( b ), A. rubrocinctus ( c ), A. ephippium ( d ), A. nigripes ( e ), and A. sandaracinos ( f ). A. frenatus ( a ), A. melanopus ( b ), A. rubrocinctus ( c ), and A. ephippium ( d ) show that juveniles have extra stripes compared to their respective adults, whereas the number of vertical stripes does not vary over ontogeny in A. nigripes ( e ) and A. sandaracinos ( f ). Pictures of juveniles were kindly provided by G.R. Allen. g Maximum clade credibility phylogeny of clownfishes [ 27 ] with a summary map of striped pattern ontogenesis generated through stochastic character mapping. It reveals a minimum of five major transitions to an ontogenetic pattern involving white stripe loss, occurring (1) in the A. frenatus clade, (5) in the clade grouping A. mccullochi and A. akindynos, and in three individual species: (2) A. chrysopterus, (3) A. latifasciatus, and (4) A. allardi (numbers in red circles) Full size image Links between striped patterns, ecology, and external morphology Striped patterns are adaptive and related to ecological and behavioral differences among cichlid species [ 25 ]. In butterflyfishes, striped body patterns show correlated evolution with a number of ecological factors including habitat and sociality [ 15 ]. Clownfishes vary in their ecology [ 22 ], and one of the most striking variations among species is the diversity of sea anemone hosts. Some clownfishes are specialists, living with only one sea anemone species, while others are generalists, capable of living in association with several host species [ 32 ]. To establish whether the striped patterns of clownfishes are related to this ecological difference, we first tested the simple prediction that the number of white stripes is related to the number of possible sea anemone hosts using phylogenetic generalized least-squares (PGLS) regressions. However, the number of white stripes is unrelated to the number of host species with which clownfishes form mutualistic interactions ( F = 0.13, P = 0.72; Additional file 5 : Table S4). Clownfishes also vary in their external morphology (Fig. 6 ), and it has been suggested that this morphological disparity is related to variation in both host type and habitat partitioning [ 33 ]. At first glance, the shape of the dorsal fin varies among clownfish species according to their white stripe patterns (Fig. 6 ). Indeed, an indentation at the middle of the dorsal fin is visible in clownfish species with two or three stripes (Fig. 6a , b , e – g ), whereas it is less obvious in species with one or no stripes (Fig. 6c , e – g and Additional file 6 : Figure S2).
We therefore focused on fish body form, bearing in mind that the number of color stripes might be size-dependent [ 34 ], and on fin morphology, since these traits are usually linked to adaptation to different macro-habitats (e.g., [ 35 , 36 ]). We quantified body size, body elongation, and dorsal fin morphology for at least 22 clownfish species and tested whether stripe patterns are correlated with these morphological traits. Phylogenetic generalized least-squares (PGLS) analyses reveal that the evolution of the number of white stripes is unrelated to body size ( F = 0.03, P = 0.87) and body elongation ( F = 0.64, P = 0.43; Additional file 5 : Table S4). On the other hand, PGLS analysis shows a strong correlation between the morphology of the anterior lobe of the dorsal fin and the number of white stripes ( F = 14.53, P < 0.001), whereas the co-evolution of posterior lobe morphology and the number of white stripes is weaker ( P = 0.05; Additional file 5 : Table S4). Fig. 6 Morphological trait analysis reveals a link between striped pattern and shape of the dorsal fin. a – d Pictures of A. ocellaris ( a ), A. bicinctus ( b ), A. frenatus ( c ), and A. ephippium ( d ) and cartoons illustrating their dorsal fin shape (A, anterior; P, posterior). In clownfishes with two or three stripes, there is an indentation at the middle of the dorsal fin (black arrowhead), with the anterior spiny rays longer than the most posterior ones. e Method for quantifying the anterior and posterior lobe morphology indexes (l1, length of the third dorsal spine; l2, length of the most posterior spine; lr, length of the longest soft ray; L, length of the dorsal fin, used for normalization). The anterior and posterior lobe morphology indexes correspond to (l1−l2)/L and (lr−l2)/L, respectively. f , g Scatterplots showing the relationship between the number of vertical white stripes ( x -axis) and the lobe morphology indexes of the dorsal fin ( y -axis). Each point corresponds to one clownfish species Full size image These analyses highlight that, although there are few links between the striped pattern and clownfish body morphology, there is one between the striped pattern and dorsal fin shape. This link suggests that both traits depend on linked developmental and/or selective processes. Are striped patterns used for species recognition? "Species recognition" refers to the behavior whereby individuals identify and keep track of conspecifics for group coherence or identify a suitable sexual partner [ 37 ]. Accordingly, we hypothesized that the striped patterns of clownfishes could function in species recognition, discouraging association with non-conspecifics and/or encouraging association with conspecifics. This hypothesis predicts that sympatric species should have distinctly different striped patterns. To test it, we counted the number of identical striped pairs within eight communities of sympatric clownfishes [ 38 ] and investigated whether the similarity of striped patterns among sympatric species was lower than expected if communities were composed of a random set of species. Relative to the four striped patterns, the diversity of clownfish communities is consistent with this species recognition hypothesis (Table 2 ). Indeed, except for The Keppels (Great Barrier Reef, Australia) and Komodo, the number of identical pairs was minimal within each natural community (Table 2 ).
Moreover, the randomization test showed that, given the distribution of striped patterns among species, this result is unexpected in the great majority of locations. The test failed only for the Keppels and Komodo communities, which harbor the largest number of polymorphic species ( A. clarkii and A. melanopus ). Overall, these data provide evidence that the distribution of striped patterns is not random in clownfish communities, suggesting that stripes aid species recognition. Table 2 Diversity of clownfish communities is consistent with the species recognition hypothesis Full size table Discussion Our study reveals an unexpected link between the variation in the number of stripes occurring during ontogeny and the same type of variation occurring during the evolution of clownfishes. Strikingly, the sequence of events observed during evolution mirrors the one seen during ontogeny: a sequence of white stripe loss identical to that observed during evolution occurs during late juvenile stages. These observations strongly suggest that the diversification of stripe patterns observed during the clownfish radiation is the product of modifications of an ancestral, stereotyped ontogenetic trajectory. From a mechanistic point of view, the successive gain of stripes from the head to the caudal region during ontogeny supports the hypothesis that the loss of stripes during evolution is constrained by ontogenesis. The sequential appearance of white stripes from the anterior to the posterior region during the development of two distantly related species, A. ocellaris and A. frenatus, is remarkable and strongly suggests a conserved mechanism of color pattern ontogeny across clownfishes. Here, we provide a first analysis of the mechanistic underpinnings of the patterns observed. First, using an inhibitor of two receptor tyrosine kinases (Ltk and Alk) that are instrumental in iridophore formation in zebrafish, we show that the white coloration of the stripes is produced by iridophores and not leucophores. After TAE treatment, stripes are sometimes absent, incomplete, and/or less white. Disrupting white stripe formation reveals that the presence of iridophores is instrumental for the distribution of the two other chromatophore types: in TAE-treated fish, xanthophores and melanophores are scattered over the flank and not properly organized as in the wild type. This suggests that, as in zebrafish [ 39 ], cell-cell interactions are critically important for pattern generation in clownfish. Additionally, our results suggest that the clownfish color pattern is not formed by a reaction/diffusion mechanism based on the Turing model [ 40 ], as it is in zebrafish. Up to now, our knowledge of stripe formation in teleost fishes has derived from studies in zebrafish, the most widely used fish model species (reviewed in [ 41 ]). However, it is likely that stripe formation in zebrafish and clownfishes is controlled by different mechanisms. In zebrafish, the adult color pattern, composed of periodic horizontal blue and orange stripes, is effectively formed by a reaction/diffusion mechanism that predicts the periodic pattern [ 40 ]. This model regulates the width of the stripes and explains why, during growth, new stripes insert between preexisting ones to maintain stripe width [ 42 ]. Such a Turing-like model was confirmed in the long-fin zebrafish mutant, which continues to form perfect new stripes as the fins grow [ 43 ], and during the normal growth of Pomacanthus imperator [ 44 ].
The case of clownfishes is totally different, since the number of stripes is fixed and independent of fish body size. Moreover, in clownfishes, new stripes do not form when the distance between two existing ones increases; instead, they appear in an ordered anterior-to-posterior sequence. In addition, the disappearance of stripes during growth that we observed in some clownfish species (e.g., A. frenatus and A. chrysopterus ) does not fit Turing predictions and thus allows a clear refutation of the "Turing pattern hypothesis." This suggests that, in clownfishes, when and where the stripes form is controlled by specific patterning mechanisms that remain to be analyzed. Various clownfish species exhibit extra stripes during the juvenile phase compared to the adult stage, and this convergent loss of stripes occurred at least five times across the evolution of clownfishes (Fig. 5 ). Such convergence may be explained by shared molecular and cellular mechanisms triggering stripe loss during ontogeny and/or by shared responses to selective factors (ecological, behavioral, etc.). In terms of proximate mechanisms, this loss of white stripes likely does not involve a modification of an ancestral pre-pattern but could result from several processes. In zebrafish, it is known that pigment cells must be continuously formed to maintain the banding pattern [ 45 , 46 ]. The loss of white stripes observed in some clownfish species during late juvenile life may be caused by a spreading of white iridophores over the body, extensive apoptosis of these cells, and/or a dedifferentiation of the chromatophores resulting in an absence of pigment cell synthesis. The correlated evolution between the number of vertical white stripes and dorsal fin shape suggests that both phenotypic traits may depend on the same, or linked, developmental processes. An ecomorphological interpretation of these results would be that stripe patterns are somehow linked to differences in macro-habitats, but this hypothesis certainly needs further analysis. On the other hand, the fact that the white stripe is present mostly in species with an indentation in the dorsal fin suggests a disruptive effect [ 47 ], with the stripe and the indentation both helping to hide the fish's silhouette. This may be part of a general strategy of poorly swimming fish to reinforce their disruptive coloration and avoid being targeted by predators. However, these observations call for additional studies to disentangle the relationship between the color patterns and the ecology of clownfish species. Beyond ecological adaptation, we provide evidence that the striped patterns play a role in species recognition in clownfishes. Indeed, the number of species pairs showing the same striped pattern is exceptionally low in natural communities compared with random ones. The non-significant results for the Keppels and Komodo (Table 2 ) are probably due to the stripe polymorphism of species living in these locations, a character that was included by default in our analyses. We hypothesize that this polymorphism of stripe numbers is likely driven by the function of species recognition, but this remains to be tested. The diversity of white stripe patterns may result from social selection, where signals encourage association of conspecifics [ 48 ].
Indeed, social selection on visual signals can be very strong in clownfishes, which live in social groups based on a size dominance hierarchy in which agonistic interactions are numerous [ 49 , 50 , 51 ]. We expect that variation in the number of stripes during ontogenesis among individuals of the same species within a social group may mediate or reduce agonistic interactions, ultimately facilitating access to breeding positions [ 49 ]. The cohabitation of different clownfish species in the same sea anemone host may also support this hypothesis: cohabiting clownfishes differ morphologically and always show dissimilar white stripe patterns [ 38 ]. While this hypothesis needs to be explored further, interspecific signaling and social selection on visual signals probably contribute to the phenotypic divergence of clownfishes. Conclusion Our study highlights a strong link between the diversification of pigmentation patterns in clownfish and the developmental trajectories underlying white stripe formation. This opens exciting perspectives for the study of color pattern diversity in coral reef fishes, for which an Evo/Devo approach has so far been clearly underused. As suggested by our integrated study of clownfishes, the diversification of color patterns results from an interplay among developmental, ecological, and behavioral processes. Methods Coding of striped patterns and associated polymorphisms Clownfishes were classified into four categories according to their striped pattern at the adult stage [ 52 ]: species without vertical stripes (group A), species with one white vertical stripe on the head (group B), two stripes (head and trunk; group C), or three stripes (head, trunk, and caudal peduncle; group D) (Additional file 1 : Table S1). Four species ( A. akallopisos, A. sandaracinos, A. perideraion, and A. pacificus ) show an atypical pattern with a white line running along the dorsal region [ 53 ]. This horizontal stripe was not taken into consideration in our study as it may arise from a different developmental process. Three species ( A. clarkii, A. melanopus, and A. polymnus ) are polymorphic, with two different patterns observed in natural populations (Additional file 1 : Table S1). Accordingly, we repeated all comparative analyses using every combination of coding (i.e., eight combinations). Phylogeny, ancestral state reconstruction, and stochastic mapping We used stochastic character mapping [ 54 ] to infer possible histories of the stripe pattern and of stripe pattern ontogenesis. The stochastic mapping and the ancestral state reconstruction were produced using the function make.simmap in the package phytools (version 0.5.38; [ 55 ]) for R [ 56 ]. We then sampled 10,000 character histories, allowing us to incorporate the uncertainty associated with the timing of the transitions between morphological states. For the parameterization of make.simmap, we used the estimated ancestral state and the best model for the transition matrix from our empirical data. To assess the best model for the transition matrix, we fitted a model with equal rates of transition between states and a model with all rates different using the function ace in the R-package ape [ 57 ]. The likelihoods of these two models were then compared using a likelihood ratio test, which supported the use of unequal rates (see MuSSE results). Statistics for stripe morph histories were retrieved using the function describe.simmap from the phytools R-package; a minimal sketch of this workflow is given below.
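For readers wishing to reproduce this kind of analysis, the following is a minimal, hedged R sketch of the workflow just described (model choice with ace, stochastic mapping with make.simmap). The input file names and the traits object are illustrative placeholders, not the study's actual data.

```r
# Minimal sketch of the stochastic-mapping workflow; "amphiprion.tre" and
# "stripe_ontogeny.csv" are hypothetical placeholder inputs.
library(ape)       # read.tree(), ace()
library(phytools)  # make.simmap(), describe.simmap()

tree   <- read.tree("amphiprion.tre")
traits <- read.csv("stripe_ontogeny.csv")            # columns: species, ontogeny
states <- setNames(traits$ontogeny, traits$species)  # named state vector per tip

# Fit equal-rates (ER) and all-rates-different (ARD) transition models
fit_er  <- ace(states, tree, type = "discrete", model = "ER")
fit_ard <- ace(states, tree, type = "discrete", model = "ARD")

# Likelihood ratio test between the two nested models
lrt_p <- pchisq(2 * (fit_ard$loglik - fit_er$loglik),
                df = length(fit_ard$rates) - length(fit_er$rates),
                lower.tail = FALSE)
best  <- if (lrt_p < 0.05) "ARD" else "ER"

# Sample 10,000 stochastic character histories under the preferred model
maps <- make.simmap(tree, states, model = best, nsim = 10000)

# Summary statistics: transition counts and node posterior probabilities
print(describe.simmap(maps))
```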
Model of striped pattern evolution To test whether the evolution of striped patterns is non-random in clownfishes, we compared transition rates between stripe morphs using the "multiple state speciation extinction" (MuSSE) method. MuSSE is an extension of the BiSSE maximum-likelihood-based test described in [ 28 , 29 ]. To test our hypotheses, we used the R-package diversitree [ 29 ] to compare the fit of four different models: (i) a model in which all transition rates vary independently, (ii) a model with an equal-rate transition matrix (e.g., q 14 = q 41 ), (iii) a model in which the simultaneous loss or gain of two or more stripes in a single evolutionary shift is not allowed (i.e., q 31 = q 41 = q 42 = q 13 = q 14 = q 24 = 0), and (iv) a model combining the constraints of models (ii) and (iii). In order to reduce the number of parameters, every model assumed that the speciation rates (λ) and the extinction rates (μ) are equal among stripe morphs. Moreover, we corrected for the incompletely resolved phylogeny without specifying the states of missing species, assuming that the missing species are randomly distributed on the phylogenetic tree. The fit of the models was compared using sample-size-corrected Akaike Information Criterion (AICc) scores and weights [ 58 ]. A ΔAICc value of 4 or more was taken as an indication of support for one model over the others [ 58 ] (see the first sketch below). Ecological and morphological data Our morphological analysis includes body elongation, body size, and dorsal fin morphology. Maximum body size was retrieved from FishBase [ 59 ]. Body elongation and dorsal fin morphology were quantified using pictures of Amphiprion sp. individuals found in FishBase [ 59 ] or kindly provided by J.E. Randall and J. Williams. Other individuals were studied in the marine vertebrate collections of S. Planes (CRIOBE) and of the National Museum of Natural History (MNHN). This analysis includes 22 of the 30 described species (Additional file 1 : Table S1). The species A. chagosensis, A. latezonatus, A. leucokranos, A. mccullochi, A. pacificus, and A. sebae were not studied here because of the lack of individuals for quantification. To quantify body elongation, we used the ratio between body height and standard length. To describe the indentation between the anterior and the posterior lobes of the dorsal fin, we calculated two indexes based on the length of the third dorsal spine (l1), the length of the most posterior spine (l2), the length of the longest soft ray (lr), and the length of the dorsal fin (L). The anterior and posterior lobe morphology indexes correspond to (l1−l2)/L and (lr−l2)/L, respectively (Fig. 6e ). Ecological data (i.e., the number of host sea anemones) were retrieved from FishBase [ 59 ] and the primary literature [ 33 , 60 ]. Relationships between striped patterns and fish morphology Body size and body elongation describe the overall fish shape and have been shown to be linked with swimming performance and with adaptation to different macro-habitats [ 35 , 36 ]. Dorsal fins are associated with stability and thrust [ 61 ], and their shape may thus provide information on the swimming behavior of clownfishes. We predicted that the striped pattern of clownfishes is related to ecological and behavioral differences, and thus phylogenetic generalized least-squares (PGLS) regressions using a Brownian motion model were used to test for correlated evolutionary relationships between the number of stripes and the form of the fish body and dorsal fin (see the second sketch below).
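A minimal R sketch of the MuSSE comparison follows, assuming a tree and a states vector coded 1–4 (zero- to three-striped morphs) are already loaded; the sampling fraction and starting values are illustrative, and only models (i) and (iii) are written out (models (ii) and (iv) additionally constrain the q parameters to be equal in the same way).

```r
# Hedged sketch of the MuSSE model comparison with diversitree; 'tree'
# and 'states' (integers 1-4 per tip) are assumed to exist already, and
# the sampling fraction below is illustrative.
library(diversitree)

lik <- make.musse(tree, states, k = 4, sampling.f = 26/30)

# Lambda and mu equal among morphs in every model (fewer parameters)
lik_i <- constrain(lik,
                   lambda2 ~ lambda1, lambda3 ~ lambda1, lambda4 ~ lambda1,
                   mu2 ~ mu1, mu3 ~ mu1, mu4 ~ mu1)

# Model (iii): forbid gains/losses of two or more stripes at once
lik_iii <- constrain(lik_i,
                     q13 ~ 0, q14 ~ 0, q24 ~ 0,
                     q31 ~ 0, q41 ~ 0, q42 ~ 0)

p0      <- starting.point.musse(tree, 4)
fit_i   <- find.mle(lik_i,   p0[argnames(lik_i)])
fit_iii <- find.mle(lik_iii, p0[argnames(lik_iii)])

# Sample-size-corrected AIC (n = number of species with data)
aicc <- function(fit, n) {
  k <- length(coef(fit))
  -2 * fit$lnLik + 2 * k + 2 * k * (k + 1) / (n - k - 1)
}
c(model_i = aicc(fit_i, 26), model_iii = aicc(fit_iii, 26))
```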
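And a minimal PGLS sketch under Brownian motion. The paper reports running PGLS via geiger, but the exact calls are not shown, so this illustration uses the equivalent nlme/ape formulation; the 'morpho' data frame and its column names (species, n_stripes, anterior_index) are hypothetical placeholders.

```r
# Hedged PGLS sketch under a Brownian-motion correlation structure; the
# 'morpho' data frame and its column names are hypothetical placeholders.
library(ape)   # corBrownian()
library(nlme)  # gls()

fit <- gls(anterior_index ~ n_stripes,
           data = morpho,
           correlation = corBrownian(phy = tree, form = ~species),
           method = "ML")
anova(fit)  # F-statistic and p-value analogous to those in Table S4
```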
PGLS analyses were performed in the R-package geiger (version 2.0.6; [ 62 ]). Species recognition hypothesis The composition of clownfish communities at eight locations was retrieved from Camp et al. [ 38 ], and we counted the number of identical striped pairs within each of the eight communities. To test whether color similarity among sympatric species was lower than would be expected by chance, we generated 9999 random communities using the function randomizeMatrix from the R-package picante (version 1.6.2; [ 63 ]). Variation of the species morph pool due to color polymorphism was also included in the generated communities. We then compared the number of identical pairs in each random community to the number in the natural community using a binomial test (a minimal sketch of this randomization is given below). Larval rearing and observation of color ontogenesis A. ocellaris and A. frenatus were maintained at 26 °C in separate 60-L aquaria. Breeding pairs laid egg clutches on the underside of a terracotta pot placed in their aquarium. On the night of hatching (9 days post laying, 26 °C), egg clutches were transferred from the parental aquarium to a 30-L larval rearing aquarium. Larvae were fed rotifers ( Brachionus plicatilis ) at 10 individuals per milliliter three times a day for the first 7 days. The ratio of Artemia nauplii to rotifers was increased each day until, from day 7, larvae were fed only Artemia nauplii at five individuals per milliliter. From day 7 until day 20, larvae were euthanized or anesthetized in MS222 (at 200 mg/L and 100 mg/L, respectively) in filtered aquarium water and photographed under a stereomicroscope. At least three larvae per species per day of development were studied. Drug treatment of larvae TAE684 (NVP-TAE684) (HY-10192, MedChem Express), a specific inhibitor of Ltk and Alk [ 31 ], was diluted in dimethyl sulfoxide (DMSO; Sigma-Aldrich, St. Louis, MO, USA) to a final concentration of 6 mM. Larvae were treated from 5 until 18 dph in 0.005% DMSO with 0.3 μM or 0.6 μM TAE684, or without drug (controls). For each condition, five larvae were treated in 500 mL of fish medium in a beaker (20 individuals per condition in total). One hundred milliliters of solution was replaced every day.
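The following is a minimal, hedged R sketch of that community randomization; 'comm' (site × species presence-absence matrix with species names as column names) and 'morph' (named vector of stripe categories per species) are illustrative placeholders, and an empirical p-value is computed here in place of the binomial test used in the study.

```r
# Hedged sketch of the species-recognition null-model test; 'comm' and
# 'morph' are hypothetical placeholders for the study's community matrix
# and stripe-category coding.
library(picante)  # randomizeMatrix()

# Number of species pairs sharing the same striped pattern at one site
identical_pairs <- function(row, morph) {
  present <- names(row)[row > 0]
  sum(choose(table(morph[present]), 2))
}

obs <- apply(comm, 1, identical_pairs, morph = morph)

# 9999 randomized communities, preserving species richness per site
null <- replicate(9999, {
  rand <- randomizeMatrix(comm, null.model = "richness")
  apply(rand, 1, identical_pairs, morph = morph)
})

# Per-site empirical p-value: probability that a random community has as
# few identical-pattern pairs as the observed one
p_emp <- rowMeans(null <= obs)
```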
Coral reef fishes, including clownfish, display a wide variety of colors, but it remains unclear how these colors evolved or how they develop throughout a fish's life. Research published in BMC Biology sheds new light on the evolution of different stripe patterns in clownfish and on how these patterns change as individuals from different species grow from larvae into adults. Dr. Vincent Laudet, the corresponding author at Sorbonne University, France, said: "We show that the ancestor of today's clownfish possessed three white stripes. Then, as some species evolved they lost stripes and we reveal a surprising similarity between this loss of stripes during species evolution and the development of different stripe patterns in individuals from different species today." Studying two species of clownfish - Amphiprion ocellaris and Amphiprion frenatus - that have three stripes or a single head stripe, respectively, the authors found that shortly after hatching, the larvae of neither species had any stripes. Subsequently, both species acquired stripes on the head and trunk at the same time, with A. ocellaris acquiring a third stripe near the tail and A. frenatus losing the trunk stripe before reaching adulthood. Examining developmental information for 26 additional species of clownfish, the authors observed that at least nine species have more stripes as juveniles than they do as adults, which prompted them to investigate the development of stripes across the evolution of clownfish. Dr. Laudet said: "Interestingly, every clownfish species existing today gains stripes from front to back after they are born, before individuals of some species lose stripes again from back to front as they grow into adults, which is similar to the loss of stripes observed during clownfish evolution; while all clownfish started out with three stripes—that is their last common ancestor had three stripes—as they diversified into what are now 30 different species, some clownfish lost stripes in a pattern that is similar to how today's clownfish lose stripes as they grow up." Fifteen-day-old juvenile clownfish (A. ocellaris). It already fully displays two anterior stripes, on the head and trunk, while a third is forming on the tail. Credit: © Natacha Roux Dr. Laudet added: "It is also interesting that while clownfish species vary in their number of stripes from zero to three, there is limited variation in how these stripes are organised. In all two-striped species, the stripe nearest the tail has been lost, while the head and the trunk stripes are retained. All one-striped species have retained the head stripe and have lost the trunk and tailfin stripes. So, some fish have no stripes at all, while others have one stripe near the head, one stripe each near the head and on the trunk, or three stripes near the head, on the trunk, and near the tail, but you will never find a clownfish with just one stripe near the tail, or one stripe near the tail and one near the head." In order to investigate the molecular mechanisms that underlie stripe formation and loss, the authors treated clownfish larvae with a substance known to suppress stripe development in zebrafish. The substance works by targeting certain receptors in iridophores, the cells that produce a reflective, iridescent color. The authors found that larvae treated with the substance did not fully develop stripes, or developed no stripes at all, in a dose-dependent manner.
The findings suggest that the white stripes in clownfish are produced by iridophores and that a decrease in the number of these cells inhibits stripe formation. Dr. Laudet said: "Because coral reef fishes provide examples of complex color patterns, they offer a unique opportunity to better understand the origin of these traits. Unraveling the mysteries of why pigmentation patterns from coral reef fish are so diverse, how they evolved and where their diversity originated will help us to understand the formation of very complex phenotypes." The authors also suggest a possible purpose for the different stripe patterns: they may allow clownfish to recognize individuals belonging to the same species, including potential partners for reproduction.
10.1186/s12915-018-0559-7
Earth
How climate change could impact algae in the global ocean
The biogeographic differentiation of algal microbiomes in the upper ocean from pole to pole, Nature Communications (2021). DOI: 10.1038/s41467-021-25646-9 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-021-25646-9
https://phys.org/news/2021-09-climate-impact-algae-global-ocean.html
Abstract Eukaryotic phytoplankton are responsible for at least 20% of annual global carbon fixation. Their diversity and activity are shaped by interactions with prokaryotes as part of complex microbiomes. Although differences in their local species diversity have been estimated, we still have a limited understanding of the environmental conditions responsible for compositional differences between local species communities on a large scale from pole to pole. Here, we show, based on pole-to-pole phytoplankton metatranscriptomes and microbial rDNA sequencing, that environmental differences between polar and non-polar upper oceans most strongly impact the large-scale spatial pattern of biodiversity and gene activity in algal microbiomes. The geographic differentiation of co-occurring microbes in algal microbiomes can be well explained by the latitudinal temperature gradient and associated break points in their beta diversity, with an average break point at 14 ± 4.3 °C separating cold and warm upper oceans. As global warming impacts upper ocean temperatures, we project that break points of beta diversity will move markedly pole-wards. Hence, abrupt regime shifts in algal microbiomes could be caused by anthropogenic climate change. Introduction Phytoplankton are a diverse group of largely photoautotrophic microorganisms encompassing algae and cyanobacteria 1 , 2 , contributing approximately half of the annual global carbon fixation 3 . Although the interconnected oceans generally do not limit their global dispersal 4 , 5 , 6 , many studies have shown that their local diversity is correlated with geographical partitioning based on either oceanographic fronts that separate populations or larger-scale ecosystem gradients such as the latitudinal gradient in local species diversity 7 , 8 , 9 , 10 . However, there is also evidence that environmental and ecological selection in geographically well-defined and seemingly unstructured marine ecosystems likely plays a role in generating and maintaining microbial diversity 11 . Regardless of whether inter- or intra-specific variation is considered to explain microbial diversity patterns in the global ocean, two variables usually explain most of the relatedness between species and populations, respectively: temperature and whole-community chlorophyll a 9 , 11 . Temperature is known to be a strong selective agent, as evidenced by thermal tolerance limits that track the geographic origin of species 9 , 12 , 13 . Furthermore, temperature, together with salinity and the flow of currents, creates ecological boundaries in the upper ocean such as oceanographic fronts, which might impact the structure and evolution of inter- and intra-specific diversity across spatio-temporal scales 10 , 14 . Chlorophyll a, on the other hand, which is a proxy for phytoplankton biomass, suggests that ecological selection is at play via interactions with organisms that benefit from phytoplankton and vice versa 11 . Besides herbivores such as copepods and krill, heterotrophic microbes such as bacteria and archaea are among the groups with significant interactions with phytoplankton 15 . Some of them even form intimate relationships including mutualism and symbiosis 16 , 17 . The space where most of the interactions between phytoplankton and heterotrophic prokaryotes take place is the phycosphere, a microscale, organic-matter-rich mucus region surrounding a phytoplankton cell, analogous to the rhizosphere of plants 18 , 19 .
Thus, organic matter released by phytoplankton is used as a substrate by prokaryotes, which sometimes provide essential bioactive compounds in return, such as vitamin B12. About 60% of examined heterokont microalgae (e.g. diatoms) require vitamin B12, which is synthesized by bacteria and archaea 20 . Those bacteria have thus formed a mutualistic relationship with phytoplankton that potentially helps to sustain primary productivity in many parts of the global ocean 16 . There is also evidence for species-specific diversity of algal microbiomes. Often, it is the phytoplankton partner that recruits heterotrophic microbes via the secretion of infochemicals, which elicit a response from the other microbes 19 . As these signalling processes can be species-specific and likely have co-evolved in association with responding partners, algal microbiomes are complex and dynamic, and their diversity might be driven by either ecological or environmental selection, generating and maintaining these intimate relationships over space and evolutionary time. As algal microbiomes underpin some of the largest food webs on Earth and drive global biogeochemical cycles, significant international efforts, especially over the last decade, have provided insights into what drives their diversity and global biogeography. For instance, large-scale ocean omics studies in the epipelagic realm as part of the Tara Oceans project 21 , 22 showed that associations among microbes were non-randomly distributed in co-occurrence networks and that their structure was driven by both local and global patterns 15 . Microbial networks that included a significant amount of prokaryotic phytoplankton (cyanobacteria) even appear to be responsible for the majority of carbon exported in the oligotrophic ocean 23 . Interestingly, some of the co-occurrence networks that contained eukaryotic phytoplankton groups were not taxon-specific and were dominated by mutual exclusions, which suggests that their biogeography may be influenced by predator-prey dynamics 24 . These studies have provided a step change in our understanding of how ecological interactions, in the context of changing environmental conditions, likely influence the diversity of the photoautotrophic microbial interactome in the global ocean. However, to assess how environmental conditions such as temperature and variable nutrient concentrations impact the diversity of algal microbiomes, it is instrumental to include the polar oceans. With their inclusion, the complete spectrum of co-varying environmental parameters can be used to assess how these parameters, on a truly global scale from pole to pole, impact differences in the variation of species identities and abundances between local assemblages across larger regions (beta diversity) 25 , 26 of interacting algal microbiomes, which, to the best of our knowledge, has not been addressed in previous studies. The application of beta diversity enables us to quantify the degree of differentiation among biological communities, which, across the complete latitudinal scale from pole to pole, provides insights into how marine microbes are latitudinally distributed. As the Arctic and Southern Oceans, and specifically their eukaryotic phytoplankton and associated prokaryotes, are often not included in global biodiversity surveys, our understanding of how environmental variables, including the habitat characteristics of polar oceans, influence differences in their diversity and activity is incomplete.
However, with the inclusion of polar communities, biogeographic differentiation is unlikely to reveal drivers responsible for small-scale and local differences in the relatedness of communities, because the extreme ends of the environmental spectrum are being considered. Rather, this approach provides insights into the environmental variables likely responsible for most of the latitudinal differentiation of microbial diversity, potentially overshadowing variables responsible for local differences in microbial diversity patterns. Our study therefore addresses how large-scale environmental differences on a nearly complete latitudinal scale from pole to pole correlate with the biogeographical differentiation of algal microbiomes, including the gene activity of eukaryotic phytoplankton. Furthermore, as the upper ocean is experiencing significant warming due to the production of anthropogenic carbon dioxide, we estimate how their biogeographic differentiation might change based on a model from the IPCC 5th Assessment Report. The main outcome of our work is that physico-chemical differences between polar and non-polar upper oceans have a strong influence on the dissimilarity of algal microbiomes, with respect both to the diversity of their co-occurring microbes and to the gene expression activity of their primary producers. These results suggest that there is an ecological boundary in the sub-polar oceans of both hemispheres, which not only alters the spatial scaling of algal microbiomes but also shifts pole-wards due to global warming. Results A meta-omics resource for algal microbiomes in the upper ocean from pole to pole Three different omics datasets were collected for this study from chlorophyll a maximum layers of the Arctic, Atlantic and Southern Oceans (Fig. 1A ): (1) 79 eukaryotic metatranscriptomes, (2) 57 16S and (3) 54 18S rDNA amplicon (V4 region) datasets, as subsets of the 82 total samples (Fig. 1A ). Sequencing was done at the U.S. Department of Energy Joint Genome Institute (JGI) as part of the JGI Community Science Project 532/300780 (Sea of Change: Eukaryotic Phytoplankton Communities in the Arctic Ocean). Fig. 1: Sampling sites and environmental metadata. A Stations for metatranscriptome sequencing (green) and 16 and 18S rDNA amplicon sequencing (red). Map was generated using Ocean Data View. B Latitude versus temperature (degrees Celsius). C Latitude versus nitrate and nitrite concentrations. D Latitude versus silicate concentrations. E Latitude versus phosphate concentrations. Nutrient concentrations in µmol L −1 . Full size image This dataset consists of sequence data from four separate cruises: ARK-XXVII/1 (PS80), 17th June to 9th July 2012; Stratiphyt-II, April to May 2011; ANT-XXIX/1 (PS81), 1st to 24th November 2012; and ANT-XXXII/2 (PS103), 20th December 2016 to 26th January 2017, and covers a transect of the Atlantic Ocean from Greenland to the Weddell Sea (71.36°S to 79.09°N). The 79 eukaryotic metatranscriptomes were sequenced (Illumina HiSeq-2000 instrument) to an average depth of 251 Mbp each based on standard JGI protocols. These data were processed by the Integrated Microbial Genomes and Microbiomes (IMG) pipeline at JGI 27 . For estimating microbial diversity, 16S and 18S rDNA amplicon datasets were generated (Illumina MiSeq) with average sequencing depths of 71.8 Mbp and 52.5 Mbp per sample, comprising on average 393,247 and 142,693 sequences per sample, respectively.
A custom bioinformatics pipeline was built for 18S rDNA classification, including a model to normalise the copy number of 18S rDNAs according to the estimated genome sizes of diverse eukaryotic microbes (Supplementary Figs. 1 , 2 ). Rarefaction analysis of all sequence datasets indicated that adequate sampling was achieved for all three types of datasets (Supplementary Fig. 3 ). From the total number of contigs (34,241,890) in our metatranscriptome dataset, 36,354,419 non-redundant genes could be predicted, and of these genes ca. 31% (11,205,641 genes) could be assigned to a Pfam domain 28 . Most of the identified prokaryotic and eukaryotic taxa were present at more than 20 stations and had an evenness of J' ≥ 0.5 (Supplementary Figs. 4 , 5 ). Only 22% of the 18S dataset could be assigned to taxa at the species level (Supplementary Figs. 4a , 6c ), while for the 16S dataset, 47% could be assigned to taxa at the genus level (Supplementary Fig. 4b , Supplementary Fig. 6d ). The metatranscriptomes represent a set of 36,354,419 non-redundant genes, of which nearly 28% could be annotated as being of eukaryotic origin and 31% had homology to known protein domains in the Pfam database. All sequence data were accompanied by measurements of temperature, salinity, and dissolved inorganic nitrate/nitrite, phosphate and silicate at the depth of sampling (Fig. 1B–E ; Supplementary Table 1 ). Temperatures in both hemispheres ranged from ca. −1.74 to 29.02 °C, reflecting the pole-to-equator distribution of annual average upper ocean temperatures (Fig. 1B ). Salinity varied between 31.0 and 36.9 PSU. Dissolved inorganic nutrients (µmol L −1 ) were most highly concentrated in the Southern Ocean, with minima for all nutrients at ca. 30°S/N (Fig. 1C–E ). Based on a canonical correspondence analysis (CCA) of all Pfams from the metatranscriptomes against these individual environmental variables (Supplementary Fig. 6a, b ), temperature accounted for the highest percentage of variation of all environmental variables in each dataset. Temperature also had a significantly positive correlation ( R 2 ≥ 0.63; p -value ≤ 0.001) with prokaryotic and eukaryotic diversity (Shannon Index) (Supplementary Fig. 7 ). Co-occurrence networks of expressed genes and microbial taxa The first pole-to-pole eukaryotic metatranscriptomes from chlorophyll a maximum layers (Fig. 1A ) enabled us to provide insights into how global-scale environmental conditions in the upper ocean drive the biogeographic differentiation of eukaryotic community gene expression. To identify which environmental variable was most responsible for a possible latitudinal differentiation in gene co-expression networks, we applied a weighted gene co-occurrence network analysis (WGCNA) 29 based on Pfam gene counts. Our WGCNA revealed two gene co-expression networks based only on positive links (Fig. 2A , Supplementary Table 5 ). A correlation analysis, part of the WGCNA package, was then conducted: each network's 'eigengene' (in WGCNA terminology, the first principal component of a network) was taken as representative of that network and correlated with the environmental variables, as shown in Fig. 2B (see the sketch below). Based on this analysis, temperature was identified as the primary driver of both networks, corroborating the results of our CCA analysis (see above and Supplementary Fig. 6 ).
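A minimal R sketch of this eigengene-environment correlation follows, assuming hypothetical 'pfam_counts' (samples × Pfams) and 'env' (samples × environmental variables) matrices; the soft-threshold power shown is illustrative and would normally be chosen with pickSoftThreshold.

```r
# Hedged sketch of the WGCNA module/eigengene analysis; 'pfam_counts'
# and 'env' are hypothetical placeholders, and power = 6 is illustrative
# (normally chosen with pickSoftThreshold()).
library(WGCNA)

datExpr <- log10(pfam_counts + 1)  # log10-scaled gene counts, as above

# Signed network so that only positive co-occurrence links define modules
net <- blockwiseModules(datExpr, power = 6,
                        networkType = "signed", TOMType = "signed",
                        minModuleSize = 30)

# Module eigengenes: first principal component of each module
MEs <- moduleEigengenes(datExpr, colors = net$colors)$eigengenes

# Pearson correlations of eigengenes with environmental variables,
# with Student asymptotic p-values (the values plotted in Fig. 2B)
moduleTraitCor <- cor(MEs, env, use = "p")
moduleTraitP   <- corPvalueStudent(moduleTraitCor, nSamples = nrow(datExpr))
```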
Whereas salinity was co-correlated with temperature, the major inorganic nutrients such as nitrate, phosphate and silicate were significantly ( p -value ≤ 0.001) anti-correlated with temperature and salinity. The gene co-expression network designated as blue ( N = 1614 Pfams) has a strong positive relationship with temperature (correlation coefficient of +0.72; p -value = 2e−12) and is hence considered to be the warm network. The network designated as turquoise ( N = 2369 Pfams) has a strong negative relationship with temperature (correlation coefficient of −0.8; p -value = 1e−16) and is hence considered to be the cold network. A total of 7,172,786 genes with an average length of 757 bp were part of the cold network, whereas the warm network was composed of 4,954,085 genes with an average length of 655 bp. The average GC content of transcripts was 51% in the cold network and 52% in the warm network. In total, 831,540,849 reads of the cold network and 1,239,584,159 reads of the warm network could be assigned Pfam domains. Unassigned Pfams designated as grey ( N = 2 Pfams) did not form a co-expression network and had only a significantly positive correlation (+0.39; p -value = 8e−04) with latitude. Fig. 2: Co-occurrence networks of protein families in eukaryotic metatranscriptomes and their gene ontology. On the log10-scaled gene counts of protein families (Pfams), two networks were found: A blue = warm ( n = 1614) and turquoise = cold ( n = 2369). B Co-occurrence analysis of the Pfam protein families dataset; two networks were found, a turquoise (cold) and a blue (warm), as well as a grey group (2 Pfams: no network). Correlation heatmap between the networks and environmental parameters. The colours correspond to the correlation values; red is positively correlated and blue is negatively correlated. The values in each square correspond to the Pearson correlation coefficient (top) with the p -value in brackets below. C Gene ontology (GO) analysis of the co-occurrence of the Pfam protein families dataset for both co-occurrence networks. Full size image Gene ontology (GO) analyses with Pfams from both networks (Fig. 2C ; Supplementary Fig. 8 ) showed that the cold network was enriched in several molecular functions associated with catalytic activity in general, and specifically with catalytic activity acting on proteins and RNAs. Strongly enriched in the warm network were cellular components including mitochondria, ribosomes, non-membrane-bound organelles, and the envelope. The mapping of the node-specific Pfam abundance for each network across all stations is shown in Fig. 3A , B . Pfams of the cold network were mainly recruited from the Southern Ocean and the Arctic (86.7% in total), with the lowest abundance of Pfams mapping to stations between 30°N and 30°S (13.3% in total). In contrast, Pfams from the warm network were mainly recruited from the tropical and temperate North Atlantic (48.1% in total). Interestingly, slightly more Pfams were recruited from the Arctic (38.7% in total) than from the Southern Ocean (13.2% in total) for this network. Fig. 3: Biogeographical mapping of the node-specific abundance for each protein family (Pfam) network across all stations from pole to pole. Contribution of Pfam-containing sequences from individual metatranscriptome sites to the corresponding protein family co-occurrence networks. Bubbles are scaled according to percentage contribution to the total abundance pool. A Pfam biogeography of the cold co-occurrence network and B Pfam biogeography of the warm co-occurrence network.
Abundance is given as percentage contribution to the total sequence pool per site, with increasing contribution from small to large circles and from blue to red. Full size image To reveal how environmental gradients from the Arctic to the equator influence associations between microbial eukaryotes and prokaryotes, we applied the same WGCNA 29 as used for the eukaryotic metatranscriptomes to the log10-transformed, normalized (according to genome size; Supplementary Fig. 2 ) abundances of 18S and 16S rDNA sequences. Co-occurrences were estimated on the normalized abundance of sequences at the species level for eukaryotes (18S) and the genus level for prokaryotes (16S). Similar to the gene expression co-occurrence analysis, we obtained two major networks between eukaryotes and prokaryotes that correlated most strongly with temperature and latitude (Fig. 4A, B ). Thus, similar to the gene co-expression networks, we identified a cold network (blue; n = 51 species; correlation coefficient ≤ 0.79; p -value ≤ 1e−10) and a warm network (turquoise; n = 70 species; correlation coefficient ≥ 0.83; p -value ≤ 3e−12) of co-occurring eukaryotic and prokaryotic microbes (Supplementary Table 2 ). Unlike for the metatranscriptomes, there were no unassigned 16S or 18S sequences. In the cold network, green algae of the group Prasinophytes were species-rich, and the prymnesiophyte Phaeocystis cordata had the highest number of connections to other species in this cluster (Supplementary Table 2 ). The prokaryotic community had several highly connected bacterial taxa known to include cold-adapted species, some of which co-occur with diatoms (e.g. Glaciecola) 30 . Two bacterial taxa in this cluster (Herbaspirillum, Bradyrhizobium) are known to include species with the ability to fix atmospheric N 2 31 , 32 . Although Coscinodiscophyceae were particularly abundant in the cold waters of the Arctic, only one species ( Actinocyclus actinochilus ) was part of this cluster. The network from warm waters was very different in terms of species composition and co-occurrence patterns. Unlike in the cold network, cyanobacteria were among the most highly connected taxa, including Prochlorococcus and Synechococcus. Small and mostly flagellated species from the group Heterokontophyta dominated the most diverse group of eukaryotes in this cluster. There were also dinoflagellates, haptophytes and pelagophytes. Many highly connected heterotrophic bacteria in this cluster are known to be associated with particles (e.g. soil, biofilm), and two taxa are known to include photoheterotrophic species that contain bacteriochlorophyll (Erythrobacter, Roseivivax) 33 . This cluster contained neither diatoms nor prasinophytes. Eight classes of species were shared between the two co-occurrence networks, including Gammaproteobacteria, Alphaproteobacteria and Flavobacteriia. A full list of the classes of species can be found in Supplementary Table 2 . Fig. 4: Co-occurrence networks of 16 and 18S rDNAs, their biodiversity, and the biogeographical mapping of the node-specific abundance for each taxonomic network across all stations from pole to pole. On the log10-transformed abundances at the 18S rDNA species level and 16S rDNA genus level, two networks were found: A cold ( n = 51) and warm ( n = 70). A list of species names and class names can be found in Supplementary Table 2 . B Co-occurrence analysis at the 18S rDNA species level and 16S rDNA genus level; two networks were found, a turquoise (cold) and a blue (warm).
Correlation heatmap between the networks and environmental parameters. The colours correspond to the correlation values; red is positively correlated and blue is negatively correlated. The values in each square correspond to the Pearson correlation coefficient (top) with the p -value in brackets below. C Taxa biogeography of the cold 16/18S co-occurrence network. D Taxa biogeography of the warm 16/18S co-occurrence network. Abundance is given as percentage contribution to the total sequence pool per site, with increasing contribution from small to large circles and from blue to red. Full size image Biogeographical mappings of the node-specific 16 and 18S abundance for each network across all stations are shown in Fig. 4C, D . They revealed that 90.01% of sequences from the cold network were recruited from north of 60° in the Arctic Ocean, with the opposite biogeographical recruitment pattern for the warm network (78.25% from stations south of 60°N). The latitudinal differentiation (beta diversity) of expressed eukaryotic genes and microbial taxa As the co-occurrence analysis revealed, for both expressed genes and taxa, that the environmental difference between polar and non-polar upper ocean waters appears to be most responsible for the geographical separation of algal microbiomes, we tested this result by calculating the ratio between regional and local sequence diversity (beta diversity) across all stations, which provides a measure of genetic differentiation between communities across latitudes. The partitioning into cold and warm co-occurrence networks suggests that there are major break points in this genetic differentiation demarking the transition between polar and non-polar upper ocean ecosystems, with temperature and latitude likely being the major drivers. To test this hypothesis, we calculated a presence–absence matrix for each dataset. A multiple-site dissimilarity analysis was performed on each presence–absence matrix with beta.pair, a function from the betapart R package, using the Sørensen dissimilarity index 34 . These values were then plotted against all environmental variables to obtain a range of values in which a break point might be located. We then searched through these possible break points for the one with the lowest mean squared error (a minimal sketch of this search is given below). The search for break points was performed using all environmental variables, including nutrients and salinity, as they are known to have an impact on microbial diversity and activity (Supplementary Figs. 9 , 10 ) 14 . Latitude correlates in the same way as temperature (Figs. 2B , 4B , 5A, B ). Only the strong latitudinal gradient of temperature showed significant break points in beta diversity, which largely separated cold from warm microbial communities and their associated metabolism (Fig. 5A ). For the metatranscriptomes, the break point was estimated to be at 18.06 °C (Fig. 5A ); for 16S we identified a break point at ca. 9.49 °C (Fig. 5C ) and for 18S at 13.96 °C (Fig. 5D ). The average temperature across the taxonomic and functional beta diversity break points of eukaryotic phytoplankton and their co-occurring bacteria is 14 ± 4.3 °C. The metatranscriptome data enabled us to identify the geographical locations of the break points, as this dataset spans pole to pole (Fig. 5B ). The two break points identified largely separate polar from non-polar oceans (Fig. 5B ). Fig. 5: Beta diversity break-point analyses. A, B Break points of protein families from the metatranscriptome dataset. C , D Break points of the 18S rDNA and 16S rDNA datasets.
The numbers correspond to sample locations as shown in Fig. 1A . The y -axis represents beta diversity across all stations. The x -axis represents temperature in A , C and D , and latitude in B . The horizontal lines indicate the break points in beta diversity. For the Pfam protein families dataset in ( A ), the break point is at 18.06 °C with a p -value of 3.741e−10. In B the break point is at 52.167 degrees altered latitude (37.833 degrees latitude) with a p -value of 2.225e−07. For the 16S rDNA dataset in ( C ), the break point is at 9.49 °C with a p -value of 1.413e−4. For the 18S rDNA dataset in ( D ), the break point is at 13.96 °C with a p -value of 8.407e−11. Full size image Projection of geographical shifts in beta-diversity break points across the North Atlantic The global ocean is a significant sink of heat, with the consequence that the upper ocean has become warmer over the past 100 years due to the anthropogenic production of carbon dioxide. Thus, stratified warm-water masses expand pole-wards. This is of particular relevance in the North Atlantic and North Pacific, and even the Arctic Ocean 35 , 36 . To simulate how warming of the North Atlantic might impact the beta-diversity break points, and therefore local changes in algal microbiomes, we utilised a model from the IPCC 5th Assessment Report. For estimates of changes over the 21st century, we used the RCP 8.5 HadGEM2-ES CMIP5 experiment 37 . A historical HadGEM2-ES experiment was also run for CMIP5, which we used to bias-correct the projected temperatures. The resulting shifts in break points from these temperatures are shown in Fig. 6 . Grid boxes that contain sea ice in the climatology were excluded from this analysis. Projections from the model show that the geographical region most affected by shifts in the diversity of algal microbiomes over the coming decades is the area between 40 and 60°N, which includes the North Sea and most of the British Isles (Fig. 6 ). Fig. 6: IPCC-based modelling of climate-driven shifts in beta diversity break points. Observed (1961–1990) and modelled (2010–2099) changes over the 21st century in the thresholds for break points in beta diversity. Regions are shown as red for metatranscriptomes (>18.06 °C), orange for 18S (<18.06 °C, >13.96 °C), yellow for 16S (<13.96 °C, >9.49 °C) and blue for temperatures <9.49 °C, for 1961–1990 observations from the HadISST dataset. Modelled temperature estimates from the HadGEM2-ES CMIP5 run are 30-year averages for 2010–2039, 2040–2069, and 2070–2099, respectively. Temperatures from HadGEM2-ES have been calibrated to the HadISST observations as described in the methods. The black solid line represents the 15 °C and the dashed line the 14 °C average upper ocean temperature. Full size image Discussion Our study has provided evidence that differences in environmental conditions between polar and non-polar upper oceans can explain the partitioning of co-occurring sequences into two major algal microbiomes (Figs. 2 – 4 ). The latitudinal differentiation of their individual sequences based on beta diversity is mainly correlated with the latitudinal gradient of temperature in the upper ocean, especially at the transition zones (break points) between polar and non-polar oceans (Fig. 5 ), hence corroborating our WGCNA analysis (Figs. 2 – 4 ).
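A minimal R sketch of the beta-diversity break-point search described in the Results, under stated assumptions: 'pa' is a hypothetical station × taxon presence-absence matrix and 'temp' the matching temperature vector; pairwise Sørensen dissimilarities averaged per station stand in here for the study's exact dissimilarity aggregation, and the two-segment scan is one simple way to minimise the mean squared error.

```r
# Hedged sketch of the break-point search; 'pa' and 'temp' are
# hypothetical placeholders, and the per-station averaging of pairwise
# Sorensen dissimilarities is one plausible reading of the method.
library(betapart)  # beta.pair()

# Pairwise Sorensen dissimilarity between stations, then each station's
# mean dissimilarity to all others as its beta-diversity value
d    <- as.matrix(beta.pair(pa, index.family = "sorensen")$beta.sor)
beta <- rowMeans(d)

# Two-segment piecewise regression: scan candidate break points and keep
# the one with the lowest mean squared error
candidates <- seq(min(temp) + 1, max(temp) - 1, by = 0.1)
mse <- sapply(candidates, function(bp) {
  fit <- lm(beta ~ temp * I(temp > bp))  # separate slopes on each side
  mean(resid(fit)^2)
})
breakpoint <- candidates[which.min(mse)]
```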
However, many other environmental parameters, including essential nutrients, were either significantly negatively or positively correlated with temperature and latitude, suggesting that they also play an important role in the biogeographic differentiation of algal microbiomes in the upper ocean. The negative correlation of inorganic nutrients with temperature (Figs. 2B , 4B ) reflects the observation that cold upper waters are usually nutrient-rich whilst warmer upper ocean waters tend to be nutrient-poor, considering global and annual averages 38 . Thus, differences in the physical structure (e.g. seasonally mixed vs permanently stratified water) of the upper ocean caused by latitudinal gradients of temperature might be the main reason for the separation into largely polar (cold) and non-polar (warm) algal microbiomes. The difference in recruiting sequences from polar vs non-polar oceans is larger for the two taxonomic networks (Fig. 4C, D ) than for the gene expression networks (Fig. 3A, B ). Considering that the number and redundancy of expressed genes and Pfams in metatranscriptomes is significantly higher than in the more distinct datasets of 16S and 18S sequences, this numerical difference may have contributed to the differing degrees of latitudinal partitioning. The stronger recruitment of Pfams from the Arctic (38.7% in total) compared to the Southern Ocean (13.2% in total) for the warm network might be due to the North Atlantic Current (NAC), which was sampled (Fig. 1 ) and likely carried microbes from lower latitudes, as the NAC is a northward prolongation of the Gulf Stream. In contrast, the frontal system in the Southern Ocean represents a boundary less prone to poleward range shifts of microbial species from lower latitudes 10 . Hence, fewer Southern Ocean Pfams were recruited to the warm co-occurrence network. Although several global-scale studies, with Tara Oceans 22 being the most significant, have already revealed that temperature can be considered the best predictor of local epipelagic plankton diversity 9 , our study has extended this work by including both polar oceans and by focusing on eukaryotic phytoplankton and their co-occurring prokaryotic microbes. Furthermore, to the best of our knowledge, this is the first study based on latitudinal beta diversity to reveal genetic differentiation in marine microbial communities from pole to pole in relation to variable environmental conditions. Our results therefore provide insights into how changing environmental conditions correlate with biodiversity changes (break points in beta diversity) subject to large-scale environmental fluctuations and disturbances 26 . This knowledge is essential for predicting the consequences of global warming (Fig. 6 ) and may therefore guide environmental management. Most previous studies compared local species diversity (alpha diversity) across latitudes 9 . Nevertheless, temperature was also identified as one of the most important variables explaining differences in the species composition of local communities across large-scale latitudinal gradients. The concept of ocean biogeochemical provinces (Longhurst provinces) 39 often matches local differences in upper-ocean microbiomes 14 and their linked biogeochemical activity, such as nutrient and carbon cycling 40 . Although our study confirms the large-scale genetic differentiation of algal microbiomes between polar (ICE, SPSS) and non-polar Longhurst provinces (e.g.
STSS, NHSTPS, SHSTPS) covered by our pole-to-pole transect, we did not identify geographic differentiation between any of the non-polar Longhurst provinces. Arguably, there are no stronger environmental differences than those between polar and non-polar upper oceans, mainly caused by strong seasonality closer to the poles, overall low temperatures, the presence of sea ice, and differences in seasonal mixing 38 . Thus, environmental differences between polar and non-polar oceans may impose much stronger geographic differentiation in the biodiversity of algal microbiomes and their expressed genes compared to environmental differences between Longhurst provinces of non-polar oceans (e.g. STSS, NHSTPS, SHSTPS). As the Arctic and the Southern Ocean do not significantly differ in their overall environmental conditions, this may explain why we have not seen a differentiation of algal microbiomes between the two polar oceans. Hence, Pfams for the cold co-occurrence cluster were recruited from both polar oceans (Fig. 3 ). The enrichment of GO terms for catalytic activity in the cold Pfam network likely reflects metabolic requirements to thrive under polar conditions. Most cold-adapted microbes optimise their enzymes to increase their catalytic activity at lower temperatures 41 . The optimisation of enzymes for activity at low temperatures is usually facilitated by destabilisation of the molecular structures (e.g. the active site). The enrichment of GO terms specifically for the catalytic activity of proteins and RNAs (Fig. 2C ) suggests that these polar microbial communities have increased not only the catalytic activity of their enzymes but also the catalytic activity that acts to modify RNAs 42 . The GO enrichment of cellular components in the warm network (Fig. 2C ) might reflect an increased turnover of subcellular compartments, including their membranes, due to increased metabolic activity (respiration in mitochondria) and stress (reactive oxygen species) at higher temperatures, which is known to occur in microalgae 43 . The taxonomic differences based on 16S and 18S rDNA sequencing between cold and warm co-occurrence networks largely confirm differences in the biogeographical distribution of individual species across latitudinal regions of the global upper ocean 9 , 22 , 44 , 45 , 46 , 47 . For instance, Prochlorococcus and Synechococcus mainly dominate tropical and subtropical upper oceans together with eukaryotic pico- and nanoflagellates. Those taxa were found to be dominant in the warm network, with a significant number of connections to additional taxa. In contrast, the cold network was characterised by abundant and well-connected sequences from phylogenetic groups known to include cold-adapted bacteria (e.g. Polaribacter, Glaciecola) and microalgae such as diatoms (e.g. Actinocyclus actinochilus) and prymnesiophytes (e.g. Phaeocystis cordata). Interestingly, two previous studies have suggested a similar geographic partitioning, but for phytoplankton productivity and mainly prokaryotic biodiversity. Behrenfeld et al. 38 identified that the physical environment of the upper ocean impacts the net primary production (NPP) of phytoplankton communities. On a global scale including polar oceans, they showed that differences in upper-ocean temperature and stratification across a latitudinal gradient were mainly responsible for the partitioning of NPP, which was higher in cold, nutrient-rich, high-latitude regions, whereas lower NPP was observed in warm, nutrient-poor and permanently stratified upper oceans.
The demarcation zone between both global regions for NPP was estimated to be at approximately 15 °C on an annual average. This temperature is in good agreement with the average temperature for breakpoints in the taxonomic and functional beta diversity of eukaryotic phytoplankton and their co-occurring bacteria at 14 ± 4.3 °C. A similar demarcation boundary was found for the latitudinal partitioning in diversity and activity of prokaryote-enriched metagenomes and metatranscriptomes, respectively 48 . Thus, our data together with these previous studies provide support for the hypothesis that environmental conditions separating cold (nutrient-rich) from warm (nutrient-poor) upper oceans are likely responsible for the latitudinal differentiation of algal microbiomes underpinning differences in ocean productivity and global biogeochemical cycles. The latitudinal gradient of temperature caused by seasonal differences in solar radiation, together with associated conditions such as differences in upper-ocean stratification and nutrient concentrations, appears to be the main driver. As the anthropogenic production of carbon dioxide raises global temperatures, which has already caused significant ocean warming, it is likely that the spatial distribution of algal microbiomes will change according to poleward shifts in geographical demarcation boundaries matching breakpoints in the beta diversity of species and their gene pool. Our model for the North Atlantic shows that the area between 40 and 60° N might be affected the most over approximately the next 100 years, as we forecast a complete replacement of cold algal microbiomes (Fig. 6 ) in this geographical area. As the area between 40 and 60° N, especially the North Sea, is known to be nutrient-rich and therefore productive, a replacement of current microbial communities is likely to have a significant impact on food webs, including fisheries, with consequences for associated industries. Taken together, our study confirms the latitudinal distribution pattern in local (alpha) diversity of complex marine microbial communities, with a significant decrease from the equator towards polar ecosystems (Supplementary Fig. 7 ) 9 . However, pole-to-pole datasets, which represent a more complete spectrum of environmental variables, offer the opportunity to identify the most pronounced differences in the variation of alpha diversity across larger biogeographic regions (beta diversity). The latter, to the best of our knowledge, has never been estimated before for oceanic microbes, although this knowledge is instrumental for the spatial scaling of changes in diversity, i.e. loss and gain 26 . The application of beta diversity to pole-to-pole algal microbiomes revealed for the first time that physico-chemical differences between polar and non-polar upper oceans have a strong influence not only on changes in their diversity but also on the gene expression activity of their primary producers. Consequently, there appear to be ecological boundaries in the sub-polar oceans of both hemispheres, which not only alter the spatial scaling of algal microbiomes (breakpoints in beta diversity) but also shift pole-wards due to global warming.
Methods Research cruises This dataset consists of sequence data from four separate cruises: ARK-XXVII/1 (PS80), 17th June to 9th July 2012; Stratiphyt-II, April to May 2011; ANT-XXIX/1 (PS81), 1st to 24th November 2012; and ANT-XXXII/2 (PS103), 16th December 2016 to 3rd February 2017. It covers a transect of the Atlantic Ocean from Greenland to the Weddell Sea (71.36°S to 79.09°N) (Supplementary Table 1 ). In order to study the composition, distribution and activity of microbial communities in the upper ocean across the broadest latitudinal range possible, samples were collected during four field campaigns as shown in Fig. 1A . The first set of samples was collected in the North Atlantic Ocean from April to May 2011 by Dr. Willem van de Poll of the University of Groningen, Netherlands and Dr. Klaas Timmermans of the Royal Netherlands Institute for Sea Research. The second set of samples was collected in the Arctic Ocean from June to July 2012, and the third set in the South Atlantic Ocean from October to November 2012; both were collected by Dr. Katrin Schmidt of the University of East Anglia. The final set of samples was collected in the Antarctic Ocean from December 2016 to January 2017 by Dr. Allison Fong of the Alfred-Wegener Institute for Polar and Marine Research, Bremerhaven, Germany. Sampling Water samples from the Arctic Ocean and South Atlantic Ocean expeditions were collected using 12 L Niskin bottles (Rosette sampler with an attached Sonde (CTD: conductivity, temperature, depth)) either at the chlorophyll maximum (10–110 m) and/or in the upper ocean (0–10 m). As soon as the Rosette sampler was back on board, water samples were immediately transferred into plastic containers and transported to the laboratory. All samples were accompanied by measurements of salinity, temperature, sampling depth and silicate, nitrate and phosphate concentrations (Supplementary Table 1 ). Water samples were pre-filtered with a 100 μm mesh to remove larger organisms and subsequently filtered onto 1.2 μm polycarbonate filters (Isopore membrane, Millipore, MA, USA). All filters were snap frozen in liquid nitrogen and stored at −80 °C until further analysis. Water samples from the North Atlantic Ocean cruise were also taken with 12 L Niskin bottles attached to a Rosette sampler with a Sonde. However, these samples were filtered onto 0.2 μm polycarbonate filters (Isopore membrane, Millipore, MA, USA) without pre-filtration, but were snap frozen in liquid nitrogen and stored at −80 °C like the other samples. Water samples from the Southern Ocean cruise were taken with 12 L Niskin bottles attached to an SBE911plus CTD system equipped with 24 Niskin samplers. These samples were filtered onto 1.2 μm polycarbonate membrane filters (Merck Millipore, Germany) in a container cooled to 4 °C, and were snap frozen in liquid nitrogen and stored at −80 °C like the other samples. Environmental data recorded at the time of sampling can be found in Supplementary Table 1 . DNA extractions: Arctic Ocean and South Atlantic Ocean samples DNA was extracted with the EasyDNA Kit (Invitrogen, Carlsbad, CA, USA) with modifications to optimise DNA quantity and quality. Briefly, cells were washed off the filter with pre-heated (65 °C) Solution A, and the supernatant was transferred into a new tube with one small spoon of glass beads (425–600 μm, acid washed) (Sigma-Aldrich, St. Louis, MO, USA). Samples were vortexed three times in intervals of 3 s to break the cells.
RNase A was added to the samples and incubated for 30 min at 65 °C. The supernatant was transferred into a new tube and Solution B was added, followed by a chloroform phase separation and an ethanol precipitation step. DNA was pelleted by centrifugation and washed several times with isopropanol, air dried and suspended in 100 μL TE buffer (10 mM Tris-HCl, pH 7.5, 1 mM EDTA, pH 8.0). Samples were snap frozen in liquid nitrogen and stored at −80 °C until sequencing. DNA extractions: North Atlantic Ocean samples North Atlantic Ocean samples were extracted with the ZR-Duet™ DNA/RNA MiniPrep kit (Zymo Research, Irvine, USA), allowing simultaneous extraction of DNA and RNA from one sample filter. Briefly, cells were washed from the filters with DNA/RNA Lysis Buffer and one spoon of glass beads (425–600 μm, Sigma-Aldrich, MO, USA) was added. Samples were vortexed quickly and loaded onto Zymo-Spin™ IIIC columns. The columns were washed several times and DNA was eluted in 60 μL DNase-free water. Samples were snap frozen in liquid nitrogen and stored at −80 °C until sequencing. DNA extractions: Southern Ocean samples DNA from the Southern Ocean samples was extracted with the NucleoSpin Soil DNA extraction kit (Macherey‐Nagel) following the manufacturer's instructions. Briefly, cells were washed from the filters with DNA lysis buffer into a lysis tube containing glass beads. Samples were disrupted by bead beating for 2 × 30 s, interrupted by 1 min cooling on ice, and loaded onto the NucleoSpin columns. The columns were washed three times and DNA was eluted in 50 μL DNase-free water. Samples were stored at −20 °C until further processing. Amplicon sequencing of 16S and 18S rDNA All extracted DNA samples were sequenced and pre-processed by the Joint Genome Institute (JGI) (Department of Energy, Berkeley, CA, USA). iTAG amplicon sequencing was performed at JGI with primers for the V4 region of the 16S (FW(515F): GTGCCAGCMGCCGCGGTAA; RV(806R): GGACTACNVGGGTWTCTAAT) 49 and 18S (FW(565F): CCAGCASCYGCGGTAATTCC; RV(948R): ACTTTCGTTCTTGATYRA) 50 rRNA genes (Supplementary Table 6 ), on an Illumina MiSeq instrument with a 2 × 300 base pair (bp) read configuration 51 . 18S sequences were pre-processed; this consisted of scanning for contamination with the tool Duk (JGI) and quality trimming of reads with cutadapt 52 . Paired-end reads were merged using FLASH 53 with the maximum mismatch set to 0.3 and the minimum overlap set to 20. A total of 54 18S samples passed quality control after sequencing. After read trimming, there was an average of 142,693 read pairs per 18S sample, with an average length of 367 bp and 2.8 Gb of data over all samples. 16S sequences were pre-processed; this consisted of merging the overlapping read pairs into unpaired consensus sequences using USEARCH's merge pairs 54 with the maximum percentage of differences (merge max diff pct) set to 15.0. Any reads that could not be merged were discarded. JGI then applied USEARCH's search oligodb tool with the parameters mean length (len mean) set to 292, length standard deviation (len stdev) set to 20, primer trimmed max difference (primer trim max diffs) set to 3, a list of primers, and length filter max difference (len filter max diffs) set to 2.5, to ensure the polymerase chain reaction (PCR) primers were located in the correct orientation and within the expected spacing. Reads that did not pass this quality control step were discarded.
With the maximum expected error rate (max exp err rate) set to 0.02, JGI evaluated the quality scores of the reads, and those with too many expected errors were discarded. Identical sequences were de-duplicated; these were then counted and sorted alphabetically for merging with other such files later. A total of 57 16S samples passed quality control after sequencing. There was an average of 393,247 read pairs per sample and an average length of 253 bp per sequence, with a total of 5.6 Gb. RNA extractions: Arctic Ocean and Atlantic samples RNA from the Arctic and Atlantic Ocean samples was extracted using the Direct-zol RNA Miniprep Kit (Zymo Research, USA). Briefly, cells were washed off the filters with Trizol into a tube with one spoon of glass beads (425–600 μm, Sigma-Aldrich, MO, USA). Filters were removed and the tubes bead-beaten for 3 min. An equal volume of 95% ethanol was added, the solution was transferred onto a Zymo-Spin™ IICR column, and the manufacturer's instructions were followed. Samples were treated with DNase to remove DNA impurities, snap frozen in liquid nitrogen and stored at −80 °C until sequencing. RNA extractions: Southern Ocean RNA from the Southern Ocean samples was extracted using the QIAGEN RNeasy Plant Mini Kit (QIAGEN, Germany) following the manufacturer's instructions with on-column DNA digestion. Cells were broken by bead beating, as for the DNA extractions, before loading samples onto the columns. Elution was performed with 30 µL RNase-free water. Extracted samples were snap frozen in liquid nitrogen and stored at −80 °C until sequencing. Metatranscriptome sequencing All samples were sequenced and pre-processed by the U.S. Department of Energy Joint Genome Institute (JGI). Metatranscriptome sequencing was performed on an Illumina HiSeq-2000 instrument 27 . A total of 79 samples passed quality control after sequencing, with 19.87 Gb of sequence read data over all samples for analysis. This comprised a total of 34,241,890 contigs, with an average length of 503 bp and an average GC content of 51%, and resulted in 36,354,419 non-redundant genes being detected. JGI employed their suite of tools called BBTools 55 for pre-processing the sequences. First, the sequences were cleaned using Duk, a tool in the BBTools suite that performs various data-quality procedures such as quality trimming and filtering by kmer matching. In our dataset, Duk identified and removed adaptor sequences, and also quality-trimmed the raw reads to a phred score of Q10. In Duk, the parameters were: kmer-trim (ktrim) set to r, kmer (k) set to 25, shorter kmers (mink) set to 12, quality trimming (qtrim) set to r, trimming phred (trimq) set to 10, average quality below (maq) set to 10, maximum Ns (maxns) set to 3, minimum read length (minlen) set to 50, the "tpe" flag set to t so that both reads are trimmed to the same length, and the "tbo" flag set to t to trim adaptors based on pair overlap detection. The reads were further filtered to remove process artefacts, also using Duk, with the kmer (k) parameter set to 16. BBMap 55 is another tool in the BBTools suite; it performs mapping of DNA and RNA reads to a database by aligning the reads using a multi-kmer-seed-and-extend approach. To remove ribosomal RNA reads, the reads were aligned against a trimmed version of the SILVA database using BBMap with the parameters minratio (minid) set to 0.90, the local alignment converter flag (local) set to t and the fast flag (fast) set to t. Any human reads identified were also removed using BBMap.
BBMerge 56 is a tool in the BBTools suite that performs merging of overlapping paired-end reads. For assembling the metatranscriptome, the reads were first merged with BBMerge, and then BBNorm was used to normalise the coverage so as to generate a flat coverage distribution. This type of operation can speed up assembly and can even result in improved assembly quality. Rnnotator 52 was employed for assembling metatranscriptome samples 1–68. Rnnotator assembles transcripts using a de novo assembly approach for RNA-Seq data and accomplishes this without a reference genome 52 . MEGAHIT 57 was employed for assembling metatranscriptome samples 69–82. The tool BBMap was used for reference mapping: the cleaned reads were mapped to metagenome/isolate reference(s) and the metatranscriptome assembly. Metatranscriptome analysis JGI performed the functional analysis on the metatranscriptomic dataset. JGI's annotation system is called the Metagenome Annotation Pipeline (MAP) (v4.15.2) 27 . JGI used HMMER 3.1b2 58 and the Pfam v30 59 database for the functional analysis of our metatranscriptomic dataset. This resulted in 11,205,641 genes assigned to one or more Pfam domains, yielding 8379 Pfam functional assignments and their gene counts across the 79 samples. The files were further normalised by applying hits per million. 18S rDNA analysis A reference dataset of 18S rRNA gene sequences representing algal taxa was compiled for the construction of the phylogenetic tree by retrieving sequences of algae and outgroup taxa from the SILVA database (SSUREF 115) 60 and the Marine Microbial Eukaryote Transcriptome Sequencing Project (MMETSP) database 61 . The algae reference database consists of 1636 species from the following groups: Opisthokonta, Cryptophyta, Glaucocystophyceae, Rhizaria, Stramenopiles, Haptophyceae, Viridiplantae, Alveolata, Amoebozoa and Rhodophyta. A diagram of the 18S classification pipeline can be found in Supplementary Fig. 1 . In order to construct the algae 18S reference database, we first retrieved all eukaryotic species from the SILVA database with a sequence length of ≥1500 base pairs (bp) and converted all base letters of U to T. Under each genus, we took the first species to represent that genus. Using a custom written script ( ), the species of interest (as stated above) were selected from the SILVA database and classified with NCBI taxa IDs, and a sequence information file was produced that describes each of the algae sequences by their sequence ID and NCBI species ID. Taxonomy from the NCBI database, eukaryote sequences from the SILVA database and a list of algal taxa including outgroups were used as input for the script. This information was combined with the MMETSP database, excluding duplications. The algae reference database was clustered to remove closely related sequences with CD-HIT (4.6.1) 62 using a similarity threshold of 97%. Using ClustalW (2.1) 63 , we aligned the reference sequences with the iteration number parameter set to 5. The alignment was examined by colour-coding each species according to its group and visualising in iTOL 64 . A few species were observed to misalign with other groups, and these were deleted using Jalview 65 . The resulting alignment was tidied up with TrimAl (1.1) 66 by applying parameters to delete any positions in the alignment that have gaps in 10% or more of the sequences, unless this would leave less than 60% of the sequence remaining.
A maximum likelihood phylogenetic reference tree and statistics file based on our algae reference alignment were constructed by employing RAxML (8.0.20) 67 with a general time reversible model of nucleotide substitution along with the GAMMA model of rate heterogeneity. For a description of the lineages of all species back to the root in the algae reference database, the taxa IDs were submitted for each species to extract a subset of the NCBI taxonomy with the NCBI taxtastic tool (0.8.4) 68 . Based on the algae reference multiple sequence alignment, a profile HMM was created with HMMER3 (3.1b1) 69 . A pplacer reference package was generated using taxtastic, which produced an organised collection of all the files and taxonomic information in one directory. With the reference package, a SQLite database was created using pplacer's Reference Package PReparer (rppr). With hmmalign, the query sequences were aligned to the reference set, creating a combined Stockholm format alignment. Pplacer (1.1) 70 was used to place the query sequences on the phylogenetic reference tree by means of the reference alignment according to a maximum likelihood model 70 . The place files were converted to CSV with pplacer's guppy tool in order to take those with a maximum likelihood score of ≥0.5 and count the number of reads assigned to each classification. This resulted in 6,053,291 taxonomically assigned reads being taken for analysis. Normalisation of 18S rDNA gene copy number 18S rDNA gene copy numbers vary widely among eukaryotes. In order to estimate the abundances of the species in the samples, the data had to be normalised. Previous work has explored the link between copy number and genome size 71 . However, there is no single database of 18S rDNA gene copy numbers for eukaryote species. To address this, the gene copy numbers and related genome sizes of 185 species across the eukaryote tree were investigated and plotted (Supplementary Fig. 2 , Supplementary Table 4 ) 68 , 71 , 72 , 73 , 74 , 75 , 76 , 77 , 78 , 79 . Based on the log-transformed data, a significant correlation between genome size and 18S copy number was observed (R² = 0.55, p-value < 2.2e−16). A regression equation was determined (f(x) = 0.66x + 0.75), as shown in Supplementary Fig. 2 . To derive this equation, the genome sizes for the species in the reference datasets were retrieved from the NCBI genome database. Since some of the genome sizes were unavailable, for species with missing genome sizes an average of the available genome sizes of closely related species was taken instead. More specifically, a taxonomic lineage of the relevant subset of the NCBI database was first obtained by submitting the taxa IDs using the NCBI taxtastic tool 68 . Average genome sizes were then calculated by utilising the parent ID and taxa ID columns and the known genome sizes of the lowest common ancestor. The 18S datasets were normalised by assigning copy numbers derived from these genome sizes using the regression equation. The files were further normalised by applying the hits per million reads method. 18S rDNA file preparation In our 18S rDNA dataset, we had taxonomic assignments from the eukaryote node down to the species nodes. We employed Metagenome Analyzer (MEGAN) (5.10.3) 80 to extract specific taxonomic levels. In MEGAN, we extracted the classifications at the taxonomic rank of species.
This consisted of a file being generated for each station that contained the species names and their assigned abundances. The files were further normalised to hits per million. In MEGAN, we also extracted the leaves of the taxonomy tree at the rank of class and above, excluding assignments to the eukaryote node. Firstly, a file was generated for each station that contained all assignments to the class nodes, with any assignments under their respective lineages down to species summed up under the individual class node. Secondly, we included nodes that were not highlighted at the class taxonomic level on the leaves of the tree in MEGAN. These leaves were not highlighted because, in the NCBI taxonomy, there are species that do not have a taxonomy designation at every taxonomic level. We took the nodes that were not highlighted on the leaves of the tree, summed them together within their respective lineages and placed them under a new name. For example, under the phylum Rhizaria, on the leaves of the tree, there are Cercozoa, Gromiidae and unclassified Rhizaria, which are not highlighted. Their abundance was summed together and renamed Nc. Rhizaria, "Nc." standing for "No class". The abundances assigned to Rhizaria itself were not included in this calculation. The leaves of the tree made up 34% of the total 18S rDNA dataset. The internal nodes between the leaves of the tree at the taxonomic rank of class and the eukaryote node were given a "U." in front of their names, "U." standing for "Unknown". This was done to highlight that, while they are of course associated with the lower lineages, they are considered separate, as assignments to those nodes could not be resolved any lower. The internal nodes made up 29% of the total 18S rDNA dataset. The abundance assigned to the eukaryote node was excluded from our analysis, as these sequences could not be classified any lower; this comprised 37% of the 18S rDNA dataset. A file was generated for each station that contained the class nodes, "Nc." nodes and "U." nodes with their respective abundances. The files were further normalised to hits per million. Throughout the paper we refer to the analysis of these files at the taxonomic rank of class. 16S rDNA analysis JGI performed the classification analysis on the 16S rDNA dataset 81 , 82 . JGI's 16S rDNA classification pipeline (iTagger v2.1) consists of firstly removing samples with fewer than 1000 sequences. The remaining samples and the de-duplicated identical sequences from the pre-processing step are then combined and their sequences organised by decreasing abundance. The sequences are divided based on whether they form a cluster centroid with a minimum size of at least 3 copies; the low-abundance sequences are put aside and not used for clustering. USEARCH's 83 cluster otus command is employed to incrementally cluster the clusterable sequences. This begins at 99% identity, and the radius is increased by 1% for each iteration until an OTU clustering identity of 97% is reached. At each step, the sequences are sorted by decreasing abundance. Once clustering is complete, USEARCH's usearch global is used to map the low-abundance sequences to the cluster centroids. These are added to the OTU counts if they fall within the prescribed percent identity threshold; otherwise they are discarded.
USEARCH's UTAX, along with the SILVA database, is used to evaluate the clustered centroid sequences. The predicted taxonomic classifications are then filtered with a cutoff of 0.5. Any chloroplast sequences identified are removed. The final accepted OTUs and read counts for each sample are then placed in a taxonomic classification file. Normalisation of 16S rDNA gene copy number In order to normalise the 16S copy number, the 16S copy numbers for the species in the dataset were retrieved from the Ribosomal RNA Operon Copy Number Database (rrnDB) 84 . The rrnDB database version 5.3 consisted at the time of 3021 bacterial entries. Firstly, since multiple entries per species occur in the rrnDB database due to the presence of different strains, we obtained an average copy number for each species in the rrnDB database, which resulted in 2876 species entries. The higher taxonomic levels for the rrnDB species needed to be established so that we could calculate their average copy numbers. For a description of the lineages of all species back to the root in the rrnDB database, we submitted the species names for each entry to extract a subset of the NCBI taxonomy with the NCBI taxtastic tool 68 , thus producing a Taxtastic file. The Taxtastic file based on species from the rrnDB database was used to calculate the average copy number for higher taxonomic levels from the known copy numbers at species level, with the assistance of the parent ID and taxa ID layout in the Taxtastic file. A Taxtastic file based on 16S rDNA species from our dataset was generated, and we assigned our 16S species entries a copy number from species to root from the prepared average copy number rrnDB Taxtastic file. Not all copy numbers in the 16S rDNA dataset were known. We therefore took the average of closely related species from the taxonomic level above and used that as the copy number for those missing from our dataset. The 16S dataset was normalised by dividing by the assigned copy number. The files were further normalised by applying the hits per million reads method. 16S rDNA file preparation In our 16S rDNA dataset, we had taxonomic assignments from the bacteria node down to the genus nodes. We extracted the classifications at the taxonomic rank of genus. This consisted of a file being generated for each station that contained the genus names and their assigned abundances. The files were further normalised by applying the hits per million reads method. We extracted the leaves of the tree, which included class nodes and "Nc." nodes with their respective abundances; this step accounted for 94% of the 16S rDNA dataset. We also extracted the internal nodes and placed "U." in front of their names; these accounted for 3% of the 16S rDNA dataset. The abundance assigned to the bacteria node was excluded from our analysis and comprised 3% of the 16S rDNA dataset. We generated a file for each station that contained the class nodes, "Nc." nodes and "U." nodes with their respective abundances. The files were further normalised by applying the hits per million reads method. Throughout the paper we refer to the analysis of these files at the taxonomic rank of class. Statistical analysis Alpha diversity (Shannon index) in relation to environmental covariates The Shannon index H' 85 was used to calculate abundance-weighted richness per station. The Shannon index was used over the Simpson index as the latter is heavily weighted towards the most abundant orders.
The Shannon index was calculated based on the following equation: $$H^{\prime} = -\sum_{i=1}^{S} p_i \ln p_i$$ where S is the number of taxa and p_i is the proportional abundance of taxon i. Environmental covariates were related to the Shannon index (H') by fitting generalised linear models. Step-by-step backwards selection of covariates was used for model building, removing non-significant covariates until the remaining covariates were significant at a p-value < 0.05. Beta diversity in relation to environmental factors was calculated across the transect based on a Hellinger-transformed class abundance matrix using the vegdist function of the vegan package 86 . The Bray-Curtis dissimilarity index 87 was used as a measure of beta diversity and was calculated based on the following equation: $$BC_{ij}=\frac{\sum_{k}|n_{ik}-n_{jk}|}{\sum_{k}(n_{ik}+n_{jk})}$$ where n_ik is the abundance of taxon k at station i. Evenness and occupancy Abundance, station evenness and occupancy plots were produced for each 18S rDNA class level ( n = 54) and 16S rDNA class level ( n = 57) (Supplementary Fig. 5 , Supplementary Table 3 ). The x-axis represents the number of times that class taxonomy occurs across the stations. The y-axis represents the evenness of that class taxonomy across the stations it occurs in. This was calculated using a dispersion index, which is a variant of Pielou's evenness J' 88 and is based on Shannon's H' 85 , 89 . Each circle represents a class taxonomy abundance. The size of each circle was set by replacing the area of the circle, which represented the total abundance for that class, with the square root of the abundance divided by pi. Canonical correspondence analyses (CCAs) The R package vegan 90 was employed to perform a canonical correspondence analysis (CCA) on each of the 18S, 16S and metatranscriptome Pfam datasets against the individual environmental variables. The environmental data consisted of temperature, salinity, nitrate/nitrite, phosphate and silicate (Supplementary Fig. 6 ). Network analysis A network analysis was performed using the R package Weighted Gene Co-Expression Network Analysis (WGCNA) 91 . The first analysis was performed on samples of combined prokaryotes at the taxonomic rank of genus and eukaryotes at the taxonomic rank of species, to describe networks derived from their log10-scaled abundances. The prokaryote and eukaryote normalised files were combined for each station. A signed adjacency measure for each lineage was determined by raising the absolute value of the Pearson correlation coefficient to the power of 11. A topological overlap measure (TOM) was calculated from the resulting adjacency matrix. Hierarchical clustering was carried out on the TOM measure, which resulted in two networks being identified (Fig. 4 ). The second analysis was performed on samples of the metatranscriptome Pfam dataset to describe networks derived from their log10-scaled gene counts. A signed adjacency measure for each lineage was determined by raising the absolute value of the Pearson correlation coefficient to the power of 12. A topological overlap measure (TOM) was calculated from the resulting adjacency matrix. Hierarchical clustering was carried out on the TOM measure, which resulted in two networks being identified (Fig. 2 , Supplementary Table 5 ). When incorporating environmental data, latitude values were redefined so that the North Pole is 0°, the Equator is 90° and the South Pole is 180°. Unaltered environmental data can be found in Supplementary Table 1 .
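To make these calculations concrete, the following minimal R sketch reproduces the three steps described above (Shannon index, Bray-Curtis dissimilarity on a Hellinger-transformed matrix, and a correlation-based adjacency with topological overlap) on a toy station-by-taxon abundance table. The matrix and all object names are hypothetical placeholders rather than the cruise data; the soft-threshold power of 11 follows the first network analysis described above.

library(vegan)   # diversity(), decostand(), vegdist()
library(WGCNA)   # TOMsimilarity()

# Hypothetical station-by-taxon abundance matrix (rows = stations)
set.seed(1)
abund <- matrix(rpois(20 * 8, lambda = 5), nrow = 20,
                dimnames = list(paste0("st", 1:20), paste0("taxon", 1:8)))

# Alpha diversity: Shannon index H' per station
H <- diversity(abund, index = "shannon")

# Beta diversity: Bray-Curtis dissimilarity on the Hellinger-transformed matrix
bc <- vegdist(decostand(abund, method = "hellinger"), method = "bray")

# Co-occurrence network: |Pearson correlation| of log10-scaled abundances
# raised to the soft-threshold power, then topological overlap and clustering
adj <- abs(cor(log10(abund + 1)))^11
tom <- TOMsimilarity(adj)
modules <- cutree(hclust(as.dist(1 - tom), method = "average"), k = 2)

Cutting the dendrogram at k = 2 simply mirrors the two networks reported above; on real data the number of modules would emerge from the clustering itself rather than being fixed in advance.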
Beta diversity break-point analysis The break-point analysis is based on the methodology from ref. 92 . The beta diversity index used in the break-point analyses is the Sørensen index. A breakpoint was determined and plotted for each of the Pfam protein families, 18S rDNA and 16S rDNA datasets. Breakpoints in the 18S and 16S rDNA datasets were investigated within the temperature range of 7 °C to 29.02 °C. When incorporating environmental data, latitude values were redefined so that the North Pole is 0°, the Equator is 90° and the South Pole is 180°. Unaltered environmental data can be found in Supplementary Table 1 . The break-point analysis was performed using piecewise regression in R (a minimal sketch of the procedure is given at the end of the Methods). This was calculated by firstly producing a presence–absence matrix for each dataset. A multiple-site dissimilarity was computed on the presence–absence matrix with beta.pair, a function from the betapart R package, with the dissimilarity index set to Sørensen, thus producing a distance object called beta.sor 34 . Outliers were identified with bagplot, a function from the aplpack R package, and removed from the analyses. The remaining values were then plotted against the environmental variable (temperature or altered latitude) and searched for possible breakpoints, that is, the breakpoint with the lowest mean squared error. For the 18S rDNA and 16S rDNA datasets, a number of samples in the North Atlantic Ocean did not pass quality control before sequencing. Because of this, there were gaps in the 18S rDNA and 16S rDNA break-point plots in the North Atlantic Ocean region. To investigate the effects of the missing samples, four model scenarios were produced to mimic them. The first model scenario involved filling in beta diversity values for the missing North Atlantic Ocean stations with values from the closest stations by latitude. This resulted in breakpoints for the 18S and 16S rDNA of 20.66 °C and 9.49 °C, respectively. The second model scenario involved filling in beta diversity values for the missing North Atlantic Ocean stations with values from the Arctic Ocean. This resulted in breakpoints for the 18S and 16S rDNA of 14.4 °C and 12.07 °C, respectively. The third model scenario involved filling in beta diversity values for the missing North Atlantic Ocean stations with values from the South Atlantic Ocean. This resulted in breakpoints for the 18S and 16S rDNA of 9.49 °C and 12.22 °C, respectively. The fourth model scenario involved filling in beta diversity values for the missing North Atlantic Ocean stations with values from both the Arctic Ocean and the South Atlantic Ocean. This resulted in breakpoints for the 18S and 16S rDNA of 14.4 °C and 12.22 °C, respectively. A break-point analysis was performed for the Pfam protein families beta diversity against temperature with the North Atlantic Ocean samples (Stratiphyt-II) removed, to test whether key results remain unchanged (Supplementary Fig. 10e ). A breakpoint of 18.2 °C was determined with a p-value of 1.65e−11. Hence, the main result (Fig. 5A ) remains unchanged. IPCC-based modelling of geographical shifts in beta-diversity breakpoints across the North Atlantic To assess where these boundaries are, we began with the HadISST dataset 93 , taking the 1961–1990 climatology (Fig. 6 ). For estimates of changes over the 21st century, we used the RCP 8.5 HadGEM2-ES CMIP5 experiment 37 . A historical HadGEM2-ES experiment was also run for CMIP5, which we used to bias-correct the projected temperatures.
This was achieved by determining the differences between the 1961–1990 HadISST and HadGEM2-ES temperatures for each grid box and adding them to the projections. Grid boxes that contain sea ice in the climatology were excluded from this analysis. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability iTAG rDNA Data: . Eukaryotic metatranscriptome data: . ( ).
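As a concrete illustration of the break-point search and the bias correction described in the Methods, the following R sketch computes pairwise Sørensen dissimilarities with betapart, grid-searches for the temperature breakpoint with the lowest mean squared error under a two-segment linear model, and applies the delta-change correction to a projected temperature. All data objects are hypothetical placeholders, not the cruise data; the betapart call follows the package's documented interface.

library(betapart)   # beta.pair() for Sorensen-family dissimilarities

# Hypothetical presence-absence matrix (stations x taxa) and station temperatures
set.seed(42)
pa   <- matrix(rbinom(30 * 12, 1, 0.5), nrow = 30)
temp <- sort(runif(30, 0, 29))

# Pairwise Sorensen dissimilarities; summarise each station by its mean
# dissimilarity to all other stations as a simple beta-diversity response
sor <- as.matrix(beta.pair(pa, index.family = "sorensen")$beta.sor)
y   <- rowMeans(sor)

# Piecewise regression: fit a two-segment model at every candidate breakpoint
# and keep the breakpoint minimising the mean squared error of the residuals
candidates <- seq(min(temp) + 1, max(temp) - 1, by = 0.1)
mse <- sapply(candidates, function(bp) {
  mean(resid(lm(y ~ temp + pmax(temp - bp, 0)))^2)
})
breakpoint <- candidates[which.min(mse)]

# Delta-change bias correction for one grid box: add the observed-minus-modelled
# 1961-1990 climatological difference to the projected temperature (toy values)
hadisst_clim      <- 14.8
hadgem2_hist_clim <- 15.3
proj_temp         <- 16.2
proj_corrected    <- proj_temp + (hadisst_clim - hadgem2_hist_clim)

In the actual analysis, outliers were first removed with aplpack's bagplot and the dissimilarities were those described above; the mean-dissimilarity response used here only keeps the sketch self-contained.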
Global warming is likely to cause abrupt changes to important algal communities because of shifting biodiversity 'break point' boundaries in the oceans—according to research from the University of East Anglia and the Earlham Institute. A new study, published today in the journal Nature Communications, finds that as climate change extends the warm hemisphere, these boundaries are predicted to shift pole-wards over the next 100 years. Instead of a gradual change in microbial diversity due to warming, the researchers suggest it will happen more abruptly at what they call 'break points' - wherever the upper ocean temperature is around 15 degrees on an annual average, separating cold and warm waters. The UK is one of the areas most likely to be severely affected, and more suddenly than previously thought. But the team say that the changes could be stopped if we act swiftly to halt climate change. Prof Thomas Mock, from UEA's School of Environmental Sciences, said: "Algae are essential in maintaining a healthy ecosystem to balance ocean life. By absorbing energy from sunlight, carbon dioxide and water, they produce organic compounds for marine life to live off. "These organisms underpin some of the largest food webs on Earth and drive global biogeochemical cycles. "Accounting for at least 20 percent of annual global carbon fixation, the algae that our marine systems, fisheries and ocean biodiversity depend on could be significantly impacted by temperature changes. As average sea surface temperatures increase due to climate change, Thomas Mock has seen shifting aquatic life — for example, this European sea bass — off England's southeast coast. European sea bass have a temperature optimum range of around 50 to 77 degrees Fahrenheit, while cod, iconic for its popularity at UK fish-and-chip shops, prefer to live between about 34 to 59 degrees Fahrenheit. Credit: Thomas Mock "We wanted to better understand how the climate crisis is impacting algae worldwide from the Arctic to the Antarctic." The research was led by scientists at UEA in collaboration with the US Department of Energy (DOE) Joint Genome Institute (JGI, U.S.) and the Earlham Institute (UK). The major study was conducted over more than 10 years by an international team of 32 researchers, from institutions including the University of Exeter in the UK and the Alfred Wegener Institute for Polar and Marine Research in Germany. It involved the first pole-to-pole analysis of how algae (eukaryotic phytoplankton) and their expressed genes are geographically distributed in the oceans. Thus, the team studied how their gene activity is changing due to environmental conditions in the upper ocean from pole to pole. As the upper ocean is already experiencing significant warming due to rising CO2 levels, the researchers estimated how the distribution of these algal communities might change based on a model from the Intergovernmental Panel on Climate Change (IPCC) 5th Assessment Report. The algal communities' diversity and gene activity are shaped by interactions with microscopic single-celled organisms, or prokaryotes, as part of complex microbiomes. The researchers found that these global communities can be split into two main clusters—organisms that mainly live in cold polar and warm non-polar waters. Scientists sampling under-ice phytoplankton communities utilising a 'mummy chair.' Under-ice communities are vital for krill and other under-ice feeding organisms, for example.
Credit: Katrin Schmidt The geographic patterns are best explained by differences in the physical structure of the water in the upper ocean (for example, seasonally mixed cold versus permanently stratified warm water) caused by latitudinal gradients of temperature. The organisms were analyzed through nucleic acid extraction and DNA and mRNA sequencing of samples collected during four research cruises in the Arctic Ocean, North Atlantic Ocean, South Atlantic Ocean and Southern Ocean. Prof Mock said: "Significant international efforts have provided insights into what drives the diversity of these organisms and their global biogeography in the global ocean; however, there is still limited understanding of environmental conditions responsible for differences between local species communities on a large scale from pole to pole. "Our results provide new insights into how changing environmental conditions correlate with biodiversity changes subject to large-scale environmental fluctuation and disturbances. This knowledge is essential for predicting the consequences of global warming and therefore may guide environmental management. "We can expect the marine systems around the UK and other countries on this latitude to be severely affected, and more suddenly than previously thought. "The largest ecosystem change will occur when marine microalgal communities and their associated bacteria around the UK are replaced by their warm-water counterparts. "This is expected to be caused by the pole-ward shifting ecosystem boundary or 'biodiversity break point' separating both communities. For this to take place, the annual average upper ocean temperature needs to become warmer than 15°C. Colouring the water, the alga Phaeocystis blooms off the side of the sampling vessel, Polarstern, in the temperate region of the North Atlantic. Credit: Katrin Schmidt "It's not irreversible though, if we can stop global warming," he added. Co-author Dr. Richard Leggett, at the Earlham Institute, added: "This study also shows what an important role advances in DNA sequencing technologies have played in understanding ocean-based ecosystems and, in doing so, helping researchers shed light on and grapple with some of the biggest environmental challenges facing the planet." The work was led by two former Ph.D. students from UEA's Schools of Environmental Sciences and Computing Sciences, Dr. Kara Martin (also based at the Earlham Institute) and Dr. Katrin Schmidt. Dr. Martin said: "These results suggest that the most important ecological boundary in the upper ocean separates polar from non-polar algal microbiomes in both hemispheres, which not only alters the spatial scaling of algal microbiomes but also shifts pole-wards due to global warming. "We predict that 'break points' of microbial diversity will move markedly pole-wards due to warming—particularly around the British Isles—with abrupt shifts in algal microbiomes caused by human-induced climate change. "This has been a wonderful experience and an incredible opportunity to work with a magnificent team. Together, we analyzed an amazing dataset which expands the latitude of our microbial ocean research, enabling us to gain insights to our changing ocean from pole to pole." Dr. Schmidt said: "During our research cruises we already noticed quite different algal communities from warm to cold waters.
This initial finding was supported by our results suggesting that the most important ecological boundary in the upper ocean separates polar from non-polar algal microbiomes in both hemispheres. And more importantly, this boundary not only alters the spatial scaling of algal microbiomes but also shifts pole-wards due to global warming." A curious polar bear near Greenland checks out the icebreaker Polarstern. Polar bears, which feed on seals, are part of the Arctic Ocean food web that climate change threatens. Credit: Katrin Schmidt Prof Tim Lenton, from the University of Exeter, said: "As the ocean warms up with climate change this century we predict that the 'break point' between cold, polar microalgal communities and warm, non-polar microalgal communities will move northwards through the seas around the British Isles. "As microalgae are key to the base of the food chain we can expect major changes in the rest of the marine ecosystem, with implications for fisheries, as well as marine conservation. "The 'biological carbon pump' whereby the ocean takes up carbon dioxide from the atmosphere will change with this shift in microalgal communities—most likely becoming less effective—which could in turn feed back to amplify global warming." Sequencing was done at the JGI as part of the Community Science Program project Sea of Change: Eukaryotic Phytoplankton Communities in the Arctic Ocean. "The biogeographic differentiation of algal microbiomes in the upper ocean from pole to pole" was published in Nature Communications on September 16, 2021.
10.1038/s41467-021-25646-9
Biology
How people power can track alien species: study
P. M. J. Brown et al, Spread of a model invasive alien species, the harlequin ladybird Harmonia axyridis in Britain and Ireland, Scientific Data (2018). DOI: 10.1038/sdata.2018.239
http://dx.doi.org/10.1038/sdata.2018.239
https://phys.org/news/2018-10-people-power-track-alien-species.html
Abstract Invasive alien species are widely recognized as one of the main threats to global biodiversity. Rapid flow of information on the occurrence of invasive alien species is critical to underpin effective action. Citizen science, i.e. the involvement of volunteers in science, provides an opportunity to improve the information available on invasive alien species. Here we describe the dataset created via a citizen science approach to track the spread of a well-studied invasive alien species, the harlequin ladybird Harmonia axyridis (Coleoptera: Coccinellidae), in Britain and Ireland. This dataset comprises 48 510 verified and validated spatio-temporal records of the occurrence of H. axyridis in Britain and Ireland, from first arrival in 2003 to the end of 2016. A clear and rapid spread of the species within Britain and Ireland is evident. A major reuse value of the dataset is in modelling the spread of an invasive species and applying this to other potential invasive alien species in order to predict and prevent their further spread. Design Type(s): database creation objective • citizen science design • biodiversity assessment objective. Measurement Type(s): population data. Technology Type(s): longitudinal data collection method. Factor Type(s): temporal_interval • body marking • developmental stage. Sample Characteristic(s): Harmonia axyridis • British Isles • habitat. A machine-accessible metadata file describing the reported data is available (ISA-Tab format). Background & Summary The invasion process for an alien species involves various stages, notably introduction, establishment, increase in abundance and geographic spread 1 . An alien species that spreads and has negative effects (which may be ecological, economic or social) is termed invasive 2 , 3 . Invasive alien species are widely recognized as one of the main threats to global biodiversity 4 – 6 . There are a number of international agreements which recognize the threat posed by invasive alien species, which are designated as a priority within the Convention on Biological Diversity Aichi biodiversity target 9 ( ) and are relevant to many of the Sustainable Development Goals ( ). An EU Regulation on invasive alien species came into force on 1 January 2015 ( ), and subsequently a list of invasive alien species of EU concern was adopted, for which member states are required to take action to eradicate, manage or prevent entry. Rapid flow of information on the occurrence of invasive alien species is critical to underpin effective action. There have been few attempts to monitor the spread of invasive alien species systematically from the onset of the invasion process. Citizen science, i.e. the involvement of volunteers in science, provides an opportunity to improve the information available on invasive alien species 7 . Here we describe the dataset created via a citizen science approach to track the spread of a well-studied invasive alien species, the harlequin ladybird Harmonia axyridis (Coleoptera: Coccinellidae), in Britain and Ireland. This species was detected very early in the invasion process, and a citizen science project was initiated and widely promoted to maximize the opportunity to gather data from the public across Britain and Ireland. Harmonia axyridis was introduced between approximately 1982 and 2003 to at least 13 European countries 8 as a biological control agent. It was mainly introduced to control aphids that are pests of a range of field and glasshouse crops.
From the early 2000s it subsequently spread to many other European countries, including Britain and Ireland. It is native to Asia (including China, Japan, Mongolia and Russia) 9 and was also introduced in North and South America and Africa 10 . Harmonia axyridis was introduced unintentionally to Britain from mainland Europe by a number of pathways: some were transported with produce such as cut flowers, fruit and vegetables; others arrived through natural dispersal (flight) from other invaded regions 11 . To a lesser extent H. axyridis also arrived from North America 12 . The major pathways of spread to Ireland were probably natural dispersal (from Britain) and arrival with produce. Harmonia axyridis is a eurytopic (generalist) species and may be found on deciduous or coniferous trees, arable and horticultural crops and herbaceous vegetation in a wide range of habitats. It is particularly prevalent in urban and suburban localities (e.g. parks, gardens, and in or on buildings) 13 . Citizen science approaches to collecting species data are becoming increasingly popular and respected 14 . Advances in communication and digital technologies (e.g. online recording via websites and smartphone applications; digital photography) have increasingly enabled scientists to collect and verify large datasets of species information 15 . For a few species groups, including ladybirds, verification to species is possible if a reasonably good photograph of the animal is available. In late 2004, shortly after the first H. axyridis ladybird record was reported, funding was acquired from Defra and the National Biodiversity Network (NBN) to set up and trial an online recording scheme for ladybirds, and H. axyridis in particular. Thus, the online Harlequin Ladybird Survey and UK Ladybird Survey were launched in March 2005. The surveys have been very successful in gaining records from the public since 2005. Innovations such as the launch of a free smartphone application (iRecord Ladybirds) in 2013 helped to maintain the supply of records. The dataset here comprises species records of H. axyridis in various life stages (larva, pupa or adult) from Britain and Ireland over the period 2003 to 2016. A major reuse value of the dataset is in modelling the spread of an invasive species and applying this to other potential invasive alien species in order to predict and prevent their further spread. The time period of the study captures the initial fast spread of H. axyridis (principally from 2004 to 2009) plus a further substantial period (2010 to 2016) in which the distribution of the species altered relatively little, despite many further records being received. Methods This dataset ( Data Citation 1 ) comprises 48 510 spatio-temporal records of the occurrence of H. axyridis in Britain and Ireland, from first arrival in 2003, to the end of 2016. For its type it is thus an unusually substantial dataset. Whilst the records were collated and verified by the survey organizers, the records themselves were provided by members of the public in Britain and Ireland. Uptake to the Harlequin Ladybird Survey was undoubtedly assisted by the pre-existence of the Coccinellidae Recording Scheme (now the UK Ladybird Survey), supported by the Biological Records Centre (within NERC Centre for Ecology & Hydrology) 16 . Reflecting the general diversification of citizen science through innovative use of technology 17 , high levels of public access to the internet and digital photography enabled an online survey form to be established for H. 
axyridis in Britain and Ireland. The Harlequin Ladybird Survey ( ) was one of the first online wildlife surveys in Britain and Ireland. It was launched in March 2005 in response to the first report of H. axyridis in Britain, in September 2004 18 . The Harlequin Ladybird Survey benefited from high levels of media interest, and members of the public showed great willingness to look for H. axyridis and to register their sightings with the survey 13 . There are only three records from 2003 and no earlier records have been received, supporting the case that the earliest records in the dataset represent the onset of the invasion process for this species. Indeed, H. axyridis has a relatively high detectability (e.g. 19 ) and rapid reproductive rate, so is unlikely to have arrived unnoticed. Each record represents a verified sighting of H. axyridis on a given date (or range of dates) and comprises one or more individual ladybirds observed from one or more life stages (larva, pupa, adult). Records are from Britain (England, Wales and Scotland, including offshore islands), Ireland (both Northern Ireland and the Republic of Ireland), the Isle of Man and the Channel Islands (primarily Guernsey and Jersey), and are mainly from the period 2004 to 2016. The earliest record of H. axyridis in Britain was initially thought to be from 3 July 2004, but three earlier records (from 2003) were received retrospectively. The data records represent species presence and there are no absence data available. The majority of the records were received from members of the public via online recording forms ( Supplementary Figure 1 ) or via smartphone apps (iRecord Ladybirds or iRecord; Supplementary Figure 2 ), with some records (especially in earlier years) received by post. Other records, particularly from amateur expert 16 coleopterists and other naturalists, were received in spreadsheets. The spatial resolution of the records is variable. Many include an Ordnance Survey grid reference (converted to latitude and longitude), enabling resolution to 100 metres or less, but many others were derived at 1 km resolution from a UK postal code (UK Government Schemas and Standards). The option on the online recording form to enter the location via a UK postal code was provided to make the entry of records easier for members of the public unfamiliar with grid referencing systems. Whilst the resolution is thus reduced for these records, the reduction in user error (e.g. the problem of grid reference eastings and northings being transposed) is an advantage 20 . The postal code method was applicable for sightings of H. axyridis made within 200 metres of a specified postal code, so could not be used for a minority of records where the ladybird was seen in a remote semi-natural habitat. The spatial resolution of the records tended to increase over time, as the number of records received via the smartphone apps increased, and these records generally have GPS-generated latitudes and longitudes. Data Records Repository The dataset is freely available for download from the Environmental Information Data Centre (EIDC) catalogue ( Data Citation 1 ). The dataset is provided as a single tab-delimited text file, with each line representing a single record. Constituents of Species Records Each species record includes 19 fields ( Table 1 ). Table 1 The fields contained in each Harmonia axyridis species record in the database, with a descriptor for each.
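Since the dataset stores latitudes and longitudes converted from Ordnance Survey grid references, a minimal R sketch of that conversion is given below using the sf package; the coordinates shown are hypothetical examples, not records from the dataset.

library(sf)

# Hypothetical records with British National Grid coordinates in metres (EPSG:27700)
recs <- data.frame(id = 1:2,
                   easting  = c(530000, 440500),
                   northing = c(180000, 112300))

# Promote to spatial points in the British National Grid and reproject to WGS84
pts    <- st_as_sf(recs, coords = c("easting", "northing"), crs = 27700)
latlon <- st_transform(pts, crs = 4326)
st_coordinates(latlon)   # column X = longitude, column Y = latitude

Postal-code records would instead be assigned coordinates at 1 km resolution, as described above, before the same reprojection step.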
Figures and Tables The figures and tables here show a summary of the dataset, notably the number of verified H. axyridis records received by year ( Fig. 1 ), by month ( Fig. 2 ), by vice county ( Fig. 3 and Table 2 (available online only)) and the spread of H. axyridis in Britain, the Channel Islands and Ireland from 2003 to 2016 ( Fig. 4 ). Figure 1 The number of verified Harmonia axyridis records received for Britain, the Channel Islands and Ireland by year, from 2003 to 2016. Figure 2 The number of verified Harmonia axyridis records received for Britain, the Channel Islands and Ireland by month, from 2003 to 2016. Figure 3 The number of verified Harmonia axyridis records received for Britain, the Channel Islands and Ireland, split by vice county, from 2003 to 2016. Table 2 The number of Harmonia axyridis records in the dataset, listed by vice county and region (England, Wales, Isle of Man, Scotland, Channel Islands or Ireland). Figure 4 The spread and distribution in 10 km squares in Britain, the Channel Islands and Ireland of Harmonia axyridis from 2003 to 2016. NB where H. axyridis was recorded in a square in multiple time periods, the older time period overlays the newer one(s). Technical Validation Record Verification Verification of the records was made by the survey organizers (led by HER and PMJB but also including others) on receipt of either a photograph or ladybird specimen. The records received from amateur expert coleopterists and other naturalists are regarded as accurate (i.e. without the survey organizers seeing a photograph or specimen) and have been included in the dataset. Many further online records were received that remain unverified (i.e. no photograph or specimen was sent, or the photograph was of insufficient quality to enable identification) or were verified as another species. All such unverified or inaccurate records are excluded from this dataset. For discussion of these issues (partly relating to our dataset) see 21 . Verified records were regularly uploaded to the NBN Gateway (now the NBN Atlas). There the records could be viewed via online maps, which helped to encourage further recording. Recording Intensity Recording intensity by the public was not consistent over time and was influenced by media coverage, publicity events by the survey organizers, and other factors. The number of records in a period is also influenced by weather conditions and seasonality: the main peak in record numbers each year tended to be from late October to early November, the period in which H. axyridis generally moves to indoor overwintering sites (hence this is when many people first notice the species in their homes). There is also spatial variability in recording intensity: more records come from areas with high densities of people ( Fig. 3 ). Across Britain and Ireland there were a number of particularly active local groups or individuals which contributed hotspots of recorder activity, e.g. London. To many recorders, juvenile stages (especially pupae and early instar larvae) were less noticeable and more difficult to identify than the adult stage, thus limiting their recording. The possibility of a reporting bias towards sightings early in the season also exists (i.e. some recorders may have reported their first sighting of H. axyridis , but not subsequent sightings). In order to minimize this effect, the importance of recording multiple sightings was stressed to recorders.
The peaks in record numbers observed late in each year also suggest that any effect of this potential bias was minor. There is probably a further minor temporal bias towards recording on some days of the week (e.g. weekend days) more than others. Validation Checks In addition to the expert verification detailed above, each record has also undergone a series of validation checks designed to highlight other potential issues with the data. Checks were performed on the date information supplied with the record to ensure that the start and/or end dates supplied are in recognized formats, are valid dates, are in the past or present (i.e. no future dates) and, where both were supplied, that the start date is prior or equal to the end date. The location information was also checked to ensure that the supplied grid reference is in a recognized format, is a valid grid reference, and is from a 10 km and/or 1 km square that contains land. If other location fields were supplied with the grid reference (such as 10 km grid reference, vice county, tetrad or quadrant codes, etc.) they were cross-checked to ensure consistency; a minimal illustration of such checks is sketched below. Additional information How to cite this article : Brown, P. M. J. et al . Spread of a model invasive alien species, the harlequin ladybird Harmonia axyridis in Britain and Ireland. Sci. Data 5:180239 doi: 10.1038/sdata.2018.239 (2018).
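The sketch below illustrates the spirit of these record-level checks; the field handling and the grid-reference pattern are simplified assumptions, not the survey's actual code (Irish grid references, for example, use a one-letter prefix and are not covered by this pattern).

```python
import re
from datetime import date
from typing import List, Optional

# Simplified OSGB grid reference: two letters followed by an even number
# of digits (2, 4, 6, 8 or 10). This is an illustrative assumption only.
OSGB_GRID_REF = re.compile(r"^[A-Z]{2}(\d{2}|\d{4}|\d{6}|\d{8}|\d{10})$")

def validate_record(start: date, end: Optional[date], grid_ref: str) -> List[str]:
    """Return a list of problems found with one species record."""
    problems = []
    today = date.today()
    if start > today:
        problems.append("start date is in the future")
    if end is not None and end > today:
        problems.append("end date is in the future")
    if end is not None and start > end:
        problems.append("start date is after end date")
    if not OSGB_GRID_REF.match(grid_ref.replace(" ", "").upper()):
        problems.append("grid reference not in a recognized format")
    return problems

print(validate_record(date(2010, 10, 30), None, "TL 45 58"))  # -> []
```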
New research published in the Nature journal Scientific Data shows how the public can play a vital role in helping to track invasive species. The journal has published data from the UK Ladybird Survey showing how the harlequin ladybird, a species native to Asia, has spread across the country. The harlequin ladybird was introduced to mainland Europe in the 1980s to control aphids. It was first reported in the UK in 2003 and is now outcompeting a number of smaller native ladybird species. The new open access study maps 48,510 observations of the harlequin ladybird, submitted by the general public, spanning over a decade. The observations show that, after spreading at over 60 miles per year during the early stage of the invasion, harlequins are now widespread throughout England and Wales and are increasingly being reported in the south of Scotland. There have been few attempts to monitor the spread of invasive alien species systematically from the onset of the invasion process, but the model used by the online UK Ladybird Survey, led by academics from Anglia Ruskin University and the Centre for Ecology & Hydrology, shows the important role that citizen science can play. Rapid flow of information about the occurrence of invasive species is critical for taking effective action, and the citizen science approach developed through the UK Ladybird Survey is already being used for surveillance of other invasive non-native species, including the Asian hornet. Lead author Dr. Peter Brown, Senior Lecturer in Zoology at Anglia Ruskin University, said: "All these observations have made major contributions to our understanding of the ecology of the harlequin ladybird in the UK. We are now excited to see how others might use the model and the patterns of data to explore invasions by other species." Co-lead author Professor Helen Roy, from the Centre for Ecology & Hydrology, said: "It has been incredible to see the way in which so many people have got involved in tracking this invasion—it is a truly collaborative project. We have been able to answer many important ecological questions using this vast dataset. This would not have been possible without these inspiring citizen science contributions."
10.1038/sdata.2018.239
Biology
DNA structure influences the function of transcription factors
Stefanie Schöne et al, Sequences flanking the core-binding site modulate glucocorticoid receptor structure and activity, Nature Communications (2016). DOI: 10.1038/ncomms12621 Journal information: Nature Communications
http://dx.doi.org/10.1038/ncomms12621
https://phys.org/news/2016-09-dna-function-transcription-factors.html
Abstract The glucocorticoid receptor (GR) binds as a homodimer to genomic response elements, which have particular sequence and shape characteristics. Here we show that the nucleotides directly flanking the core-binding site differ depending on the strength of GR-dependent activation of nearby genes. Our study indicates that these flanking nucleotides change the three-dimensional structure of the DNA-binding site, the DNA-binding domain of GR and the quaternary structure of the dimeric complex. Functional studies in a defined genomic context show that sequence-induced changes in GR activity cannot be explained by differences in GR occupancy. Rather, mutating the dimerization interface mitigates DNA-induced changes in both activity and structure, arguing for a role of DNA-induced structural changes in modulating GR activity. Together, our study shows that the DNA sequence identity of genomic binding sites modulates GR activity downstream of binding, which may play a role in achieving regulatory specificity towards individual target genes. Introduction Cells can exploit a variety of strategies to ensure that genes are expressed at a specific and well-defined level, including tight control of the production process of transcripts. The transcription of genes is controlled by the coordinated action of transcription factors (TFs), which bind to cis-regulatory elements to integrate a combination of inputs that specify where and when a gene is expressed and how much gene product is synthesized 1 . Signals influencing the level of transcriptional output include the sequence composition of cis-regulatory elements, which can, for example, direct the assembly of distinct regulatory complexes (reviewed in refs 2 , 3 ). Other mechanisms that influence the transcriptional output of individual genes include the distance of regulatory elements to the transcriptional start site (TSS) of genes 4 , the chromatin context in which regulatory elements are embedded 5 , DNA methylation 6 , 7 and post-translational modifications of proteins 1 . For the glucocorticoid receptor (GR), a member of the steroid hormone receptor family, the sequence of its DNA-binding site is known to modulate the receptor’s activity. Some studies suggest that the sequence of the GR-binding sequence (GBS) might influence the direction of regulation, that is, whether GR will activate or repress transcription 8 , 9 , 10 , 11 . Furthermore, the magnitude of transcriptional activation by GR depends on the exact sequence composition of the GBS, which consists of inverted repeats of two 6-base-pair (bp) half-sites separated by a 3-bp spacer 11 . Affinity for specific GBSs can explain some, but not all, of the modulation of GR activity by the sequence composition of the GBSs 12 . GR activity can also be modulated by DNA shape, which can serve as an allosteric ligand that fine-tunes the structure and activity of GR without apparent changes in DNA-binding affinity 13 . GR can ‘read’ the shape of DNA through non-specific DNA contacts with the phosphate backbone in the spacer region and at other positions within each half-site 11 , 13 . In addition, GR contacts the minor groove just outside the core 15-bp GBS 11 . How the DNA-induced structural changes in the associated protein result in different transcriptional outputs is largely unknown, but this process requires an intact dimerization interface and may involve sequence-specific cooperation with GR cofactors 11 , 13 .
Here we further investigated this question and uncovered that the 2 bp flanking the GBS, which are involved in modifying the shape of the DNA target, influence transcriptional output levels. We first studied whether GBS variants can modulate GR activity in a chromosomal context and found that GBS variants can indeed modulate GR activity when integrated at a defined genomic locus. Interestingly, this modulation appears to occur downstream of GR binding, as the differences in transcriptional responses cannot be explained by differences in occupancy levels based on chromatin immunoprecipitation (ChIP) experiments. Furthermore, we analysed genome-wide data on GR binding and gene regulation and identified differences in the sequence composition between GBSs associated with genes with strong and those with weak transcriptional responses to GR activation. Using a combination of experiments with atomic resolution and functional studies, we found that the base pairs directly flanking the core 15-bp GBS modulate GR activity and induce structural changes in both the DNA and the associated DNA-binding domain of GR. Together, our studies suggest that modulation of GR activity and structure by GBS variation at positions directly adjacent to the core recognition sequence plays a role in fine-tuning the expression of endogenous target genes. Results Genomic GR-binding site sequence affects GR activity Previous studies relied on transiently transfected reporters to show that GBS composition can modulate GR activity 11 , 13 . To determine whether GBS variants can also influence GR activity in a chromosomal context, we used zinc finger nucleases (ZFN) to generate isogenic cell lines with integrated GBS reporters 14 . The GBS reporters consist of a GBS variant upstream of a minimal promoter driving expression of a luciferase reporter gene ( Fig. 1a ). Single-cell-derived clonal cell lines with integrated reporters were isolated by fluorescence-activated cell sorting (FACS) and genotyped for correct integration at the AAVS1 locus ( Supplementary Fig. 1A ). Consistent with our expectation, no induction by dexamethasone, a synthetic glucocorticoid hormone, was observed for the reporter lacking a GBS ( Fig. 1b ). For reporters with a single GBS, transcriptional activation was observed with sequence-specific activities ranging from ∼ 17-fold for the Cgt to ∼ 9-fold for the GILZ and ∼ 2-fold for the SGK2 GBS ( Fig. 1b ). Notably, activation of the endogenous GR target gene TSC22D3 was comparable for all clonal lines ( Supplementary Fig. 1B ), arguing that the GBS-specific activities are not a simple consequence of clonal variation in GR activity. Figure 1: GBS activity and binding in a genomic context. ( a ) Cartoon depicting donor design, GBS sequence and the genotype at the AAVS1 locus after integration of the GBS reporters. Nucleotides that diverge from the Gilz sequence are highlighted in red for the Cgt and Sgk2 GBSs, respectively. ( b ) Top: transcriptional activation of the integrated luciferase reporters by GBS variants. Clonal lines with integrated reporters as indicated were treated for 8 h with 1 μM dexamethasone (dex) or 0.1% ethanol as vehicle control. Fold induction of the luciferase reporter gene (dex/etoh) was determined by qPCR. Averages±s.e.m. are shown ( n =3). Bottom: GR binding to GBS reporter variants was quantified by chromatin immunoprecipitation followed by qPCR. Average fold enrichment per reporter variant on dex treatment (1 μM dex, 1.5 h), relative to ethanol vehicle control±s.e.m.
is shown for at least three clonal lines with reporter integration at the desired locus. To assess whether the GBS-specific transcriptional activities could be explained by differences in GR occupancy, we compared GR recruitment to the GBS variants by ChIP. For all clonal lines, a similar level of hormone-dependent GR recruitment was observed for the endogenous FKBP5 locus, indicating that the ChIP efficiency was comparable between our clonal lines ( Supplementary Fig. 1C ). As expected, the integrated reporter lacking a GBS showed no GR binding, whereas GR was recruited in the presence of a GBS ( Fig. 1b ). However, no clear correlation between the level of transcriptional activity and GR recruitment was observed. For instance, the GILZ GBS, which showed an intermediate transcriptional activity, showed the highest occupancy, whereas recruitment was comparable for the GBSs with the highest (Cgt) and lowest (Sgk2) activities ( Fig. 1b ). Together these data show that GBS nucleotide variation can modulate GR activity in a chromosomal context. Furthermore, this modulation appears to occur downstream of recruitment, consistent with the idea that DNA can change the structure and activity of GR. Genome-wide computational analysis of GBS variants The experiments with integrated GBS reporters showed that GBS variants can modulate the activity of GR towards target genes in a chromosomal context. To assess whether GBS variants may indeed play a role in fine-tuning the activity of GR towards individual endogenous target genes, we analysed genomic data to see if the level of GR activity correlates with the presence of specific GBS variants near genes. We first grouped genes regulated by GR in U2OS cells 15 , a human osteosarcoma cell line, into strong responders (top 20% with greatest fold induction on dexamethasone treatment, 290 genes) and a control group of weak responders (genes with significant changes in expression, log2-fold change <0.72, 688 genes) ( Fig. 2a ). Next, we associated GR-bound regions, based on ChIP-seq data 15 , with a regulated gene when a ChIP-seq peak was located within a window of 40 kb centred on the TSS of that gene ( Fig. 2a ). The strong GR-responsive genes were associated with 543 peaks. For comparison, we generated a control group of similar size, consisting of 532 peaks associated with weakly GR-responsive genes. For each group of peaks, we conducted a de novo motif search with RSAT peak-motifs 16 . For both groups, we identified the GR motif ( Fig. 2a ) and motifs of AP1 and SP1, which are known cofactors of GR 17 , 18 . The core GR motif was similar for both groups ( Fig. 2a ) and closely matches the GR consensus sequence 15 . However, we observed subtle differences in preferred nucleotides at individual positions. For instance, the spacer of GBSs associated with weak responders preferentially contains a G or C at position −1, whereas no such preference is observed for GBSs associated with strong responders. This is consistent with previous studies showing that the sequence of the spacer can modulate GR activity 11 , 13 . Furthermore, we found that the nucleotide flanking each half-site (positions −8 and +8) exhibited high information content in the strong-responders data set, with sequence preferences that differed between peaks associated with strong and weak responder genes ( Fig. 2a ).
For GBSs associated with strongly GR-responsive genes, the flanking nucleotide was preferentially an A or T, whereas for GBSs associated with weakly GR-responsive genes the flanking nucleotide was preferentially a G or C. Because the motifs discovered by the de novo motif search are not necessarily present at different frequencies in the two groups, we quantitatively compared the occurrences of motif matches flanked by A/T and G/C nucleotides 5′ and 3′ of the core motif, which are associated with ‘strong’ and ‘weak’ peaks, respectively. Consistent with the outcome of the de novo motif search, this analysis showed more motif matches for the A/T-flanked motif in strong-responder-associated peaks than in weak-responder-associated peaks, whereas the opposite was found when we scanned with the G/C-flanked motif ( Supplementary Fig. 2 ). Together, this suggests that GBS variants may indeed play a role in modulating GR activity towards endogenous target genes, and hints at a possible role in this process for the base pairs directly flanking the half-sites. Figure 2: Identification and characterization of high-activity GBS variants. ( a ) Overview of the workflow to identify candidate high-activity GBS variants. Genes were grouped into strong (top 20% highest fold induction) and weak (log2-fold change <0.72) transcriptional responders to dexamethasone treatment. Next, ChIP-seq peaks in a 40 kb window centred on the TSS of responder genes were extracted for each group and subjected to de novo motif searches, resulting in the depicted motifs. The flank positions (−8 and +8) are highlighted by red (A/T) or blue (G/C) rectangles. ( b ) Transiently transfected luciferase reporter induction of GBS sequences flanked by either A/T or G/C nucleotides. Average fold induction upon 1 μM dexamethasone (dex) treatment relative to ethanol (etoh) vehicle±s.e.m. ( n ≥3) is shown. ( c ) Comparison of transcriptional induction of transiently transfected Cgt and Sgk GBS variants with G/T and A/C ‘mixed flanking sites’ compared with A/T and G/C flanks. Average fold induction on 1 μM dexamethasone (dex) treatment relative to ethanol (etoh) vehicle±s.e.m. ( n ≥3) is shown. GBS flanking nucleotides modulate GR activity To test the role of the base pairs flanking each half-site (positions −8 and +8) in modulating GR activity, we generated reporters in which each of five GBS variants (Cgt, FKBP5-1, FKBP5-2, Pal and Sgk) was flanked by either A/T or G/C bp ( Fig. 2b ). These reporters displayed comparable basal activities, whereas the level of induction on dexamethasone treatment varied between the sequence variants ( Fig. 2b ). Consistent with the observations for endogenous GR target genes, the A/T-flanked GBSs showed higher reporter gene activity than the G/C-flanked GBSs for four out of five tested GBS variants, whereas little to no effect of changing the flanks was observed for the Pal sequence ( Fig. 2b ). For example, the activity of A/T-flanked Cgt was twice that of the G/C-flanked version of this GBS ( Fig. 2b ). Together, these experiments indicated that the proximal flanking nucleotides can indeed modulate GR activity; from here on we use the term ‘flank effect’ to refer to the dependency of GR target gene expression on the nucleotides flanking the GBS core motif. Notably, the Sgk and Cgt GBSs showed the greatest flank effect, whereas the effect for the Pal and FKBP5-1 GBSs was small.
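To make the ‘flank effect’ concrete as a quantity, a minimal sketch of how fold inductions and their A/T-to-G/C ratio could be computed from replicate luciferase measurements; the numbers are invented for illustration and are not data from the study.

```python
import numpy as np

# Invented replicate luciferase readings (dex-treated and ethanol control)
# for one GBS with A/T versus G/C flanks; values are illustrative only.
dex = {"A/T": np.array([34.0, 30.5, 36.1]), "G/C": np.array([16.8, 15.2, 18.0])}
etoh = {"A/T": np.array([2.0, 1.9, 2.1]), "G/C": np.array([2.0, 2.1, 1.9])}

# Mean fold induction (dex/etoh) per flank variant.
fold = {flank: (dex[flank] / etoh[flank]).mean() for flank in dex}

# The flank effect as the ratio of fold inductions (~2 for Cgt in the text).
flank_effect = fold["A/T"] / fold["G/C"]
print(fold, round(flank_effect, 2))
```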
When comparing the sequences of these GBS variants, we observed that the second half-site (positions 2–7) forms an ‘imperfect’ palindromic sequence (not matching TGTTCT) for the GBSs with the greatest flank effect (Cgt and Sgk), whereas this sequence is palindromic for Pal and FKBP5-1. To test whether the ‘imperfect’ half-site of Cgt and Sgk is responsible for the flank effect, we generated new luciferase reporter constructs with mixed flanking nucleotides 5′ and 3′ of the core motif (A/C and G/T) ( Fig. 2c ). These experiments showed that the imperfect half-site is indeed mainly responsible for the flank effect, with on average a 98% increase in activity when we changed the flank of the imperfect site, whereas this increase was a more modest 18% when we changed the flank of the ‘perfect’ half-site. We focused on the Cgt and Sgk GBSs in further experiments as they showed the strongest influence of the flanking nucleotides. To study the role of flanking nucleotides in the chromosomal context, we stably integrated a Sgk-GBS luciferase reporter in U2OS cells at the AAVS1 locus to simulate an endogenous gene environment. Matching what we observed with the transiently transfected reporters, we again found that the integrated A/T-flanked Sgk showed ∼ 1.5-fold greater reporter activity than the G/C-flanked GBS ( Fig. 3a ). At this point, we wondered how the proximal flanks influence GR activity. To determine whether the flank effect might be caused by a change in the intrinsic affinity of the DNA-binding domain (DBD) for GBSs, we conducted electrophoretic mobility shift assays (EMSAs). However, arguing against a role for changes in intrinsic affinity, we found similar K_D values for both A/T- and G/C-flanked Cgt and Sgk GBSs ( Fig. 3b ). In a second approach, we also studied GR binding in vivo to the A/T- and G/C-flanked Sgk versions of the stably integrated reporter constructs from the previous experiment by ChIP ( Fig. 3a ). Remarkably, the GR occupancy of G/C-flanked Sgk was twice that of the A/T-flanked Sgk ( Fig. 3c ), despite the fact that A/T-flanked Sgk leads to higher gene activation. Similarly, GR binding was essentially the same when comparing the peak height of all endogenous GR ChIP-seq peaks containing an A/T-flanked GBS with those flanked by G/C ( Supplementary Fig. 3 ), showing that peak height and flanking-site sequence are independent. We therefore conclude that the flank effect appears not to be a consequence of changes in DNA-binding affinity. Figure 3: Effect of flanking sites on binding and on regulation in a genomic context. ( a ) Transcriptional activation of the targeted integrated luciferase reporters with Sgk GBS flanked by either A/T or G/C nucleotides. Average fold induction of the luciferase reporter gene on 1 μM dexamethasone (dex) treatment relative to ethanol (etoh) vehicle±s.e.m. ( n ≥3) is shown. ( b ) Table of EMSA-derived DNA-binding constants (K_D) for Sgk and Cgt GBSs with flanking sequences as indicated. S.d. from three independent replicates. ( c ) GR occupancy levels for integrated Sgk-GBS reporters with flanks as indicated were analysed by chromatin immunoprecipitation followed by qPCR for cells treated with either dex (1 μM, 1.5 h) or ethanol as vehicle control. Average relative enrichment at the GBS locus±s.d. for three clonal lines and three independent replicates is shown. Flanking nucleotides modulate DNA shape Previous studies have shown that the sequence of the spacer influences DNA shape and GR activity 13 .
To test whether the local structure of the DNA-binding site is affected by the flanking nucleotides of the GBS, we compared DNA shape features between G/C-flanked (75 GBSs) and A/T-flanked (83 GBSs) GBSs from peaks associated with weakly and strongly upregulated genes, respectively. The DNA shape features were predicted using a high-throughput method that has been extensively validated against experimental data 19 . This analysis showed a slight difference in minor groove width between GBSs flanked by G/C and A/T at positions −8 and +8 (proximal flanks) ( Fig. 4a ). More strikingly, at positions −7, +7, −6 and +6 the predicted minor groove width in A/T-flanked GBSs is not only narrower than in the rest of the GBS but also narrower than at the corresponding positions in G/C-flanked GBSs ( Fig. 4a and Supplementary Fig. 4A ). Importantly, the overall nucleotide composition (given as A/T content in Fig. 4a ) of the GBS and its surrounding region was comparable for the two groups of sequences, indicating that the effect on the two neighbouring nucleotides is a consequence of changing the sequence of the proximal flanks. We also predicted the propeller twist for the same sets of A/T- and G/C-flanked GBSs and found that the propeller twist differs between the two groups of sequences, especially at positions −8 and +8 (proximal flanks) ( Supplementary Fig. 4B ). Next, we repeated the DNA shape prediction for individual GBSs tested previously in the luciferase reporter assays ( Fig. 4b ). Since the first half-site (positions −7 to −2) is identical in all tested GBSs, it is not surprising that all GBSs have a similar minor groove width at these positions. Notably, the minor groove width of the spacer varies among GBSs, consistent with the known role of the spacer in modulating GR activity 13 . Here we focus on the proximal flank of the second half-site (positions 6–8). For both the Cgt and Sgk GBSs, the minor groove width at the flanking position +8 is slightly narrower in the G/C-flanked version than in the A/T-flanked version. In contrast, the neighbouring positions +6 and +7 exhibit a narrower minor groove width in the A/T-flanked versions. This result suggests that the crucial structural DNA shape change occurs at positions +6, +7 and +8. For the Pal and FKBP5-1 GBS variants (which do not exhibit a flank effect) the minor groove width is already quite narrow at these positions, perhaps explaining why these GBSs do not exhibit a flank effect. Figure 4: Effect of flanking sequences on predicted DNA shape. ( a ) Top: predicted mean minor groove width (MGW) for individual nucleotide positions for the group of A/T-flanked GBSs associated with strong responder genes (83 GBSs) and for the group of G/C-flanked GBSs associated with weak responder genes (75 GBSs). Bottom: A/T content (%) at each position for A/T (red) and G/C (blue) flanked sequences used for the analysis. ( b ) Predicted minor groove width for individual nucleotide positions for different GBSs flanked by either G/C or A/T nucleotides. GBS flanking nucleotides affect GR-DBD conformation Overall, the predicted changes in DNA structure induced by the flanking nucleotides suggest that DNA shape may serve as an input signal that regulates GR activity. To determine whether the flanking nucleotides influence GR structure and/or dynamics, we probed the DBD of GR in complex with flank-site Cgt variants by two-dimensional nuclear magnetic resonance (2D NMR) spectroscopy experiments in which nuclei of the protein backbone amides ( 1 H, 15 N) are correlated.
The resulting spectra provide one signal for each amide and depict the so-called protein fingerprint region, which is unique for each protein construct and chemical (for example, binding-dependent) environment. As expected, addition of proximal-flank Cgt variants resulted in spectral changes when compared with unbound DBD ( Supplementary Fig. 5A ). When we compared the spectra of the complexes between the GR DBD and G/C- and A/T-flanked Cgt oligonucleotides, we found a number of differences between the spectra ( Supplementary Fig. 5B ). To study these differences in more detail, we analysed the normalized chemical shift perturbation (CSP) data for each residue as described previously 13 . Interestingly, we observed affected amino-acid residues not only in the direct vicinity of the altered base pair but throughout the whole DBD, indicative of global changes in DBD conformation induced by the proximal flanks 20 , 21 , 22 ( Figs 5 and 6a ). Figure 5: NMR chemical shift difference analysis between GBSs with different flanks. Chemical shift difference of spectra between (top three panels) Cgt flanked by A/T versus G/C; A/T versus A/C; G/C versus A/C for wild-type DBD and (bottom panel) Cgt A/T versus G/C for the dimer mutant DBD (A477T). Horizontal dashed grey lines indicate the significance cut-off (average +1 s.d.). Green dashed lines demarcate amino-acid residues with significant shifts when comparing the A/T and A/C sequences. Figure 6: Influence of flanking nucleotides on GR structure. ( a ) Side view of the GR DBD crystal structure (PDB: 3G9J) with chains A and B corresponding to each monomer. Amino-acid residues with significant combined 1 H and 15 N chemical shift differences between A/T- and G/C-flanked Cgt sequences are projected onto this GR DBD structure and coloured in red. ( b ) Side view of the GR DBD crystal structure with amino-acid residues with significant combined 1 H and 15 N chemical shift differences between A/T- and A/C-flanked Cgt sequences projected in green onto the GR DBD, chain B. Next, we selectively changed the flanks at either the ‘perfect’ half-site (chain A) or the ‘imperfect’ half-site (chain B), which is mainly responsible for the flank effect. These experiments showed that changing the flanking nucleotides of the imperfect half-site (AT/AC; Figs 5 and 6b , Supplementary Fig. 5C ) resulted in CSPs for several residues (T456, R488, N497, N506, K511). Similarly, changing the proximal flank of the perfect half-site (GC/AC, Fig. 5 , Supplementary Fig. 5D ) induced peak shifts for multiple residues. Interestingly, however, the affected residues overlapped in some cases (T456 and Y497), whereas they were flank-specific in others ( Fig. 5 ). As a general rule, NMR spectroscopy is not able to distinguish oligomers with similar conformations or dynamics from one another. During the assignment and CSP calculation, though, it became apparent that several residues, which map predominantly to the DNA-recognition helix 1 (G458, C460 and K461), show split peaks, that is, more than one signal for a given DBD amino acid ( Supplementary Fig. 6 ). Notably, split peaks were not observed for all residues (an example is shown for Q520, Supplementary Fig. 6 ), and a comparison of apo and DNA-bound GR DBD spectra ( Supplementary Fig. 5A ) showed that the extra peaks are not a simple consequence of having a fraction of GR DBD in our samples that is not DNA-bound.
Split peak patterns are characteristic of either conformational exchange within each monomer or different chemical environments (that is, conformations or DNA sequence) of the individual monomers within the ternary DNA–DBD complex. Observation of a third peak for C460 on substitution of A/T by G/C nucleotides at the proximal flank positions indicates the possible presence of two distinct conformations for one of the individual monomers. Helix 1 sits in the major groove opposite the minor groove at the positions (−6, −7/+6, +7) where the flanking nucleotides induce a narrowing of that groove. Consequently, the DBD of GR might contact DNA differently, for example, by contacting other nucleotide positions, when we change the sequence of the flanks. To test this, we analysed the protein–DNA complex again by NMR spectroscopy, but this time observing not the resonances of the protein but those of the DNA itself. We assigned the imino protons in the 1D spectra for Cgt flanked by either A/T or G/C nucleotides ( Supplementary Fig. 7A ) and titrated both oligonucleotides with increasing amounts of protein to determine whether the proximal flanks influence protein–DNA contacts within the complex ( Supplementary Fig. 7 ). Consistent with the crystal structure of the GR–DNA complex, these experiments indicate that the DBD contacts both half-sites of the motif at positions −6 (G6), −4 (T41), −3 (G40) and +2 (T14), +4 (T16). On protein addition, we observed a progressive uniform line broadening for both DNAs, indicative of similar K_D values, in agreement with the EMSA experiments. When we compared the base pairs contacted between the A/T- and G/C-flanked DNAs, the same set of residues showed evidence of binding to the DBD of GR. However, the imino proton of G46 (position −9), whose resonance is well resolved, showed a more pronounced broadening in the case of the A/T-flanked DNA ( Supplementary Fig. 7 ). This base pair, located outside the 15-bp consensus sequence, interacts with the DBD of GR, in agreement with contacts formed by helix 3 in the crystal structure 10 . This highlights a very subtle difference introduced by the flanking nucleotides on the protein–DNA complexes. Together, our approaches probing changes in structure indicate that a G/C flank induces several changes in the DBD of GR compared with the GBS with an A/T flank. Flank effect requires an intact dimer interface To investigate how the DBD of GR might recognize the shape of DNA to modulate GR activity, we tested the role of several candidate residues of the DBD that contact the DNA. As candidates we chose R510, which is part of helix 3 and contacts the flanking nucleotide directly according to the crystal structure 11 . Similarly, K511 might contact the flanking nucleotide and indeed shows a significant chemical shift in our NMR experiments on changing the flanks ( Fig. 5 ). In addition, we tested K461 and K465, which reside in the DNA-recognition helix 1. Based on the crystal structure, K461 makes a base-specific contact with the G at position −6/+6 in the major groove opposite the position where the flank induces a change in minor groove width, whereas K465 contacts the phosphate backbone 11 , 23 . When we mutated R510, K511 or K465 to alanine, the flank effect was still observed, arguing against a role of these residues in ‘reading’ the DNA to modulate GR activity ( Fig. 7a ).
Mutating K461 to an alanine resulted in a marked decrease in GR-dependent activation for the A/T-flanked GBS and a slight decrease for the G/C-flanked GBS, consistent with the decreased activity found for this mutant in previous studies 24 . Interestingly, however, there was still some residual activity for the G/C-flanked GBS, the one with the slightly higher affinity ( Fig. 3b ), whereas no activation was seen for the A/T-flanked variant, which is more active for wild-type GR ( Fig. 7a ). Interpretation of this result is complicated by the fact that mutating this charged residue alters the binding energetics and potentially the structure of the complex. Nonetheless, our findings suggest that the K461 residue might play a role in interpreting the proximal-flank-encoded instructions, corroborating previous studies 24 that uncovered a role of this residue in interpreting the signalling information provided by GR response elements. Figure 7: Flank effect requires an intact dimer interface. ( a ) Comparison of transcriptional activation of transiently transfected reporters with GBSs as indicated flanked by either A/T or G/C sequences between GR wild-type (WT) and the GR variants R510A, K511A, K465A and K461A, respectively. Average induction on 1 μM dexamethasone (dex) treatment relative to ethanol (etoh) vehicle±s.e.m. ( n ≥3) is shown. ( b ) Same as a , comparing GR wild-type (WT) and the dimer mutant (Dim, A477T). ( c ) Zoom-in of 1 H- 15 N-SOFAST-HMQC spectra of selected peaks for residues that show peak splitting and non-overlapping spectra when comparing GR DBD in complex with either A/T- or G/C-flanked Cgt sequences for (left) wild-type, (middle) A477T dimer mutant DBD and (right) an overlay of wild-type and dimer mutant. (Stoichiometry DNA:DBD; 1:2). ( d ) Cartoon depicting (left) how the bases flanking the GBS influence the structure and relative positioning of GR half-sites and (right) how disruption of the dimer interface weakens the effect of the flank on GR structure (and activity). Prior studies have shown that an intact dimer interface is required to read DNA shape and to direct sequence-specific GR activity when changing nucleotides of either the spacer or the GR half-sites 13 . Comparison of the binding affinities for A/T- and G/C-flanked Cgt showed that GR’s affinity was comparable for both sequences, for the wild-type DBD ( Fig. 3b ) as well as for the A477T DBD (A/T: 3.1±0.4 μM; G/C: 3.5±1.1 μM), although the overall affinity was lower for the mutant. To test whether the dimer interface plays a role in mediating the flank effect, we examined the impact of disrupting the dimerization interface on proximal-flank-induced modulation of GR activity. As reported previously, mutating A477 of the dimer interface resulted in GBS-specific effects 13 . For the A/T-flanked GBSs Cgt and Sgk, the difference in GR activity between wild type and the A477T mutant was small (Sgk: 8% decrease; Cgt: 13% increase, Fig. 7b ). In contrast, for the flank with the lower activity, G/C, the A477T mutation resulted in a more pronounced increase in activity for both GBSs tested (Sgk: 50% increase; Cgt: 69% increase, Fig. 7b ). Consequently, the difference in activity between the A/T- and G/C-flanked versions of Cgt and Sgk is smaller for the dimer mutant than for wild-type GR ( Fig. 7b ), indicating that the dimerization domain is involved in transmitting the flank effect.
Strikingly, the dimerization interface lies on the opposite side of the GR monomer relative to the flanking nucleotide position, suggesting that the flanking nucleotides may induce a more global change in GR conformation. To further elucidate the role of the dimer interface in transmitting the flank effect, we studied the impact of the A477T mutation on proximal-flank-induced conformational changes of GR by 2D NMR spectroscopy ( Supplementary Fig. 5E,F ). This analysis uncovered two main results. First, several of the residues with significant CSPs for wild type (C460, F464, M505, L507, R511, T512, K514) no longer show a significant shift when we compare the G/C- and A/T-flanked Cgt for the A477T mutant ( Fig. 5 , Supplementary Fig. 5F ). Second, several peaks that show flank-specific patterns of peak splitting for wild-type GR (for example, C460) show an overlapping single peak for the mutated A477T DBD ( Fig. 7c ). This indicates that the proximal flanks can only induce alternative conformations of the DBD when the dimerization interface is intact. Together, these functional and structural analyses of the consequences of disrupting the dimer interface argue for its role in facilitating flank-induced changes in GR conformation and activity. Discussion Specific recognition of DNA sequences by TFs is a consequence of both base readout and shape readout of the DNA-binding site 25 . In addition to specifying which genes are regulated by a particular TF, the binding site sequence can also play a role in fine-tuning the expression level of genes. For example, binding sites might be able to modulate gene expression as a consequence of differences in affinity 12 , 26 , 27 , 28 , where high-affinity binding sites induce a higher level of transcriptional activation than low-affinity binding sites. However, in vitro affinity and in vivo activity often do not correlate 11 , 29 , 30 , 31 . Accordingly, we find in this study that sequences flanking the core GBS induce changes in activity without apparent changes in affinity derived from in vitro binding studies. One explanation for this apparent disconnect between binding affinity and activity could be that in vitro binding affinity does not reflect binding affinity in vivo . Yet here we also fail to see a correlation when we compare in vivo occupancy derived from ChIP experiments as a proxy for in vivo affinity. We would like to point out that the interpretation of quantitative comparisons of ChIP efficiencies between binding sites is complicated by possible sequence-specific efficiencies of formaldehyde cross-linking 32 . In this study, we focused on the first flanking nucleotide, or ‘proximal flank’. However, when we changed the second flanking position, we found an even more dramatic effect, where depending on the sequence of this position GR could either robustly activate transcription or completely lack the ability to activate transcription ( Supplementary Fig. 8A ). Again, the modulation of GR activity appears independent of binding affinity, and could be a consequence of conformational changes of the DNA ( Supplementary Fig. 8B,C ). Together, these findings argue that GBSs can modulate GR activity downstream of binding. Structural studies 8 , 11 , 13 , including those presented here, indicate that GBS variants with distinct transcriptional activities induce alternative conformations in the DBD of GR.
These structural changes can be induced by changing the sequence of the spacer, of the half-sites or, as we show here, of the nucleotides flanking the core-binding site. Based on the structure, the side chains of R510 and K511 can contact the flanking nucleotides and thus serve as potential ‘readers’ that interpret the DNA-encoded instructions and translate them into changes in activity. However, when we change these residues to alanines, the flank effect is still observed. This suggests that direct contacts with the flanking nucleotides are not responsible for the flank effect. Instead, the effects of the proximal flanks might be a consequence of the predicted changes in DNA shape. DNA shape, in turn, could induce structural changes in the associated GR dimer partners. To further understand the molecular basis that gives rise to the aforementioned split peaks in our NMR spectra, we turned to molecular dynamics (MD) simulations of how changing the flanks influences the individual monomers. When we compared the overall trajectories, however, we did not observe significant structural differences for either chain A or chain B when we compared the root mean squared deviation (r.m.s.d.) values between the A/T- and the G/C-flanked Cgt GBS. Similarly, we only observed subtle changes when we compared the root mean squared fluctuation (r.m.s.f.) ( Supplementary Fig. 9 ), a measure of the flexibility of the DBD, between the two Cgt flank variants. The changes that do occur predominantly map to residues at the dimerization interface ( Supplementary Fig. 10A ). In addition, the r.m.s.f. values for monomer B when bound to the G/C-flanked GBS are higher than those observed for the A/T counterpart, indicating that chain B’s interaction with the DNA is more dynamic for this sequence ( Supplementary Figs 10A and 9 ). Finally, we compared the median GR-DBD structures (computed from the last 50 ns of the MD simulations) when bound to A/T- or G/C-flanked Cgt. Again, the deviations between these two structures are small, except for the lever arm, which connects the dimerization interface with the DNA-recognition helix ( Supplementary Fig. 10B ). Interestingly, however, changing the flanking nucleotides appears to result in a different relative positioning of the dimer halves, as can be seen from the median conformations for both flank variants when aligned on chain A ( Supplementary Fig. 10C ). Together, our structural approaches showed flank-induced changes in the dynamics and conformation of the dimer partners and in the relative positioning of the GR dimer halves. Consistent with previous studies 13 , we find that GR’s ability to ‘read’ DNA-shape-encoded instructions, in this case as a consequence of changing the flanks, requires an intact dimer interface. Importantly, the mutation in the dimerization domain we studied (A477T) does not result in an inability of GR to dimerize in vivo 33 . Therefore, our interpretation is that the effects of mutating the dimerization interface are a consequence of perturbing an interface important for communication between dimerization partners or for communication between the different GR domains of each monomer, rather than a consequence of an inability of the mutant to bind DNA as a dimer. We find that mutating the dimer interface diminishes flank-induced changes in both GR structure and activity.
This suggests that the dimer interface prevents the monomers from adopting an optimal positioning in the major groove and consequently the dimer partners switch between different conformational states to accommodate conflicting optimal contacts at the dimer interface and those with the DNA ( Fig. 7d ). This might also explain the high degree of flexibility that the dimer interface and connected lever arm display based on the r.m.s.f. values of the MD experiments ( Supplementary Figs 9 and 10 ). Mutation of the dimer interface might release this stress and allow optimal positioning of both dimer partners for contacting the DNA in the major groove. Similarly, conflicts in the optimal positioning of dimer halves might be relieved when mutating K461, which weakens the interactions between DNA and protein 24 thus favouring optimal positioning of the GR partners for interactions at the dimerization interface. To link the structural changes to variations in transcriptional output, we propose that DNA-shape-induced effects on the conformation, dynamics or relative positioning of GR partners influence its interactions with co-regulators by making or breaking interaction surfaces to ultimately modulate the recruitment or activity of the RNA polymerase machinery. In addition to fine-tuning the activity of TFs, DNA shape also enables paralogous TFs to have distinct DNA-binding preferences 34 , 35 . For example, members of the Hox family of TFs share a similar consensus recognition sequence, yet have distinct functions in vivo . This specificity was explained by Hox-specific DNA shape preferences which enabled the exchange of binding site preferences from one Hox protein to another by swapping shape-recognizing residues 34 . In addition, several other studies have shown a role of nucleotides flanking the core-binding site in guiding TFs to their cognate binding sites 35 , 36 . For GR, several related nuclear receptors share the same DNA binding specificity in vitro yet regulate different physiological processes. For example, the androgen receptor promotes myogenesis 37 whereas chronic GR activation results in muscle wasting 38 . We speculate that DNA shape could also generate specificity for this family of TFs by modulation of TF activity downstream of binding. In this scenario, two TFs might bind at the same target site, yet only one adopts an activation-competent conformation. The possibility that only certain binding events induce activation-competent conformations could also explain, in part, why only a minority of genes show changes in their expression level on binding of TFs to regulatory sequences nearby 39 . The activity of TFs towards individual target genes can be modulated by a variety of mechanisms other than the sequence identity of the binding site. For example, a recent study showed that the number of occupied NF-κB-binding sites associated with a gene correlates with the magnitude of activation 39 . However, in addition to being expressed at higher levels, genes with multiple TF-binding sites might display a greater degree of cell-to-cell variability (transcriptional noise) of gene expression 40 . Therefore, we speculate that it could be beneficial for some GR target genes to be under control of a single, highly-active GBS with little transcriptional noise rather than multiple GBSs which induce greater noise. 
Another benefit of modulating activity by DNA-shape-induced conformational changes is that this might allow GR to induce different expression levels of a gene from the same binding site depending on its cellular context. This could, for example, occur when a GBS-induced conformation facilitates interaction with a particular co-regulator that is expressed in a cell-type-specific manner. This would be one of several mechanisms that GR can exploit to extract context information from its cellular environment to allow fine-tuning of its activity towards distinct sets of target genes responsible for GR’s role in diverse physiological processes including metabolism, inflammatory response and emotional behaviour. The present study advances our understanding of GBS-mediated regulation of GR activity in several ways. First, we show for the first time that GBSs can modulate GR activity in a genomic context, and our in vivo occupancy studies indicate that this modulation occurs downstream of binding. Structural studies indicate that this modulation may be a consequence of GBS-dependent conformational changes of individual monomers and of changes in the relative positioning of dimeric partners. Studies with related hormone receptors that heterodimerize have shown allosteric communication between dimerization partners across the dimerization interface to fine-tune the structure and activity of the complex 41 . Here we propose that GR monomers can change their shape and that the homodimerization partners can change their relative positioning to assemble multiple distinct complexes, effectively allowing a kind of combinatorial regulation of transcriptional output by a single TF. Whether GBSs indeed play a role in modulating the activity of GR towards endogenous GR target genes is still unclear. Arguing in favour of this possibility, we show that GBS sequence features found at GR-bound regions in the genome, specifically the nucleotides flanking the core GBS, show different preferences depending on the strength of regulation of the nearby gene. The next step in studying the role of GBS composition in the modulation of endogenous target gene expression would be to test the consequences of changing the sequence identity of endogenous binding sites, which, given the recent advances in the ability to edit the genome, is now within reach. Methods Plasmids Luciferase reporter constructs were generated by ligating oligonucleotides encoding a GBS of interest ( Supplementary Table 1 ), with overhangs to facilitate direct cloning, into the KpnI and XhoI sites of pGL3-promoter (Promega). Mutations of the second flank position ( Supplementary Fig. 8 ) were introduced by site-directed mutagenesis (oligos listed in Supplementary Table 2 ). Expression constructs for wild-type rat GR, the GR dim mutant (A477T) and the GR R510A mutant have been described previously 11 . The GR mutants K465A and K511A were generated by site-directed mutagenesis (oligos listed in Supplementary Table 2 ). Constructs expressing ZFNs against the AAVS1 locus have been described elsewhere 14 , 42 . Donor constructs for luciferase reporter addition to the AAVS1 locus were assembled as described 14 . The donor constructs consisted of regions of homology flanking the position where the ZFNs induce the double-strand break, a promoter-less GFP gene and the GBS sequence as indicated upstream of a minimal SV40 promoter driving expression of the firefly luciferase gene derived from the pGL3-promoter plasmid (Promega).
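To illustrate the kind of oligonucleotide design involved in such annealed-duplex cloning, a minimal sketch deriving the bottom strand as the reverse complement of a top strand; the example sequence and the helper are illustrative only and are not the cloning oligos actually used in the study.

```python
# Minimal sketch: derive the complementary strand for an annealing pair.
# The example sequence is invented; real cloning oligos would additionally
# carry KpnI/XhoI-compatible overhangs for directional insertion.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of an upper-case DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

top = "AGAACATTTTGTACG"          # hypothetical GBS core (top strand)
bottom = reverse_complement(top)  # strand to order for annealing
print(top, bottom, sep="\n")
```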
Cell lines, transient transfections and luciferase assays U2OS (ATCC HTB-96) and U2OS cells stably transfected with rat GRα 43 , 44 were grown in DMEM supplemented with 5% FBS. Transient transfections were done essentially as described 11 . Luciferase activity was measured using the dual luciferase assay kit (Promega). Electrophoretic mobility shift assays EMSAs were performed as described previously 15 . Briefly, a series of GR DBD dilutions were mixed with 1.25 × 10^−9 M DNA (oligos listed in Supplementary Table 3 ) in 20 mM Tris pH 7.5, 2 mM MgCl2, 1 mM EDTA, 10% glycerol, 0.3 mg ml^−1 BSA, 4 mM DTT, 0.05 μg μl^−1 dIdC. Reaction mixes were incubated for 30 min to reach equilibrium, loaded onto running native gels and scanned using a FLA 5100 scanner (Fujifilm) to quantify free [D] versus total [D]t DNA. Equilibrium binding constants (K_D) were determined by non-linear least-squares fitting of the free protein concentration [P] versus the fraction of DNA bound ([PD]/[D]t) to the equation [PD]/[D]t = 1/(1 + K_D/[P]). Targeted integration of GBS reporters Cell lines with stably integrated GBS reporters were isolated as described previously 14 . Briefly, cells were transfected with ZFN and donor constructs by nucleofection (Amaxa), GFP-positive pools of cells were isolated by fluorescence-activated cell sorting (FACS) and single-cell-derived clonal lines were isolated. To identify clones with a correct integration of the donor construct at the AAVS1 locus, 40 ng of chromosomal DNA was analysed by PCR using a primer targeting the donor construct (Luc-fw: 5′-TCAAAGAGGCGAACTGTGTG-3′) and a primer targeting the genomic AAVS1 locus that directly flanks the site of integration (R5: 5′-CTGGGATACCCCGAAGAGTG-3′) ( Fig. 1a and Supplementary Fig. 1A ). Chromatin immunoprecipitation ChIP assays were performed as described using the N499 GR antibody 15 . For each ChIP assay, approximately five million cells were treated with 0.1% ethanol vehicle or 1 μM dexamethasone for 1.5 h. Primers used for quantitative PCR (qPCR) are listed in Supplementary Table 4 . RNA isolation and analysis by qPCR RNA was isolated from cells treated for 8 h with 1 μM dexamethasone or with 0.1% ethanol vehicle using the RNeasy mini kit (Qiagen). The Turbo DNA-free kit (Ambion) was used to remove trace amounts of contaminating chromosomal DNA prior to reverse transcription using random primers and 500 ng of total RNA as input. The resulting cDNA was analysed by qPCR using Rpl19 as an internal control for normalization. Primers used are listed in Supplementary Table 4 . Computational analysis of ChIP-seq and gene expression data Microarray data sets in U2OS cells were taken from ref. 15 (E-GEOD-38971). ChIP-seq data sets from the same study were downloaded as processed peaks (E-MTAB-2731). The differentially expressed (adjusted P value <0.05) genes in U2OS cells were assigned to two groups. The first group consisted of the 20% most upregulated genes on hormone treatment (log2-fold change dexamethasone/ethanol vehicle ranging from 1.91 to 7.86; 290 of 1,447 genes). Next, we extracted the ChIP-seq peaks falling in a 40 kb window centred on the transcription start site of each gene (543 peaks in total from the 290 genes of this group). For comparison, we extracted a similar number of peaks (532) from genes (688) showing only weak regulation (absolute log2-fold change ≤ 0.72).
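A minimal sketch of this grouping and window-based peak assignment, assuming simple in-memory tables of differential expression results and ChIP-seq peak summits; the column names and values are illustrative, not the study's actual data structures.

```python
import pandas as pd

# Hypothetical inputs: one row per differentially expressed gene
# (log2 fold change dex/etoh) and one row per GR ChIP-seq peak summit.
genes = pd.DataFrame({
    "gene": ["g1", "g2", "g3"],
    "chrom": ["chr1", "chr1", "chr2"],
    "tss": [100_000, 500_000, 250_000],
    "log2fc": [2.5, 0.4, 1.9],
})
peaks = pd.DataFrame({
    "chrom": ["chr1", "chr2"],
    "summit": [110_000, 240_000],
})

# Strong responders: top 20% by fold induction; weak: |log2FC| <= 0.72.
strong = genes[genes["log2fc"] >= genes["log2fc"].quantile(0.8)]
weak = genes[genes["log2fc"].abs() <= 0.72]

def assign_peaks(gene_set: pd.DataFrame, window: int = 20_000) -> pd.DataFrame:
    """Return peaks whose summit lies within a 40 kb window (+/- 20 kb)
    centred on the TSS of a gene in the given set."""
    merged = gene_set.merge(peaks, on="chrom")
    return merged[(merged["summit"] - merged["tss"]).abs() <= window]

print(assign_peaks(strong))
```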
For each group of peaks, we performed de novo motif discovery using RSAT peak-motifs (default settings, including the dyad-analysis algorithm and the TRANSFAC version 2010.1 motif collection) 16 . peak-motifs automatically compares detected motifs to annotated motif collections, and motifs matching the GR consensus motif (depicted in Fig. 2a ) were manually extracted. To compare ChIP-seq peak heights between GR-bound regions harbouring either A/T- or G/C-flanked GBSs, GR peaks were first scanned for the occurrence of a GBS match with RSAT matrix-scan (TRANSFAC matrix M00205, P value cut-off: 10^−4 (refs 16 , 45 )). Next, peaks were grouped according to the sequence of the flanks (A/T versus G/C) and the median peak height was calculated to produce Supplementary Fig. 3 . To score the enrichment of A/T-flanked and G/C-flanked GBSs in the peaks associated with strong and weak upregulation, respectively ( Supplementary Fig. 2 ), RSAT matrix-quality was used to compute normalized weight differences (NWD) 46 . The input motifs for matrix-quality were derived from the above-mentioned matrices corresponding to GR motifs found with peak-motifs, enforcing only A/T or G/C at the flank position. DNA shape prediction For DNA shape prediction, we used GBSs associated with weakly and strongly responsive GR target genes. For the weak and strong peak data sets, we extracted the sequences of all GBSs flanked by either G and C (75 GBSs) or A and T (83 GBSs), respectively. The sequences were aligned based on the GBS spacer by setting the centre spacer position to 0. Minor groove width and propeller twist were derived for each position in the aligned sequences using a high-throughput DNA shape prediction approach 19 . To test for differences in DNA shape features between the weak and strong peaks, Wilcoxon test P values were calculated for each nucleotide position separately (a minimal sketch of this per-position comparison is given below). NMR Protein expression and purification 15 N-labelled wild-type and A477T mutant rat GR DBD (residues 440–525) were expressed and purified essentially as described previously 13 , except that a construct codon-optimized for expression in Escherichia coli was used here. In brief, proteins were expressed in E. coli (T7 Express; NEB) using the pET expression system in M9 minimal medium 47 . Expression was induced at an OD600 of 0.6–0.9 using 0.25 mM IPTG (Amresco). The temperature was lowered from 37 to 25 °C on addition of IPTG and cultures were grown overnight. Cells were harvested and lysed, followed by protein separation by IMAC and IEX chromatography. The latter was done after extensive dialysis against salt-free buffer. Final dialysis at the end of protein purification was carried out against NMR buffer (20 mM sodium phosphate; 100 mM NaCl; 1 mM DTT; pH 6.7). Protein–DNA complex formation Single-stranded DNA oligos (salt-free and lyophilized) were purchased from MWG and purified as described 13 . The buffer was exchanged to water using NAP10 gravity-flow columns (GE Healthcare) and oligos were annealed according to a standard protocol. Success of annealing was evaluated using proton-detected 1D NMR spectra. Protein–DNA complexes for 2D NMR were prepared essentially as described 13 by mixing protein solutions of either GRα or GRα-dim in onefold NMR buffer with dsDNA oligos. Final concentrations of protein and DNA were 40 μM and 53 μM, respectively, resulting in a molar ratio of 1:1.33. Samples were supplemented with 5% D2O for the lock. Water and twofold NMR buffer were added to give a final sample volume of 500 μl. Sequences of the oligos are listed in Supplementary Table 5 .
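Returning to the DNA shape comparison described above, a minimal sketch of the per-position rank-sum (Wilcoxon) comparison, assuming predicted minor-groove-width profiles are already available as arrays; the values below are randomly generated placeholders, not real shape predictions.

```python
import numpy as np
from scipy.stats import ranksums

# Placeholder predicted minor-groove-width profiles (rows: GBSs,
# columns: aligned nucleotide positions); values are illustrative only.
rng = np.random.default_rng(0)
mgw_at = rng.normal(5.0, 0.3, size=(83, 23))  # A/T-flanked GBSs
mgw_gc = rng.normal(5.2, 0.3, size=(75, 23))  # G/C-flanked GBSs

# Wilcoxon rank-sum test at each aligned position separately.
p_values = [
    ranksums(mgw_at[:, pos], mgw_gc[:, pos]).pvalue
    for pos in range(mgw_at.shape[1])
]
print(np.round(p_values, 3))
```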
NMR and CSP analysis
1H-15N HSQC spectra were recorded as SOFAST versions 48 at 35 °C on a Bruker AV 600 MHz spectrometer (Bruker, Karlsruhe, Germany) equipped with a cryo-probehead. TopSpin (version 3.1, Bruker) was used for data processing, including zero filling and linear prediction. The transfer of previous assignments 13 and general data evaluation were done using the CCPN software package (version 2.1.5) 49 . CSP values were calculated using the following formula 50 (reconstructed here from the accompanying description):

$$\mathrm{CSP} = \sqrt{\left(\Delta\delta_{^{1}\mathrm{H}}\right)^{2} + \left(\frac{\gamma_{^{15}\mathrm{N}}}{\gamma_{^{1}\mathrm{H}}}\,\Delta\delta_{^{15}\mathrm{N}}\right)^{2}}$$

where Δδ(1H) and Δδ(15N) refer to the differences between the individual hydrogen and nitrogen chemical shifts of two distinct peak maxima, and the gyromagnetic ratio (γi) of nucleus i, where i is 1H or 15N, is used for normalization.

DNA assignment
NMR experiments were recorded at 700 MHz on an Avance III Bruker spectrometer equipped with a TCI z-gradient cryoprobe. NMR data were acquired at 15 and 20 °C. Solvent suppression was achieved using the 'Jump and Return' sequence combined with WATERGATE 51 , 52 , 53 . 2D NOESY spectra were acquired with mixing times of 400 and 50 ms. NMR data were processed using TopSpin and analysed with the Sparky software package (Goddard, T.D. and Kneller, D.G., SPARKY 3, University of California, San Francisco). 1H assignments were obtained using standard homonuclear experiments. The resonances found between 10 and 14 p.p.m. are characteristic of protons involved in hydrogen bonds, generally due to the formation of base pairs. The imino proton spectra of the A/T- and G/C-flanked DNAs showed the formation of DNA duplexes. A:T Watson–Crick base pairs were discriminated from G:C base pairs by the strong correlation between the thymine H3 imino proton and the H2 proton of adenine; in a G:C Watson–Crick base pair, two strong NOE cross-peaks are observable between the guanine H1 imino proton and the cytosine amino protons. Base pairings were then established via sequential nuclear Overhauser effects observed in 2D NOESY spectra at different mixing times.

DNA–protein titration
Proton-detected 1D NMR spectra with a double WATERGATE sequence for water suppression 54 were used for the titration experiments. The inter-gradient delay of the WATERGATE sequence was set to 80 μs to obtain maximum signal intensity for dsDNA-specific hydrogen bonds at ∼12 p.p.m. About 500 μl of 50 μM dsDNA in 1× NMR buffer without protein (including 5% D2O) was used as the starting sample. Unlabelled GRα (1.2 mM stock concentration in NMR buffer) was added stepwise to achieve DNA:protein ratios of 1:0.25, 1:0.50, 1:0.75, 1:1.00, 1:1.25, 1:1.50, 1:1.75, 1:2.00, 1:2.50 and 1:3.00, with minimal dilution of the dsDNA (the final concentration of dsDNA at the 1:3 ratio was 45 μM). All titration experiments were performed at 25 °C, monitoring the imino proton region of the 1D spectra. Intensities of the imino protons were measured at each point of the titration, and ratios of intensities between bound and free DNA were calculated for both the A/T and G/C DNA. All peaks showed similar decreases in intensity with increasing protein:DNA ratios, with the exception of G46, which exhibited more pronounced broadening in the case of the A/T DNA.

MD simulations
Molecular systems
Classical MD simulations were carried out for the A/T and G/C flank variants of the Cgt GBS. The initial structure was prepared from a crystal structure of the GR DNA-binding domain in complex with the Cgt binding site (PDB ID 3FYL 11). Position +5 was mutated in silico (C to A).
Five and four nucleotides per strand in a perfect B-form were added to the 5′ and 3′ sides of the DNA fragment, respectively, resulting in 24-nucleotide DNA fragments: 5′-CACCAAGAACATTTTGTACGTCTC-3′ and 5′-CACCGAGAACATTTTGTACGCCTC-3′ for the A/T and G/C Cgt flank variants, respectively.

Molecular dynamics simulation
The simulations were performed with the program package NAMD 2.10 (ref. 55) using the CHARMM27 force field 56 . The DNA fragments of the initial structures were energy-minimized (3,000 steps of conjugate gradient) to remove energetically unfavourable conformations resulting from the addition of the extra nucleotides. The systems were solvated in TIP3P water 57 , and a total of 35 sodium ions were placed randomly, within a minimum distance of 10.5 Å from the solute and 5 Å between sodium ions, to ensure a zero net charge for the solute–solvent–counterion complex. The systems contained ∼127,000 atoms. The final systems were equilibrated by 5,000 steps of energy minimization, followed by a 30 ps MD simulation (time step 1 fs) to heat the system to 300 K by velocity scaling. Next, a 200 ps relaxation (time step 1 fs) was performed in the NPT ensemble. Periodic boundary conditions were applied, with the particle-mesh Ewald method 58 used for electrostatic interactions with a 14 Å cut-off distance; Lennard–Jones interactions were truncated at 14 Å. The SHAKE algorithm was applied to constrain all bonds involving hydrogen atoms. Three independent 100-ns MD simulations were performed at constant pressure (1 bar) and constant temperature (300 K) with a 2 fs time step for each of the A/T and G/C flank GBSs. During these simulations, pressure and temperature were kept constant using a Langevin dynamics barostat and a Nosé–Hoover Langevin thermostat. The terminal base pairs of the DNA fragments were restrained harmonically. One simulation run was further prolonged to 300 ns for both the A/T- and G/C-flanked systems.

Data availability
Microarray (E-GEOD-38971) and ChIP-seq (E-MTAB-2731) data are deposited in the ArrayExpress repository. All other data are available from the corresponding authors upon request.

Additional information
How to cite this article: Schöne, S. et al. Sequences flanking the core-binding site modulate glucocorticoid receptor structure and activity. Nat. Commun. 7:12621 doi: 10.1038/ncomms12621 (2016).

Change history
22 November 2016: A correction has been published and is appended to both the HTML and PDF versions of this paper. The error has not been fixed in the paper.
Substances known as transcription factors often determine how a cell develops, as well as which proteins it produces and in what quantities. Transcription factors bind to a section of DNA and control how strongly a gene in that section is activated. Scientists had previously assumed that gene activity is controlled by the binding strength and the proximity of the binding site to the gene. Researchers at the Max Planck Institute for Molecular Genetics in Berlin have now discovered that the DNA segment to which a transcription factor binds can assume various spatial arrangements. In doing so, it alters the structure of the transcription factor itself and controls its activity. Neighbouring DNA segments thus have a significant impact on transcription factor shape, thereby modulating the activity of the gene.

For a car to move, it is not enough for a person to sit in the driver's seat: the driver has to start the engine, press the accelerator and engage the transmission. Things work similarly in the cells of our body. Until recently, scientists had suspected that certain proteins simply bind to specific sites on the DNA strand, directing the cell's fate in the process. The closer and more tightly they bind to a gene on the DNA, the more active the gene was thought to be. These proteins, known as transcription factors, control the activity of genes. A team of scientists headed by Sebastiaan Meijsing at the Max Planck Institute for Molecular Genetics has now come to a different conclusion: the researchers discovered that transcription factors can assume various shapes depending on which DNA segment they bind to. "The shape of the bond, in turn, influences whether and how strongly a gene is activated," Meijsing explains. Consequently, transcription factors can bind to DNA segments without affecting a nearby gene. As in the car analogy, the mere presence of a "driver" is evidently not sufficient to set the mechanism in train. Other factors must also be involved in determining how strongly a transcription factor activates a gene.

Glucocorticoid receptor is also a transcription factor
One example is glucose production in the liver. If the blood contains too little glucose, the adrenal glands release glucocorticoids, which act as chemical messengers. These hormones circulate through the body and bind to glucocorticoid receptors on liver cells. The receptors simultaneously act as transcription factors and regulate gene activity in the cells. In this way, the liver is able to produce more glucose, and the blood sugar level rises again. "Sometimes glucocorticoid receptor binding results in strong activation of neighbouring genes, whereas at other times little if anything changes," Meijsing reports. The scientists found that the composition of the DNA segments to which the receptors bind helps determine how strongly a gene is activated. However, these segments are not in direct contact with the receptors acting as transcription factors; they only flank the binding sites. Yet that is evidently enough to have a significant influence on the interaction. "The structure of the interface between the transcription factor and genome segments must therefore play a key role in determining gene activity. In addition, adjacent DNA segments influence the activity of the bound transcription factors. These mechanisms ultimately ensure that liver cells produce the right substances in the right amounts," Meijsing says.

Medical applications
The findings could also have medical applications.
Many DNA variants associated with diseases belong to sequences that evidently control the activity of transcription factors. "Scientists had previously assumed that these segments exert an effect by inhibiting the binding of transcription factors, thus impeding the activity of neighbouring genes," Meijsing says. "Our findings have now shown that some of these segments may not influence the contact directly but nevertheless reduce the activation state of the associated transcription factor."
10.1038/ncomms12621
Other
Research: A country's degree of gender equality can affect men's ability to recognize famous female faces
Maruti V. Mishra et al. Gender Differences in Familiar Face Recognition and the Influence of Sociocultural Gender Inequality, Scientific Reports (2019). DOI: 10.1038/s41598-019-54074-5 Rankin W. McGugin et al. Race-Specific Perceptual Discrimination Improvement Following Short Individuation Training With Faces, Cognitive Science (2010). DOI: 10.1111/j.1551-6709.2010.01148.x Journal information: Scientific Reports, Cognitive Science
http://dx.doi.org/10.1038/s41598-019-54074-5
https://phys.org/news/2019-12-country-degree-gender-equality-affectmen.html
Abstract
Are gender differences in face recognition influenced by familiarity and socio-cultural factors? Previous studies have reported gender differences in processing unfamiliar faces, consistently finding a female advantage and a female own-gender bias. However, researchers have recently highlighted that unfamiliar faces are processed less efficiently than familiar faces, which have more robust, invariant representations. To date, no study has examined whether gender differences exist for familiar face recognition. The current study addressed this by using a famous faces task in a large, web-based sample of more than 2,000 participants across different countries. We also sought to examine whether differences varied with socio-cultural gender equality within countries. When examining raw accuracy, as well as when controlling for fame, the results demonstrated no participant gender differences in overall famous face accuracy, in contrast to studies of unfamiliar faces. There was also a consistent own-gender bias in male but not female participants. In countries with low gender equality, including the USA, females showed significantly better recognition of famous female faces than male participants, whereas this difference was abolished in countries with high gender equality. Together, this suggests that gender differences in recognizing unfamiliar faces can be attenuated when there is enough face learning, and that sociocultural gender equality can drive gender differences in familiar face recognition.

Introduction
Gender differences in cognitive performance, and their origins, have important implications for models of cognitive abilities as well as for society. Consistent gender differences have been reported in visuospatial tasks such as mental rotation 1 , visual working memory 2 , visual motion processing 3 , sustained attention 4 , emotion recognition 5 , face recognition 6 , and episodic memory recollection 7 , with females showing superior performance in most of these tasks, except for visuospatial attention tasks, where males perform better. Though it is debated whether these differences are driven by biological or socio-cultural factors 8 , 9 , many studies emphasize the impact of the latter 10 , 11 , 12 , 13 , 14 . The aims of the current study were twofold: first, we sought to extend the study of gender differences in face recognition beyond "unfamiliar" face recognition (the rapid learning of previously unfamiliar faces) to "familiar" face recognition (recognizing faces for which one has semantic knowledge and previous exposure). Second, we used a large, multi-country sample to probe for any modulation of gender differences by socio-cultural gender equality. Previous studies on gender differences in face processing have focused on the perception and recognition of unfamiliar faces. These differences were observed specifically in within-task learning and recognition paradigms 15 , 16 , 17 or simultaneous perceptual matching paradigms 6 , 18 , 19 , with females showing better performance than males. Further, superior recognition of unfamiliar faces in females has been shown to be highly robust and invariant to face view 20 , gaze direction 21 , face race 22 , 23 and duration of presentation 15 , 24 . Studies have also reported own-gender biases, with females being consistently better at recognizing female than male faces 6 , 24 , 25 ; a male own-gender bias has been reported less consistently 26 , 27 .
These effects were also supported by multiple eye-movement 28 and electrophysiological studies 26 , 29 , 30 . Notably, two recent studies suggest that female superiority in face recognition can be reduced when there is sufficient face learning 31 or prior experience 32 with the faces or face categories used. For example, Heisz et al. 31 conducted a four-day recognition study of unfamiliar faces, in which the faces were repeated each day, and showed that the female advantage in response accuracy on the first day was eliminated by the fourth day through repeated face learning. Despite the extensive literature on gender differences in learning and recognizing unfamiliar faces, no study to date has closely examined gender differences in recognizing familiar faces. Though unfamiliar face stimuli are easier to manipulate and control in laboratory settings, in real-world situations we are typically required to identify familiar faces that are learned over many instances and for whom detailed semantic knowledge is available. Because of this enhanced learning, familiar faces have been shown to be processed more efficiently than unfamiliar faces, reflected in faster and more accurate recognition 33 , 34 , 35 . For example, severe image degradation and image distortion have very little effect on the ability to recognize familiar faces, whereas they severely disrupt recognition of unfamiliar faces 36 , 37 . To study the role of familiarity in face recognition, a common approach has been to recall the identity of famous faces. The recollection of semantic (e.g., name, profession) and/or episodic information required by these tasks is quite different from the typical matching and recognition tasks used for unfamiliar faces. In particular, most unfamiliar face recognition tasks do not present semantic information along with the face (though see Sperling et al. 38 ), and recognition judgments may rely more on 'familiarity', i.e., a feeling of knowing, rather than on recollecting specific contextual and semantic details 39 , 40 , 41 . Further, the extent or degree of familiarity also depends on the frequency of prior exposure and subsequent learning. Previous famous-face recognition studies 42 , 43 , 44 have not reported or examined gender differences. Famous face recognition has been shown to involve processing distinct from that of unfamiliar faces 34 , 45 , including extended face learning through repeated exposure, acquisition of the semantic and episodic knowledge associated with the face, and greater reliance on recollection than on familiarity 39 , 46 . Given these processing differences between unfamiliar and familiar faces, it is essential to understand to what extent previous theories supporting female superiority in unfamiliar face recognition generalize, and how they are influenced by face learning and familiarity. Socio-cultural factors such as ethnicity and in-group/out-group effects have also been shown to influence face processing, but there have been limited investigations of how they contribute to gender differences in face recognition performance 47 . Previous studies have examined how socio-cultural gender equality affects gender differences in mathematics performance 48 , episodic memory 10 , and attention 4 . Further, these differences have been reported to depend on the degree of gender equality at the country level 4 , 11 , 14 , 48 . For example, Riley et al. 4 reported greater gender differences in sustained attentional control in countries with low gender equality, in comparison to countries with high equality.
Notably, these effects were driven primarily by changes across countries in female rather than male participants. Whether and how socio-cultural factors influence gender differences in familiar face recognition has not been addressed previously and was one of the motivations of the current study. To answer these questions, we used a web-based online study that allowed us to measure face recognition across a large sample spanning different countries. Given the potential influence of the degree of fame of the celebrities (fame scores) on recognition accuracy, we also examined accuracy after regressing out fame. Further, previous studies have reported an episodic and recognition memory advantage in females in general 31 , 49 and specifically in unfamiliar face recognition 22 , which may predict a female advantage at recollecting semantic information associated with famous faces. Thus, based on studies of unfamiliar face recognition and the female advantage in episodic memory, we expected to observe gender differences in famous face recognition, with superior performance by females in overall face recognition ability and an own-gender bias that is stronger for female than for male participants. Considering that gender equality across countries relates to cognitive abilities and strategic biases 4 , 11 , 12 , 14 , we also sought to explore whether different levels of gender equality across countries would moderate gender differences in famous face recognition. Based on previous studies 4 , we hypothesized that greater gender equality in a country would be associated with reduced gender differences and reduced own-gender biases in famous face recognition. Alternatively, it is also possible that, because famous faces are highly familiar, we might not observe gender differences 31 or a gender-based interaction.

General Methods
Participants
Participants voluntarily visited the TestMyBrain.org website, which was openly available to anyone, during 2014–2015. A total of 2,770 participants were included in this study (age range = 18–50 years). For each analysis section below, we provide separate details about participants' age and gender. Before starting the test, participants provided online informed consent in English, irrespective of their language or country of origin. Each volunteer was given a unique electronic ID, and the IP address of the computer from which they ran the task was recorded to identify the country where the task was performed. Participants were provided with individualized feedback after completing the task. All experiments followed the guidelines approved by the Institutional Ethics Committee on the Use of Human Subjects at Harvard University. TestMyBrain.org is a citizen-science website that people can visit voluntarily to participate in a variety of neurocognitive tasks in exchange for personalized feedback. Data from TestMyBrain.org have been shown to be of comparably high quality and reliability to data gathered in a laboratory setting 50 and have been used extensively to study population dynamics across various cognitive, perceptual and neuropsychological tests and experiments 4 , 51 , 52 , 53 .
Stimuli and Design
The stimuli consisted of 69 front-view faces of famous celebrities taken from Google Images advanced searches (publicly available and free to use, share or modify as described in the usage rights at the Google Images advanced search database, under the CC-BY-SA-3.0 license) and were distributed across three famous face tests (FFMT1, 27 faces; FFMT2, 40 faces; FFMT3, 26 faces). The faces were cropped to remove extra-facial features such as hair, ears and the area below the jawline. The study was designed as a web-based task that could be run on a PC, desktop, iPad or mobile phone. On small-screen devices, participants were instructed to rotate the screen to the maximum display width. Accordingly, the face images were scaled up or down with the size of the screen while maintaining their aspect ratio, keeping the relative image size constant. The visual angle of all face images was approximately 5.5° × 7°. The faces belonged to people from various professions, including actors/actresses, politicians, musicians and sports personalities (for the list of faces used, see Table S1, Supplementary materials). An independent t-test on the age of the faces, calculated from each celebrity's date of birth, showed that the famous males (n = 43, M age = 56.23, SD = 15.77) were significantly older than the famous females (n = 26, M age = 42.35, SD = 14.66) (t(67) = 3.64, p < 0.001, Cohen's d = 0.90).

Procedure
For each face presented at the center of the screen (Fig. 1), participants were asked to make their best guess about the identity of the person by typing in the box provided and clicking 'submit', or to click a button that said "I don't know". For example, if the face shown was of Tom Cruise and they could not remember the name but typed that he was the "Top Gun actor" or "actor Cruise", they were instructed to self-score their response as correct (Fig. 1d). After they entered a response, the correct name of the person was displayed on the screen and they were prompted to click on one of the following: "I got it right"; "I got it wrong and I am familiar with this person"; or "I got it wrong and I am not familiar with this person". If the participant did not enter a guess but left the answer field blank (Fig. 1b) and chose the option "I do not know", they were provided with the answer on the next screen and asked to choose either "I am familiar with this person" or "I am not familiar with this person". The experiment took approximately 10–15 minutes. There were three famous face tests (FFMT1, 27 faces; FFMT2, 40 faces; FFMT3, 26 faces) and test assignment was randomized across participants. Within a test, each face was presented only once. The task was identical across the three versions of the test. Each version included a different subset of the total pool of 69 faces, with 24 faces co-occurring in two versions of the test but never repeated within a test or within a participant. This co-occurrence was due primarily to one test (FFMT1) being an earlier version of another test (FFMT2). There were no significant differences in either fame scores (p = 0.44) or accuracy (p = 0.33) between stimuli that co-occurred in two tests versus one test. After the task, participants were asked to provide demographic information, such as ethnicity and education. Feedback was then provided about their performance on the test.

Figure 1: Trial structure for the famous faces recognition task. Examples of the four types (a–d) of possible trial structure in the experiment.
The possible choices made by the participant are highlighted with a red box. (a) A single face (representative image) is shown at the center of the screen (first row) with a response box and two choices. Once participants respond, the second screen (second row) displays the choices. Once they select the required option, the next image ((b), first row) is displayed on the screen. The task was self-paced. (The modified image was adapted from a publicly available source under the CC-BY-SA-3.0 license.)

Analysis approach
Data preprocessing
Given that the data were obtained through participants' self-scoring, we screened the data before analysis for three types of erroneous trials, where: 1) a correct answer was typed but the participant scored themselves as 'incorrect'; 2) an incorrect answer was typed but the participant scored themselves as 'correct'; and 3) no response was typed but the participant still scored themselves as 'correct'. If more than 50% of a participant's trials showed any of these patterns, the entire case was removed from further analysis (2.71% of participants). For participants with fewer than 50% of such trials, we eliminated only those trials (3% of trials) rather than the participants. As we were interested in assessing participants' recognition of only those faces they had been exposed to and were familiar with, we removed face items for which participants indicated they were 'not familiar with this person'. Removal of these trials is consistent with numerous studies that have used famous faces to diagnose face recognition deficits 55 . The total percentage of trials used (familiar trials only) in each test is provided in supplementary Fig. S7. During prescreening of the raw data, we did not treat spelling or typing errors as a rejection criterion: we favor the self-scoring method described above because it allows misspellings of the correct answer to be scored as correct and thus produces accurate face recognition responses 54 .

Statistical analysis
To increase power, we collapsed the data across the three famous face tests. Before doing so, we confirmed that the three tests did not significantly differ in their participant gender × face gender interactions (three-way interaction F(1, 2122) = 0.84, p = 0.43). Accordingly, we collapsed the data across tests, and our main statistical approach was a two-way mixed ANOVA, with participant gender as a between-subject variable and face gender as a within-subject variable. Statistical significance was examined at the alpha = 0.05 level. As we were interested in comparing participants' responses on male and female faces separately, Bonferroni-corrected planned comparisons were performed for any significant interaction effect. Effect sizes are reported as partial eta squared (ηp²) for F-tests and Cohen's d for t-tests. These analyses were executed using the open-source statistical software JASP 0.9.2.0, and planned comparisons were done using the online GraphPad post-test calculator.

Fame normalization approach
It is plausible that the famous people whose faces were used in this study differed in their frequency of media exposure, or that famous males might be generally more famous than famous females due to certain sociocultural factors.
Additionally, we found that the famous males were significantly older than the famous females, suggesting that participants may have had more exposure to famous males than to famous females. To account for bias arising from either of these factors, in addition to the standard raw-accuracy analyses we also calculated accuracy after controlling for fame. Recently, using a sophisticated computational meta-analysis 56 , various famous figures in history have been ranked and given fame scores, which have been applied successfully in studies using large datasets 57 , 58 , 59 . Using the fame scores for the celebrities in our study, we first tested for fame differences and found that the fame scores differed significantly between famous males and females: an independent t-test showed that famous males (M = 5.51, SD = 0.99) had significantly higher fame scores (t(67) = 2.66, p = 0.010, d = 0.66) than famous females (M = 4.89, SD = 0.82). Further, fame scores and celebrity age correlated significantly (r = 0.33), suggesting that older celebrities are more famous. It should be noted that, even after regressing out fame, there remains leftover variation, in how distinctive a face is, how typical the image is of the person, and so on, that could explain variability in accuracy. We normalized for fame by calculating residual scores. For this, we correlated the identification accuracy of the famous faces with their fame scores (Supplementary Table S1), and the identification accuracy correlated significantly with fame (Supplementary Figs. S1–S3, scatter plots (c) and (d)). The mean accuracy score for each face (scores for any duplicate faces were averaged) across all three tests was plotted as a function of fame score 56 , and the resulting linear regression equation (Fig. S1(d)) was used to calculate a predicted score for each individual participant, since each participant showed variability in face categorization accuracy. For each participant, fame scores were used only for those faces that were either correctly recognized (score of '1') or familiar but not recognized (score of '0'). These were then averaged separately by face gender to obtain gender-specific fame-normalized predicted values. The average response for male and female faces from each participant was then subtracted from the corresponding predicted values to obtain residual scores for the two face genders, which we used in the statistical analyses. By this procedure, we attempted to remove the effect of fame from the accuracy scores. This normalization was done separately for the different country groups, using their respective face identification accuracy scores. Again, for reference purposes, we also report the fame-normalized analysis for all trials (familiar and unfamiliar), which is not part of our main analysis, in the supplementary section (Figs. S1–S3, bar plot (f)).
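To make this residualization concrete, here is a minimal sketch, assuming a long-format table of familiar trials with hypothetical column names (the published analyses were run in JASP and GraphPad, not in Python):

```python
import numpy as np
import pandas as pd

# Hypothetical input: one row per familiar trial, with columns
# participant, face, face_gender ('male'/'female'), fame (ref. 56 score)
# and correct (1 = recognized, 0 = familiar but not recognized).
trials = pd.read_csv("familiar_trials.csv")

# Regression of per-face mean accuracy on fame score (cf. Fig. S1(d)).
per_face = trials.groupby("face").agg(acc=("correct", "mean"),
                                      fame=("fame", "first"))
slope, intercept = np.polyfit(per_face["fame"], per_face["acc"], 1)

# Observed and fame-predicted accuracy per participant and face gender,
# averaged over the familiar faces each participant saw.
trials["predicted"] = intercept + slope * trials["fame"]
per_cell = trials.groupby(["participant", "face_gender"]).agg(
    observed=("correct", "mean"), predicted=("predicted", "mean"))

# Residual score entered into the mixed ANOVAs (observed accuracy
# subtracted from the fame-predicted value, as described above).
per_cell["residual"] = per_cell["predicted"] - per_cell["observed"]
```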
Additional supplementary analyses
Though we did not use trials on which the famous face was unfamiliar to a given participant, for reference purposes we report the raw analyses using all face trials, irrespective of whether they were familiar or unfamiliar to each participant, in the supplementary materials (see Figs. S1–S3, (e) and (f) bar plots). We have also provided the percentage of trials used in each test for each country to calculate face recognition accuracy (Fig. S7). We also provide the original data and graphs for the proportion of familiar faces (used in the main analysis to calculate the proportion of correct responses) and unfamiliar faces across all trials, for each face gender and participant gender (Supplementary Figs. S1–S3, bar plots (a) and (b)), along with the distribution of individual participant response scores (Supplementary Figs. S4–S6) for all three country groups, using sinaplots in R.

Results
Analysis 1: Gender differences in famous face recognition among the USA sample
Our primary objective here was to understand whether there are gender differences and own-gender biases in the recognition of male and female famous faces. We selected USA adults aged 18–50 years because most of the faces were US celebrities and because this age range is when face recognition is typically at its best 60 .

Participants
A total of 2,295 USA adults (age range = 18 to 50 years) were included in this analysis. After the data-preprocessing exclusions and removal of participants with no responses on any trial, the total number of analyzed participants was 2,128 (FFMT1: 238 males and 494 females; FFMT2: 255 males and 466 females; FFMT3: 217 males and 458 females). Overall there were 710 males (M age = 29.49, SD = 9.24) and 1,418 females (M age = 29.49, SD = 9.76), with very similar age distributions.

Results
A two-way mixed ANOVA on face recognition accuracy with participant gender and famous face gender (Fig. 2a, Table 1) showed a main effect of face gender, F(1, 2126) = 238.4, p < 0.001, ηp² = 0.10, with male famous faces recognized more accurately than female faces. Participant gender showed no main effect, F(1, 2126) = 3.17, p = 0.08, but there was a significant participant gender × face gender interaction, F(1, 2126) = 142.0, p < 0.001, ηp² = 0.06. Importantly, for famous female faces, planned comparisons showed that female participants performed significantly better (mean difference = −0.07, 95% CI [−0.08, −0.054], t(2126) = 11.87, d = 0.26) than males. Conversely, male participants performed significantly better (mean difference = 0.03, 95% CI [0.015, 0.045], t(2126) = 5.16, d = 0.124) at recognizing famous male faces. Further, we calculated an own-gender bias for male participants by subtracting recognition performance on female faces from that on male faces, and analogously for female participants (female minus male face performance). A significant own-gender bias was observed only for male participants (mean difference = 0.114, 95% CI [0.097, 0.13], t(2126) = 16.98, d = 0.45), but not for female participants.

Figure 2: USA face recognition accuracy. Bar plot of accuracy scores for famous faces: (a) raw scores, (b) fame-normalized values. Error bars represent the standard error of the mean. *p < 0.05.

Table 1: Mean values for all three country groups.

Fame normalization (see Methods, Fig. 2b, Supplementary Fig. S1d) showed similar results. In particular, a two-way mixed ANOVA found a main effect of face gender, F(1, 2126) = 98.70, p < 0.001, ηp² = 0.044, and a significant interaction, F(1, 2126) = 142.67, p < 0.001, ηp² = 0.063, but no effect of participant gender, F(1, 2126) = 3.38, p = 0.07. Post-hoc comparisons (Table 1) revealed that female participants had significantly better recognition accuracy for famous female faces (mean difference = −0.07, 95% CI [−0.085, −0.06], t(2126) = 12.04, d = 0.27) than male participants.
In contrast, male participants showed significantly better recognition accuracy for famous male faces (mean difference = 0.03, 95% CI [0.014, 0.044], t(2126) = 4.99, d = 0.12), though this effect was smaller in magnitude. Again, a significant own-gender bias was shown for male participants (mean difference = 0.09, 95% CI [0.074, 0.108], t(2126) = 13.56, d = 0.36) but not for female participants (mean difference = −0.008, 95% CI [−0.020, 0.004], t(2126) = 1.68).

Result summary
In contrast to findings for unfamiliar face recognition 6 , 24 , our results demonstrated no significant overall accuracy difference between male and female participants, and a significant own-gender advantage only in male participants. We did find an accuracy advantage for recognizing famous male versus female faces, even after regressing out fame. Interestingly, the male versus female participant difference was much larger when recognizing famous female faces (Cohen's d = 0.26) than when recognizing famous male faces (Cohen's d = 0.12). In the next set of analyses, we sought to examine potential cultural effects on these gender differences, focusing on countries with greater gender equality (Analysis 2a) and lower gender equality (Analysis 2b) compared to the USA sample. The criteria used to select the countries in the two groups were based on a previous study from our laboratory 4 . We were interested in whether the greater male versus female participant difference for female faces compared to male faces was due to cognitive biases arising from moderate gender inequality in the USA. In other words, cultural and institutional gender inequality in America (e.g., males holding more positions of power than females) could have led male participants to be biased to 'attend to' and 'individuate' male faces more than female faces, resulting in reduced performance on famous female faces.

Analysis 2: Does the gender-based difference depend on the gender inequality existing in different cultural societies?
We wanted to explore whether cultural factors were influencing the observed gender differences in famous face recognition performance. To address this, we compared the results from countries with higher levels of gender equality than the USA (Norway, Finland, Sweden, Denmark and the Netherlands, according to the United Nations Gender Inequality Index (GII)) with those from countries with lower gender equality than the USA (India, Pakistan, Brazil, Egypt and Indonesia). The higher the GII, the higher the gender inequality in a country. The average GII in 2014–15 was 0.05 for the high gender-equality countries, 0.21 for the USA and 0.49 for the low gender-equality countries (a minimal grouping sketch follows below). We hypothesized reduced male versus female participant accuracy differences and reduced own-gender biases in countries with higher sociocultural gender equality, and greater gender differences in accuracy and greater own-gender biases in countries with lower gender equality.

Analysis 2a: Countries with high gender equality
Participants
We selected the Scandinavian and Northern European countries (Sweden (n = 59), Denmark (n = 20), the Netherlands (n = 43), Norway (n = 39) and Finland (n = 25)) from our dataset, which have previously been shown to have the highest gender equality 4 .
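As an aside, the country-to-group assignment used across these analyses can be sketched as follows, using the country lists and 2014–15 GII averages given above (participant countries come from the IP-derived records described in the Methods; the function name and group labels are hypothetical):

```python
# Country groups and mean 2014-15 Gender Inequality Index (GII) values
# as described above; a higher GII means higher gender inequality.
HIGH_EQUALITY = {"Sweden", "Denmark", "Netherlands", "Norway", "Finland"}  # mean GII ~ 0.05
LOW_EQUALITY = {"India", "Pakistan", "Brazil", "Egypt", "Indonesia"}       # mean GII ~ 0.49
USA_GII = 0.21

def equality_group(country: str) -> str:
    """Assign a participant's country to one of the three analysis groups."""
    if country == "USA":
        return "usa"
    if country in HIGH_EQUALITY:
        return "high_equality"
    if country in LOW_EQUALITY:
        return "low_equality"
    return "other"  # countries not included in these analyses
```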
We grouped the data from these five countries to achieve enough power to detect effect sizes similar to those in the USA analysis. A total of 203 adults (18–50 years) from these countries were included in this analysis. After data screening, there were 183 participants (FFMT1: 24 males and 35 females; FFMT2: 23 males and 37 females; FFMT3: 29 males and 35 females), with a total of 76 males (M age = 30.88, SD = 7.91) and 107 females (M age = 30.96, SD = 9.67) of similar ages. To calculate the fame-normalized residual scores, we used the same procedure as in the previous analysis, except that we used the regression equation (Fig. S2(d)) from this dataset to predict accuracy from fame scores. This was because the fame-to-accuracy relationship may differ slightly for these countries, owing to less exposure to American celebrities than in the USA sample.

Results
A two-way mixed ANOVA on accuracy (Fig. 3a) showed a main effect of face gender, F(1, 181) = 63.02, p < 0.001, ηp² = 0.26, with famous male faces recognized more accurately than famous female faces (Table 1). Though we did not find a significant main effect of participant gender, F(1, 181) = 0.193, p = 0.66, there was a significant interaction, F(1, 181) = 9.68, p = 0.002, ηp² = 0.05. In contrast to the USA sample, there was no significant difference (mean difference = −0.027, 95% CI [−0.07, 0.02], t(181) = 1.31) between male and female participants in recognizing famous female faces. However, for famous male faces, a significant difference was observed between male and female participants (mean difference = 0.064, 95% CI [0.02, 0.12], t(181) = 3.10, d = 0.23). We also observed a significant own-gender bias for male participants (mean difference = 0.162, 95% CI [0.11, 0.22], t(181) = 7.245, d = 0.54), with males performing better on male than on female faces, but no own-gender bias for female participants.

Figure 3: High gender-equality countries, face recognition accuracy. Bar plot of accuracy scores for (a) raw famous face accuracy and (b) fame-normalized accuracy. Error bars represent the standard error of the mean. *p < 0.05.

Normalizing for fame did not change the results (Fig. 3b, Table 1). There was a main effect of face gender, F(1, 181) = 35.99, p < 0.001, ηp² = 0.17, and a significant interaction, F(1, 181) = 11.00, p = 0.001, ηp² = 0.06, but no effect of participant gender, F(1, 181) = 0.16, p = 0.69. Importantly, planned comparisons for famous female faces showed no significant difference between male and female participants (mean difference = −0.03, 95% CI [−0.08, 0.02], t(181) = 1.49). However, there was a significant difference between males and females for famous male faces (mean difference = 0.07, 95% CI [0.02, 0.11], t(181) = 3.19, d = 0.24). There was also a significant own-gender bias for male participants (mean difference = 0.136, 95% CI [0.08, 0.19], t(181) = 6.08, d = 0.45) but not for female participants.

Result summary
Our analyses of these countries, which are reported to have very high socio-cultural gender equality, showed a largely similar pattern to the USA sample. There were no overall participant gender differences in accuracy, participants performed better on male than on female faces, and there was an own-gender bias only for male participants. Interestingly, in contrast to the USA sample, there were no gender differences in recognizing famous female faces.
This suggests that greater gender equality in a country may lead to more similar male and female performance in famous female face recognition. Though it is unclear why there was not a similar reduction in gender differences for famous male faces, we sought to investigate this further by examining performance in countries with low gender equality. We predicted significant differences between male and female participants in female face recognition.

Analysis 2b: Countries with the lowest gender equality
Participants
We next selected five countries with the lowest gender equality as reported previously 4 : India (n = 205), Brazil (n = 23), Egypt (n = 10), Pakistan (n = 19) and Indonesia (n = 31), and combined them to provide a sufficient sample size. A total of 275 adult participants (18–50 years) from these countries were included. After data prescreening, there were 234 participants (143 males and 91 females: FFMT1, 63 males, 37 females; FFMT2, 36 males, 25 females; FFMT3, 44 males, 29 females), with very similar ages (males, M = 26.87, SD = 7.15; females, M = 25.75, SD = 6.94).

Results
The two-way ANOVA on accuracy (Fig. 4a) showed a main effect of face gender, F(1, 232) = 5.646, p = 0.018, ηp² = 0.021, with male faces recognized more accurately than female faces (mean difference = 0.05, SE = 0.015, t = 3.44, d = 0.23, p < 0.001). There was no main effect of participant gender, F(1, 232) = 0.14, p = 0.71, but again a significant interaction was observed, F(1, 232) = 27.18, p < 0.001, ηp² = 0.11. Planned comparisons showed better performance by female participants in recognizing female faces (mean difference = −0.09, 95% CI [−0.14, −0.04], t(232) = 4.53, d = 0.25), while males performed significantly better at recognizing male faces (mean difference = 0.06, 95% CI [0.008, 0.108], t(232) = 2.92, d = 0.18). Further, a significant own-gender bias was observed for male participants (mean difference = 0.108, 95% CI [0.064, 0.152], t(232) = 6.16, d = 0.33) but not for female participants.

Figure 4: Low gender-equality countries, face recognition accuracy. Bar plot of accuracy scores for famous faces: (a) raw scores, (b) fame-normalized values. Error bars represent the standard error of the mean. *p < 0.05.

Normalizing for fame (Fig. 4b, Fig. S3(d), Table 1) showed no main effect of face gender, F(1, 232) = 0.122, p = 0.73, or participant gender, F(1, 232) = 0.25, p = 0.62, but a significant interaction between the two, F(1, 232) = 26.138, p < 0.001, ηp² = 0.10. As predicted, planned comparisons revealed that female participants performed significantly better than males at recognizing famous female faces (mean difference = −0.094, 95% CI [−0.14, −0.04], t(232) = 4.73, d = 0.26). No significant difference was observed between male and female participants for recognizing male faces (mean difference = 0.05, 95% CI [0.00, 0.10], t(232) = 2.5). Additionally, here an own-gender bias was observed for both male (mean difference = 0.067, 95% CI [0.023, 0.11], t(232) = 3.82, d = 0.21) and female participants (mean difference = −0.077, 95% CI [−0.132, −0.022], t(232) = 3.50, d = 0.22).

Results summary
Again, the raw accuracy analyses showed results similar to the USA sample: male faces were recognized more accurately than female famous faces, and again only males showed an own-gender bias in face recognition.
Further, as predicted, the countries with lower sociocultural gender equality showed significant and pronounced gender differences in famous female face recognition (d = 0.25), similar to the USA (d = 0.26), in both the raw and fame-regressed analyses. This contrasts with the results from the high gender-equality countries, where we observed no gender differences in famous female face recognition.

Analysis 3: Comparing performance between the USA, high gender-equality and low gender-equality countries
We next sought to determine whether the patterns observed across the different country groups differed significantly (see Fig. 5). It should be noted that this comparison is not ideal, because our main dependent measure of interest, fame-normalized accuracy, cannot be compared directly across cultures, as separate fame-versus-accuracy regression equations were used. Still, we were able to examine overall raw accuracy. We first performed a three-way mixed ANOVA, 3 (country) × 2 (participant gender) × 2 (face gender), which showed main effects of face gender, F(1, 2539) = 126.99, p < 0.001, ηp² = 0.05, and country, F(1, 2539) = 62.454, p < 0.001, ηp² = 0.05, but not of participant gender. There was only a non-significant trend towards a three-way interaction, F(1, 2539) = 1.842, p = 0.159, suggesting that the overall accuracy pattern did not differ significantly across countries.

Figure 5: Cross-country comparison of famous face recognition. Raw accuracy values for familiar trials plotted against country group for (a) male participants and (b) female participants, showing more variable face recognition accuracy in females than in males across country groups. Error bars represent SEM.

Interactions across countries were also examined separately for each participant gender, since previous studies 4 , 10 , 12 , 14 , 48 suggested that factors such as lower employment, gender inequality and unequal opportunity for females in a given society affect performance in female participants more than in males. Here, the two-way ANOVA, country (3) × face gender (2), for male participants showed a main effect of face gender (own-gender bias), F(2, 926) = 176.05, p < 0.001, ηp² = 0.16, but only a trend towards a significant interaction between face gender and country, F(2, 926) = 2.3, p = 0.097. The ANOVA for female participants, by contrast, did not show a main effect of face gender, F(2, 1613) = 3.14, p = 0.076, but did show a significant interaction between face gender and country, F(2, 1613) = 9.801, p < 0.001, ηp² = 0.012. These results suggest that female participants performed better on male faces than on female faces in both the USA and the high gender-equality countries, while this pattern was reversed in the low gender-equality countries, where female participants showed an own-gender bias (Fig. 5). This suggests that, for familiar face recognition, socio-cultural gender equality particularly affects accuracy in female participants.

General Discussion
Gender differences in face recognition have previously been reported only with unfamiliar faces, where identity recognition depends on short-term learning and familiarity matching. However, it was unclear whether these differences are present for well-learned familiar or famous faces. Previous studies have also never investigated sociocultural influences (e.g., gender equality in a country) on gender differences in face recognition.
We investigated these outstanding questions by having a large web-based sample (N > 2,000) of participants from countries with differing levels of gender equality perform male and female famous face recognition. Our results show three important findings: a) across all countries there were no overall significant participant gender differences in famous face recognition, and faces of famous males were generally recognized better than those of famous females; b) we observed significant own-gender biases for male but not female participants; and c) gender equality across countries significantly affected performance on famous female faces, with less of a difference between male and female participants in high gender-equality countries than in low gender-equality countries. These findings have important implications for models of gender differences in face recognition as well as for sociocultural effects on cognition. Contrary to previous studies of unfamiliar faces 27 , 61 , for familiar face recognition we did not find any evidence of overall accuracy differences between male and female participants. Thus, though past studies show that males generally perform worse than females with unfamiliar faces and may be slower to learn faces, once they have learned a face they are able to identify it as accurately as females. This suggests that specialized mechanisms for efficient, robust identification of familiar faces are equally engaged by males and females. Our findings are consistent with a recent eye-tracking study that used multiple exposures to faces and showed that an initial female-over-male recognition advantage for unfamiliar faces was abolished as the faces were learned over a period of four days 31 . Our results extend these findings and show that prior experience and learning reduce gender differences in face recognition. The similar familiar face recognition performance of males and females is also consistent with the observation of a similar incidence of developmental prosopagnosia in males and females 62 , a condition that is often diagnosed by deficits in familiar face recognition.

Why is there an advantage in recognizing famous male versus famous female faces?
We consistently found, across all countries, an advantage for recognizing famous male versus famous female faces. Our findings did not change after regressing out fame in the USA and the high gender-equality countries, though in the low gender-equality countries there was no male/female residual accuracy difference. Our results contradict previous findings from unfamiliar face recognition studies 6 . In fact, in one study 24 in which cropped and full unfamiliar faces were used, an advantage for recognizing female faces was observed; this was driven by females being better at female faces, while males performed equally well on male and female faces. One possible explanation is that, in the current study, the famous males had more exposure in the media, and thus to the participants, leading to effects of prior experience 32 , 35 that might account for their better identification accuracy. Related to this overall famous-male-face advantage, we also found a consistent own-gender bias only in male participants and not in females, though both male and female participants showed own-gender biases in the low-equality countries after regressing out fame. Further, females performed equally well on both male and female faces.
These results are again opposite to those observed in unfamiliar face recognition studies, which report a stronger own-gender bias in females 6 , 15 , 17 , 22 , 23 , 24 , 27 . Apart from the fact that those studies used unfamiliar faces, an own-gender bias for familiar or famous faces has never previously been reported. Though it is likely that the male own-gender bias in our study was driven by the main effect of participants performing better overall on famous male faces, additional studies would be useful to confirm this finding.

Does socio-cultural gender inequality modulate gender differences in face recognition?
A novel finding from our study is that sociocultural gender equality does affect face recognition, but only for famous female faces. Specifically, we found that, in recognizing famous female faces, male participants were substantially worse than female participants in the USA and the lowest gender-equality countries, while there were no participant gender differences in the countries with high gender equality. This pattern of results remained even after controlling for fame in each analysis. Interestingly, we did not find an effect of cultural gender equality on famous male face accuracy, with all cultures showing a similar pattern of male participants outperforming female participants. This finding fits with previous research showing that culture can differentially affect cognitive processes 63 , 64 , 65 , with differential performance observed depending on the socio-cultural background of the participants. Further, the cross-country analysis showed that female participants varied significantly in famous face recognition across countries, while the performance of males was relatively stable irrespective of cultural background. A possible explanation for the observed sociocultural effects is the different gender roles in the social structure of societies. For example, in certain countries (such as India, Bangladesh or Egypt) males more often go out and participate in larger social networks, while females participate in smaller social networks and are mostly indoors. This may lead to differences in perceptual learning experiences 18 , 66 . It could also be that in such societies, compared to countries with higher equality, female faces are more outgroup stimuli to males and ingroup stimuli to females, which may lead male participants to individuate female faces less than female participants do. Though plausible for lower-equality countries, this explanation cannot account for the USA results, as both males and females there participate equally in large social groups. Another explanation for the cultural effect on female face recognition could be that, in lower gender-equality countries and the USA, male participants are biased to process females in a less individuated manner than female participants are. Further, it could be that females from lower gender-equality countries individuate famous females more than men do, while males from these countries have more of a propensity to categorize females rather than individuate them. It is notable that the female face advantage for female participants is present despite our only including trials that participants reported being familiar with. This suggests that all participants had some familiarity with the famous female faces but that, in lower-equality countries and the USA, female participants were better able to recollect individuating information (e.g., name, professional details, etc.)
about the faces compared to male participants. This explanation fits with dual-process accounts of recognition memory 41 , which suggest that judgments are based either on recollection (the retrieval of contextual and semantic details about an item) or on familiarity (the feeling that an item has been experienced previously, without retrieval of additional information). Previous studies examining the other-race effect have found that subjects rely more on recollection for own-race than for other-race faces 67 , which could be driven by more effortful and semantic encoding of own-race faces 68 . Similarly, in countries with lower gender equality and in the USA, female participants may put forth more effort and semantically encode famous female faces more than male participants do. Notably, this effect is abolished in high gender-equality countries, suggesting that male and female participants there encode and retrieve semantic information about famous female faces equally. Though these results are intriguing, they would be more convincing if we had found a similar effect of cultural gender equality on famous male face accuracy. That said, the previous literature on gender differences in cognition has often reported that female participants' performance mostly drives gender differences, and is more affected (improved or reduced) by cultural norms such as labor-force participation, education and employment 10 , 11 , 69 . Our results are consistent with these studies and extend these female-driven differences to face recognition. Though not often applied to the face literature, several theories have been proposed to explain gender disparities arising from cultural differences, such as gender similarity theory 70 , gender stratification theory (lack of equal opportunities for both genders) 71 , and socio-cultural theory 69 . Together, they suggest that the greater the difference in power and status between men and women in a culture, the greater the gender difference in psychological or cognitive domains (e.g., math performance 48 , 69 ). Indeed, along with our study, several other major studies show that gender inequality increases the gap in psychological variables such as math performance 48 and sustained attention 4 , and that this effect is specific to female participants. Though the results of the current study are compelling, there are a few limitations. First, even though we focused on the particular faces participants reported being familiar with, we did not account for the differential degree of prior exposure or semantic knowledge for each face. Another limitation is that some individuals may be better known by their faces and others by their names (e.g., actors versus musicians or historical figures). Additionally, it is likely that other-race effects reduced accuracy in the low gender-equality countries, since most of the faces used were Caucasian. Though it is unclear whether this would bias the results, replicating the study with own-race faces in low-equality countries would be useful. We would also note that our results are limited by the dichotomous (male versus female) classification of gender, rather than treating gender as a continuous spectrum. To conclude, by utilizing a set of famous faces in a large cross-cultural sample, we demonstrate that male and female participants have a similar capacity for familiar face recognition but vary in their attention to and expertise with male and female famous faces.
Results from high gender-equality countries suggest that, encouragingly, sociocultural context can decrease at least some of these gender differences in face recognition. These results help set the stage for future investigations examining the complex interactions between culture, gender, and cognition. Data availability The datasets generated during the current study are available from the corresponding authors on request.
Our ability to recognize faces is a complex interplay of neurobiology, environment and contextual cues. Now a study from Harvard Medical School suggests that country-to-country variations in sociocultural dynamics—notably the degree of gender equality—can yield marked differences in men's and women's ability to recognize famous faces. The findings, published Nov. 29 in Scientific Reports, reveal that men living in countries with high gender equality—Scandinavian and certain Northern European nations—perform nearly as well as women in accurately identifying the faces of female celebrities. Men living in countries with lower gender equality, such as India or Pakistan, fare worse than both their Scandinavian peers and women in their own country in recognizing female celebrities. U.S. males, the study found, fall somewhere in between, a finding that aligns closely with the United States' mid-range score on international metrics of gender equality. The results are based on scores from web-based facial recognition tests of nearly 3,000 participants from the United States and eight other countries and suggest that sociocultural factors can shape the ability to discern individual characteristics over broad categories. They suggest that men living in countries with low gender equality are prone to cognitive "lumping" that obscures individual differences when it comes to recognizing female faces. "Our study suggests that whom we pay attention to appears to be, at least in part, fueled by our culture, and how and whom we choose to categorize varies by the sociocultural context we live in," said study senior investigator Joseph DeGutis, Harvard Medical School assistant professor of psychiatry and a researcher at VA Boston Healthcare System. "Our findings underscore how important social and cultural factors are in shaping our cognition and in influencing whom we recognize and whom we do not," said study first author Maruti Mishra, Harvard Medical School research fellow in psychiatry in DeGutis's lab. "Culture and society have the power to shape how we see the world." The team's findings showed that men living in the United States—a country that ranks midrange on the United Nations' Gender Inequality Index—performed better when asked to identify famous male politicians, actors or athletes than when they were asked to identify famous female politicians, actors or athletes. And they fared worse than women in identifying famous female celebrities. Men from Scandinavian countries, such as Norway, Denmark and Finland—all places with a high level of gender equality—performed equally well in recognizing famous male faces and famous female faces. On the other hand, men living in countries with low gender equality—India, Brazil and Pakistan, among others—performed worse than U.S. men and worse still than Scandinavian men in identifying famous women. The Gender Inequality Index measures a country's gender inequality by taking into account factors such as the status of women's reproductive health, education, economic status, and participation and attainment of high-level positions in the workforce. The index scored the United States in the mid-range in 2014-2015 at 0.21—a higher score denotes a greater degree of gender inequality—compared with 0.05 for Scandinavian countries, and 0.49 for countries such as India, Pakistan or Egypt. Famous faces For the study, the researchers asked 2,773 adults, ages 18 to 50, to look at a series of famous faces online and identify them.
Participants included 2,295 U.S. men and women; 203 men and women from Denmark, the Netherlands, Finland and Norway; and 275 men and women from India, Egypt, Brazil, Pakistan and Indonesia. The celebrity faces were almost exclusively those of U.S. politicians, actors, athletes and performers. To ensure that U.S. participants didn't have an unfair advantage in facial familiarity over their foreign peers, the researchers only analyzed results from international participants who had indicated they were familiar with or had seen the celebrities' faces before. Overall, male celebrity faces were better recognized than female celebrity faces by both men and women, regardless of where they lived. On average, male faces were recognized with 8 percent greater accuracy than female faces. The one notable exception was women from countries with lower gender equality, who performed better at identifying female celebrities than at identifying male celebrities. But the truly intriguing differences emerged when researchers analyzed the accuracy of recognizing famous female celebrities by participant gender. In the U.S. sample, female participants had, on average, 7 percent more accurate scores than their male counterparts in recognizing the faces of famous women. Gender differences were also pronounced among participants from Pakistan, India, Brazil and Egypt. In those countries, women scored, on average, 10 percent higher on female celebrity recognition than men. In contrast, test score differences in recognizing famous women's faces were minuscule (less than 2 percent difference) among participants from the Netherlands, Norway, Finland and Denmark. The researchers say the pronounced own-gender bias among males—the tendency to recognize famous male faces more accurately than famous female faces—is a variation of other forms of perceptual bias that have been documented in past research. For example, research shows that people tend to overlook interpersonal variations in the faces of people from races that differ from their own—the so-called "other race" effect. Another manifestation of this tendency is the bias toward noticing the interpersonal variations in individuals who are higher in the workplace hierarchy while obscuring interpersonal differences among those who rank lower on the work totem pole. The classic example would be forgetting the name or other individual characteristics of a lower-rung coworker or an intern but remembering the name or distinguishing characteristics of someone higher up. "All these biases stem from a tendency to categorize rather than individualize," DeGutis said. Self-awareness is the first step to combating own-gender bias, the researchers said. For example, previous research into the other-race effect suggests that practice at individualizing members of other racial groups rather than lumping them into categories can substantially mitigate the other-race effect. "Own-gender bias is a form of unconscious bias," DeGutis said. "But by becoming aware of it, we can overcome it or at least minimize it." The researchers acknowledge the study has a few limitations, including the use of binary gender designations rather than a continuous gender spectrum.
10.1038/s41598-019-54074-5
Medicine
Neuroscientists gather new insight about the genetic risk of developing schizophrenia
Andrew E. Jaffe et al. Profiling gene expression in the human dentate gyrus granule cell layer reveals insights into schizophrenia and its genetic risk, Nature Neuroscience (2020). DOI: 10.1038/s41593-020-0604-z Developmental and genetic regulation of the human cortex transcriptome illuminate schizophrenia pathogenesis, Nature Neuroscience (2018). DOI: 10.1038/s41593-018-0197-y Leonardo Collado-Torres et al. Regional Heterogeneity in Gene Expression, Regulation, and Coherence in the Frontal Cortex and Hippocampus across Development and Schizophrenia, Neuron (2019). DOI: 10.1016/j.neuron.2019.05.013 Kristen R. Maynard et al. Transcriptome-scale spatial gene expression in the human dorsolateral prefrontal cortex, (2020). DOI: 10.1101/2020.02.28.969931 Journal information: Nature Neuroscience , Neuron
http://dx.doi.org/10.1038/s41593-020-0604-z
https://medicalxpress.com/news/2020-03-neuroscientists-insight-genetic-schizophrenia.html
Abstract Specific cell populations may have unique contributions to schizophrenia but may be missed in studies of homogenate tissue. Here laser capture microdissection followed by RNA sequencing (LCM-seq) was used to transcriptomically profile the granule cell layer of the dentate gyrus (DG-GCL) in human hippocampus and contrast these data to those obtained from bulk hippocampal homogenate. We identified widespread cell-type-enriched aging and genetic effects in the DG-GCL that were either absent or directionally discordant in bulk hippocampus data. Of the ~9 million expression quantitative trait loci identified in the DG-GCL, 15% were not detected in bulk hippocampus, including 15 schizophrenia risk variants. We created transcriptome-wide association study genetic weights from the DG-GCL, which identified many schizophrenia-associated genetic signals not found in transcriptome-wide association studies from bulk hippocampus, including GRM3 and CACNA1C . These results highlight the improved biological resolution provided by targeted sampling strategies like LCM and complement homogenate and single-nucleus approaches in human brain. Main Extensive effort has been spent over the past 10 years to more fully characterize the human brain transcriptome within and across cell types and to better understand changes in RNA expression associated with brain development and aging, developmental or psychiatric brain disorders, and local genetic variation. Large consortia have primarily focused on molecular profiling of RNA extracted from homogenate or bulk tissue from different brain regions across tens or hundreds of individuals 1 , 2 , 3 , 4 , although single-cell expression approaches are increasingly available. We have previously identified extensive gene expression associations in human brain with schizophrenia and its genetic risk 5 , development and aging, and local genetic variation in the dorsolateral prefrontal cortex (DLPFC) 6 and, more recently, the hippocampal formation 7 . While expression quantitative trait loci (eQTLs) in these two brain regions were highly overlapping, in line with previous work across many tissues in the body 4 , there were distinct, region-specific expression profiles associated with brain development that were subsequently dysregulated in schizophrenia. We further identified stronger effects of schizophrenia diagnosis in the DLPFC than in the hippocampal formation, with an order of magnitude more genes differentially expressed. While these differences in signatures across brain regions are likely related to the unique cell types underlying each region, particularly for changes across development 7 , the specific cell types in which these signals act within and across brain regions are largely unknown. To better understand expression within and across individual cell types, there has been a dramatic shift to RNA-seq approaches that profile tens or hundreds of thousands of cells or nuclei from a few individuals. While these single-cell (scRNA-seq) or single-nucleus (snRNA-seq) approaches have cataloged dozens of transcriptionally distinct cell classes in the human brain 8 , 9 , 10 , 11 , 12 , the limited number of individuals and the high cost have largely prevented use of these approaches for association with genetic variation and human traits. Furthermore, rarer cell populations have lower probabilities of ascertainment and subsequent characterization in these analyses. 
An alternative strategy for cell type enrichment expression involves isolating specific cell populations with nuclear-specific antibodies followed by flow cytometry or LCM of cell bodies for cells clearly defined by morphology. While previous research has used LCM for expression profiling of precise anatomical regions in the human and primate brains 13 , 14 or layer-specific analysis of human cortex using microarray technologies 15 , there have been few efforts to transcriptionally profile individual cell populations with this technique. We therefore sought to evaluate LCM followed by RNA-seq (LCM-seq) as a tool for cell-type-enriched expression analysis in human brain tissue. Here we profiled the granule cell layer of the dentate gyrus subfield (DG-GCL) in the hippocampal formation, which has a critical role in neurogenesis 16 and a neuromodulatory role controlling information flow from the entorhinal cortex to CA3, and downstream targets including CA1 and the prefrontal cortex. This layer primarily contains the cell bodies of granule neurons, the primary excitatory neuronal cell type in the dentate gyrus. Single-nucleus sequencing studies of human hippocampus estimated that these cells constitute ~5–10% of the hippocampus 17 . The dentate gyrus has a role in pattern separation and its downstream target CA3 has a role in pattern completion in both rodent and human systems 18 , 19 . Deficits in activity in these granule neurons have previously been associated with bipolar disorder and schizophrenia, but many of the previous findings linking this important cell type to these debilitating disorders have been based on animal models 20 , induced pluripotent stem cell (iPSC)-based approaches 21 or low-resolution functional imaging 22 , 23 , 24 . We additionally selected this cell population to permit direct comparisons to bulk hippocampus RNA-seq data from a largely overlapping set of individuals 7 . Results LCM-seq strongly enriches for target cell populations We performed LCM to extract the DG-GCL in postmortem human hippocampal tissue from 263 individuals, including 75 donors with schizophrenia, 66 donors with bipolar disorder, 29 donors with major depression and 93 neurotypical controls, all with genome-wide genotype data and between the ages of 16 and 84 years ( Methods , Extended Data Fig. 1 and Supplementary Table 1 ). Furthermore, 112 individuals also had bulk hippocampus RNA-seq data available (from 333 total hippocampal samples 7 ), which were obtained from the contralateral hemisphere. We first demonstrated that the LCM-seq procedure generates high-quality RNA-seq data, finding that the LCM-seq data and bulk hippocampus RNA-seq data generated with the same Illumina RiboZero Gold library types had similar mitochondrial chromosome (Extended Data Fig. 2a ) and genome (Extended Data Fig. 2b ) mapping rates, albeit with slightly lower exonic mapping rates (Extended Data Fig. 2c ) and correspondingly lower RNA integrity numbers (RINs) (Extended Data Fig. 2d ) for the LCM-seq data. We then used the paired DG-GCL and hippocampus RNA-seq data on 112 individuals to confirm that the LCM-seq sample was enriched for neuronal cells as compared to bulk tissue (Extended Data Fig. 3a and Methods ). We identified 1,899 genes differentially expressed in DG-GCL as compared to bulk tissue at a conservative Bonferroni-adjusted P < 1% ( P < 4.65 × 10 –7 ; Supplementary Table 2 ). 
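As a rough sketch, the paired tissue-versus-cell-layer contrast behind these 1,899 genes can be set up with the limma voom framework described in the Methods; all object names (counts, pheno) are illustrative placeholders rather than the authors' code:

```r
library(limma)
library(edgeR)

# counts: gene-level counts for the 224 paired samples (112 donors x 2 tissues)
# pheno:  per-sample metadata with tissue, donor and the alignment QC covariates
design <- model.matrix(~ tissue + chrM_rate + rRNA_rate + map_rate + exon_rate,
                       data = pheno)
v <- voom(calcNormFactors(DGEList(counts = counts)), design)

# donor as a random intercept via the consensus intra-donor correlation
corfit <- duplicateCorrelation(v, design, block = pheno$donor)
fit <- lmFit(v, design, block = pheno$donor,
             correlation = corfit$consensus.correlation)
fit <- eBayes(fit)

# coef = 2 is the tissue term; the last line reproduces the conservative
# Bonferroni-adjusted P < 1% cutoff quoted above (0.01 / 21,460 ~ 4.7e-7)
top <- topTable(fit, coef = 2, number = Inf)
sig <- top[top$P.Value < 0.01 / nrow(top), ]
```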
As expected, the top enriched genes (more highly expressed in DG-GCL) were KCNK1 (3.9-fold up), CAMK1 (4.5-fold up) and GABRD (6.3-fold up), whose expression is relatively specific to neurons, and the top depleted genes (more highly expressed in bulk tissue) were MOBP (53.7-fold down) and MBP (11.4-fold down), whose expression is highly enriched in non-neuronal cells (Fig. 1 ). Other significant and expected differentially expressed genes included the dentate gyrus-associated gene PROX1 , whose expression was enriched (5.0-fold up), and the astrocyte gene SOX9 , whose expression was relatively depleted (2.8-fold down). As a set, the genes most enriched for preferential DG-GCL expression were related to neuronal processes and localization (Supplementary Table 3 ), demonstrating that the LCM-seq data are highly enriched for neuronal cells and represent high-quality data when compared to RNA-seq data derived from homogenate brain tissue. Fig. 1: LCM-seq confirms expected strong cell type enrichment in DG-GCL. a , Micrographs of 30-μm frozen hippocampal sections taken from the original block at the midbody. ML, molecular layer; PL, polymorphic layer. b , Magnified view of the boxed region in a showing cytoarchitectural distinction between dentate gyrus laminae with phase contrast. c – e , DG-GCL-enriched PROX1 ( c ), astrocyte-specific SOX9 ( d ) and oligodendrocyte-specific MBP ( e ) normalized expression (log 2 scale). f , Volcano plot of differential expression between DG-GCL and bulk hippocampus; the genes with significantly different expression from c – e are indicated. FC, fold change. g , Gene set enrichment analysis demonstrating significant association of neuronal functions with the DG-GCL and non-neuronal functions with the homogenate hippocampus (gene ratio: fraction of gene set that was differentially expressed). h – k , RNA deconvolution showing enrichment of excitatory neurons ( h ) and depletion of inhibitory neurons ( i ), oligodendrocytes ( j ) and astrocytes ( k ) following the LCM procedure. Gray lines indicate RNA-seq samples from the same donor. All boxplots show median and interquartile range (IQR), with whiskers representing 1.5× IQR and all data points overlaid. Full size image We lastly confirmed the predicted neuronal cellular enrichments of the DG-GCL by using RNA deconvolution from snRNA-seq data generated from human postmortem hippocampal tissue across eight data-defined cell types ( Methods ). This algorithm showed that the DG-GCL samples were highly enriched for excitatory neurons (approximately threefold, P = 1.6 × 10 –78 ; Fig. 1h ) and depleted for inhibitory neurons ( P = 3.2 × 10 –28 ; Fig. 1i ), oligodendrocytes ( P = 5.3 × 10 –50 ; Fig. 1j ) and astrocytes ( P = 3.9 × 10 –47 ; Fig. 1k ). These data further highlight the strong enrichment for granule cell neurons when using LCM. DG-GCL-specific signatures of aging in the hippocampus Because the hippocampal formation undergoes considerable cellular alterations with advancing age, we sought to identify genes with unique patterns of expression associated with postnatal aging, ranging from 16–84 years, in the DG-GCL against a background of aging associations in the bulk hippocampus. Here we expanded our comparisons to the full cohort of DG-GCL samples ( n = 263 individuals) combined with age- and library-matched bulk hippocampus samples 7 ( n = 333 individuals; Methods ) on 21,460 expressed genes (Extended Data Fig. 3b–d ). 
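Before turning to aging, the RNA deconvolution used above can be illustrated generically as a constrained regression of each sample's expression profile onto the snRNA-seq signature matrix; the sketch below substitutes non-negative least squares for the published algorithm cited in the Methods, and all object names are hypothetical:

```r
library(nnls)

# sig:  genes x cell-types signature matrix (cluster-driving snRNA-seq genes)
# expr: the same genes x samples expression matrix (DG-GCL and hippocampus)
fracs <- apply(expr, 2, function(y) {
  w <- nnls(sig, y)$x   # non-negative per-cell-type weights
  w / sum(w)            # scale RNA fractions to sum to 1, as in the text
})
rownames(fracs) <- colnames(sig)
```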
By using linear modeling that adjusted for observed and latent confounders ( Methods ), we first identified genes associated with age within each dataset (Fig. 2a and Supplementary Table 4 ). In the DG-GCL, we identified 1,709 genes whose expression was significantly associated with age (at false discovery rate (FDR) < 5%), of which 833 genes increased in expression and 876 genes decreased in expression, with a median 3.7% change in expression per decade of life. In the bulk hippocampus, we identified 1,428 genes significantly associated with age (at FDR < 5%), of which 733 genes increased in expression and 695 genes decreased in expression, with a median 3.0% change in expression per decade of life (interquartile range (IQR), 2.1–4.4%). While the overlap between datasets was statistically significant (372 genes, odds ratio (OR) = 4.93, P < 2.2 × 10 –16 ), there were over 1,000 unique age-associated genes in each dataset (DG-GCL, 1,337 genes; bulk hippocampus, 1,056 genes). We assessed age-related directional changes in expression for the 2,765 genes significant in either the DG-GCL or hippocampus. While most genes showed directionally consistent age-related changes across the datasets ( n = 2,353; 85.1%), including 369 of the 372 genes that were significant in both, there were 412 genes that showed opposite directionality, that is, increased expression with age in the DG-GCL and decreased expression with age in the hippocampus, or vice versa. These data indicate that aging in the DG-GCL has cell-specific patterns not represented in bulk hippocampal formation tissue. Fig. 2: Cell-type-specific changes in gene expression associated with aging. a , DG-GCL versus homogenate hippocampus t statistics (age) revealing both shared (gray) and differential (red, bulk hippocampus; blue, DG-GCL) expression by age. b – e , Example genes with stable expression in hippocampus (red) but age-dependent expression in DG-GCL (blue), including USH2A ( b ), KCNQ5 ( c ), AR ( d ) and TYRO3 ( e ). f , Heat map of association with age-dependent gene set enrichment terms. Cells are colored according to –log 10 ( P value). Reg., regulation; pos., positive. Full size image There were several individual gene expression patterns among the 2,765 genes showing age-related changes in DG-GCL that were missed when sequencing bulk hippocampus. For example, USH2A showed high expression in the DG-GCL early in life, with expression decreasing across the lifespan, as compared to relatively low and stable expression in the bulk hippocampus (Fig. 2b ). Another gene, KCNQ5 , which encodes a voltage-gated potassium channel subunit, showed similar expression levels in the two datasets but only changed in expression across the lifespan in the DG-GCL (decreased expression; Fig. 2c ). The AR gene encoding androgen receptor also showed higher expression in the DG-GCL that significantly decreased across the lifespan in both sexes (Fig. 2d ). A further pattern of age-related changes in the DG-GCL was shown by TYRO3 , encoding a tyrosine protein kinase, which had similar expression levels in the two datasets but increased in expression across the lifespan only in the DG-GCL (Fig. 2e ). We next performed gene set enrichment analyses to assess whether the age-related genes in the DG-GCL and hippocampus converged on the same biological functions ( Methods ). We found more enrichment for predefined gene sets in the DG-GCL ( n = 245 sets) than in bulk hippocampus ( n = 130 sets; Fig.
2f ), with largely shared pathways for genes increasing in expression across the lifespan, but no overlapping gene sets for genes decreasing in expression across the lifespan. The most divergent gene sets were enriched for many processes related to the structure and function of neurons (Fig. 2f and Supplementary Table 5 ), including decreased expression of genes in the DG-GCL, but not bulk tissue, that were associated with ‘behavior’ (GO:0007610: 56/630 genes, DG-GCL q = 8.52 × 10 –7 , hippocampus q = 0.186), ‘anterograde trans-synaptic signaling’ (GO:0098916: 55/630 genes, DG-GCL q = 4.03 × 10 –4 , hippocampus q = 0.997), ‘G-protein-coupled receptor signaling pathway’ (GO:0007186: 53/630 genes, DG-GCL q = 4.56 × 10 –4 , hippocampus q = 0.997) and ‘regulation of neuron projection development’ (GO:0010975: 42/630 genes, DG-GCL q = 0.0042, hippocampus q = 0.26). Conversely, many of the gene sets associated with increasing expression over the lifespan, regardless of dataset (and cell specificity), were related to immune cells and their processes, at least as represented in peripheral inflammatory cells (that is, neutrophils and leukocytes), with some specific classes of cell types showing preferential enrichment in DG-GCL (cytokines, myeloid cells and erythrocytes) and bulk hippocampus (T cells). Lastly, we analyzed the effects of age within individuals across brain regions by using linear mixed-effects modeling with an interaction between age and region/dataset. We found 406 genes with significantly different expression trajectories with aging when contrasting the DG-GCL with hippocampus. The majority of these genes ( n = 247; 60.8%) were differentially expressed by age in either the DG-GCL or hippocampus (as in Fig. 2b–e and Supplementary Table 4 ). The remaining significant genes in this interaction model were genes that did not show significant changes with aging in either dataset but had significantly different age associations when contrasting the DG-GCL with hippocampus in a single statistical model. These associations did not seem to be driven by changes in cellular composition across the lifespan, as our estimated RNA fractions were relatively stable across the lifespan in both the DG-GCL and bulk hippocampus (Supplementary Table 6 ). These results highlight the value of generating cell-type-enriched expression data to better characterize aging-related expression phenotypes that may be masked in homogenate tissue. Cell-type-enriched eQTLs We next assessed the potential for cell-type-enriched genetic regulation of expression by using eQTL analysis. We first calculated local ( cis )-eQTLs in the DG-GCL dataset ( n = 263) at different expression features: genes, exons, junctions and transcripts ( Methods ). We identified widespread cell-type-enriched genetic regulation of transcription. There were 8,988,986 significant single-nucleotide polymorphism (SNP)–feature pairs in the DG-GCL, which were driven by exon-level signals (4,734,526 pairs; Supplementary Data 1, available at ). The majority of expressed genes showed association with a neighboring SNP in at least one feature type at a genome-wide significance level of FDR < 1% ( n = 17,683; Fig. 3a ). The exon, junction and transcript eQTL features were annotated back to 71,346 specific GENCODE transcripts, and half had support from at least two feature types (Extended Data Fig. 4 ). 
Lastly, we identified 5,853 exon–exon splice junctions associated with nearby genetic variation that were only partially annotated (1,622 exon-skipping and 4,231 alternative exonic boundary events) to 3,086 unique genes. Of these, 162 showed association only with unannotated sequence and not with any other annotated feature in the gene, potentially suggesting new DG-GCL-specific transcripts for a small number of genes. Fig. 3: Extensive DG-GCL-enriched eQTLs. a , Venn diagram showing cis -eQTL feature overlap in DG-GCL. b , c , Heritability estimates from DG-GCL versus hippocampus ( b ) and DLPFC versus hippocampus ( c ). Solid and dashed lines indicate 0.1 and 0.2 differences in heritability, respectively. d – g , Example SNPs with strong cis -eQTL gene signals in DG-GCL but not homogenate hippocampus, including rs9573533 ( d ), rs969784 ( e ), rs2429080 ( f ) and rs3775950 ( g ). All boxplots show median and IQR, with whiskers representing 1.5 × IQR and all data points overlaid. Full size image We then assessed the cell type specificity of these eQTLs and the extent to which they would have been missed in homogenate sequencing of hippocampus tissue. We calculated corresponding eQTL statistics for all ~9 million eQTL SNP–feature pairs in the 333 homogenate hippocampus samples, as well as the region-by-genotype interaction effects in the joint set of 596 samples. The majority of eQTLs identified in the DG-GCL were directionally consistent in bulk hippocampus (93.67%; Supplementary Table 7 ), of which 73.0% showed marginal eQTL associations (at nominal P < 0.05) and 54.5% were genome-wide significant (FDR < 0.01). Thus, while many eQTLs were shared by the two datasets, in line with eQTL sharing across different brain regions 7 and tissue types 4 , 3,420,833 SNP–feature pairs significant in DG-GCL (38.5%) showed different eQTL effects in the bulk hippocampus when using interaction modeling (at FDR < 5%), including 1,501,540 pairs with hippocampus eQTL P > 0.05. These effects were largely consistent across feature types (Extended Data Fig. 5a ), and higher replication was associated with more expression-related SNPs (eSNPs) (Extended Data Fig. 5b ) and higher statistical significance (Extended Data Fig. 5c ). We looked more broadly at the cis heritability of 24,283 expressed genes in the DG-GCL as compared to bulk hippocampus and DLPFC 7 . We identified 6,055 genes with significant cis heritability in DG-GCL (at FDR < 0.05), of which 1,845 genes (32.2%) were not heritable in the hippocampus (at P > 0.05). Among these expressed genes, we further found three times more genes with differences in cis heritability when comparing cell populations within a region than when comparing two different brain regions. There were 9,236 genes with heritability differences >10% and 6,714 genes with heritability differences >20% when comparing DG-GCL to hippocampus (Fig. 3b ) versus 3,915 and 1,625 genes when comparing DLPFC to hippocampus, respectively (Fig. 3c and Extended Data Fig. 5d ). There were additionally 13,732 SNP–feature eQTL pairs that were genome-wide significant in both DG-GCL and hippocampus (each at FDR < 1%) in 159 genes with opposite eQTL directionality (Fig. 3d–g ), which perhaps suggests different mechanisms of genetic regulation across different cell types for a small subset of expressed features.
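The region-by-genotype interaction underlying these comparisons can be written as a toy single-feature model; the actual scan used MatrixEQTL across all ~9 million pairs (see Methods), and every variable name below is illustrative:

```r
# expr:    normalized expression of one feature across the joint 596 samples
# dosage:  imputed allele dosage for one SNP
# dataset: factor with levels "DG-GCL" and "HIPPO"; covariates abridged
df  <- data.frame(expr, dosage, dataset, sex, mds1, pc1)
fit <- lm(expr ~ dosage * dataset + sex + mds1 + pc1, data = df)

# the dosage:dataset coefficient tests whether the eQTL effect differs
# between the cell layer and bulk tissue (term name depends on level coding)
coef(summary(fit))["dosage:datasetHIPPO", ]
```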
Log-transformed fold changes per allele copy (that is, effect sizes) were also larger on average in DG-GCL than in hippocampus (85.6% of pairs) when constraining to genome-wide eQTL significance in both datasets (4,781,015 pairs at FDR < 1%), even among the subset of eQTLs that were more statistically significant in hippocampus (70.7% of pairs). Together, these results underscore that genetic regulation of gene expression can differ remarkably across specific cell populations and that bulk tissue eQTL analyses do not capture the complexity of genetic regulation in the brain 4 . Cell-type-enriched eQTLs for schizophrenia Hippocampus is one of the brain regions prominently implicated in the pathogenesis of schizophrenia 25 , 26 ; we therefore performed focused eQTL analyses around schizophrenia risk variants from the latest genome-wide association study (GWAS) 27 . We profiled 6,277 proxy SNPs (linkage disequilibrium (LD) R 2 > 0.8) from 156 loci with lead ‘index’ SNPs present and common in our sample (from 179 loci/index SNPs identified in the GWAS; 87.2%; Methods ) and performed cis -eQTL analyses within the DG-GCL ( n = 263), hippocampus ( n = 333) and joint interaction ( n = 596) datasets. We identified 100 of 156 loci (64.1%) with significant eQTLs in either the DG-GCL or hippocampus dataset (FDR < 1%), with 60 loci having significant eQTLs in both datasets (Supplementary Table 8 ). Most eQTLs in the DG-GCL dataset were supported by multiple feature summarizations, with most loci identified at the exon level (Supplementary Table 7 and Extended Data Fig. 6a ). Combining eQTL data from the DLPFC 7 identified an additional 11 loci with significant (FDR < 1%) signal ( n = 111/156), and reducing the stringency of the significance threshold (FDR < 5%, controlling only within these GWAS-centric analyses) found eQTL evidence for 136 of 156 (87.2%) of the tested loci across all three datasets. There was high overlap of significant loci across the three brain regions, with the most unique signal in DLPFC (Extended Data Fig. 6b ). Combining data across multiple brain regions and cell types, even in a relatively limited number of brain specimens, has therefore identified expression of a transcript feature associated with almost every current schizophrenia risk locus. Notably, there were many loci with significant eQTLs only in DG-GCL without corresponding marginal significance in bulk hippocampus, including seven loci where the index SNPs themselves were associated with the expression levels of gene features: PSD3 (Fig. 4a ; DG-GCL P = 1.51 × 10 –6 ), MARS (Fig. 4b ; DG-GCL P = 1.63 × 10 –6 ), NLGN4X (Fig. 4c ; DG-GCL P = 1.79 × 10 –5 ), GRM3 (Fig. 4d ; DG-GCL P = 2.65 × 10 –5 ), SEMA6D (Fig. 4e ; DG-GCL P = 5.85 × 10 –5 ), MMP16 (Fig. 4f ; DG-GCL P = 1.47 × 10 –4 ) and THEMIS (Fig. 4g ; DG-GCL P = 2.75 × 10 –4 ). Integrating nearby proxies further identified association with SATB2 (Fig. 4h ; DG-GCL P = 2.81 × 10 –6 ), CACNA1C (Fig. 4i ; DG-GCL P = 4.27 × 10 –6 ), KCTD18 (Fig. 4j ; DG-GCL P = 6.34 × 10 –6 ), PRKD1 (Fig. 4k ; DG-GCL P = 8.07 × 10 –5 ), HDAC2-AS2 (Fig. 4l ; DG-GCL P = 1.16 × 10 –4 ), IGSF9B (Fig. 4m ; DG-GCL P = 1.18 × 10 –4 ), TMPRSS5 (Fig. 4n ; DG-GCL P = 1.38 × 10 –4 ) and TNKS (Fig. 4o ; DG-GCL P = 1.53 × 10 –4 ).
Two eQTL associations of particular interest involved genetic regulation of GRM3 and CACNA1C , as ion channels (for example, CACNA1C ) and G-protein-coupled receptors (for example, GRM3 ) are classic therapeutic drug targets and eQTLs for these genes in bulk tissue have been elusive. We explored the tissue specificity of these eQTLs by using data from the BrainSeq DLPFC 6 , 7 , CommonMind Consortium DLPFC 2 and Genotype–Tissue Expression (GTEx) 4 bulk tissue projects. The GWAS index SNP that associated with GRM3 expression (rs12704290) in DG-GCL was not significantly associated with any expression features in the BrainSeq or CommonMind Consortium DLPFC datasets or in the GTEx dataset across any brain region or body tissue. The top proxy SNP that associated with CACNA1C expression levels (rs7297582) did show association in GTEx, exclusively in the cerebellum ( P = 6.6 × 10 –7 ), with no significant association in any other tissue or in the BrainSeq dataset. Fig. 4: Schizophrenia risk eQTLs reveal feature associations in DG-GCL not seen in homogenate hippocampus. a – g , Example eQTLs with PGC2 index SNPs ( R 2 = 1). h – o , Example eQTLs with less stringent LD ( R 2 > 0.7). The x axis for each plot shows schizophrenia risk SNP genotype, and the y axis shows cleaned and residualized log 2 -transformed expression levels from DG-GCL or homogenate hippocampus. All boxplots show median and IQR, with whiskers representing 1.5 × IQR and all data points overlaid. Full size image Cell-type-enriched TWAS In light of the unique DG-GCL eQTLs, we performed a transcriptome-wide association study (TWAS) 28 after constructing SNP weights across the four feature summarizations (gene, exon, junction and transcript) using the DG-GCL dataset, and then applied these weights to the full set of schizophrenia GWAS summary statistics from above ( Methods ). We identified 3,092 features (231 genes, 1,666 exons, 734 junctions and 461 transcripts; Supplementary Table 9 ) significantly associated with schizophrenia risk at TWAS FDR < 5% that were annotated to 1,069 unique genes, of which 471 features (53 genes, 212 exons, 125 junctions and 81 transcripts) in 170 genes were significant following more conservative Bonferroni adjustment (at <5%; Table 1 ). As the TWAS approach combines GWAS and eQTL information, it is possible for some GWAS signals that do not reach GWAS genome-wide significance ( P < 5 × 10 –8 ) to still achieve TWAS transcriptome-wide significance. We therefore annotated the strongest GWAS variant for each significant TWAS feature back to the clumped GWAS risk loci and found that 77.7% of the TWAS Bonferroni-significant features ( n = 366) mapped to the published GWAS risk loci. We confirmed that these TWAS-significant features included GRM3 (TWAS gene P = 7.28 × 10 –6 ) and CACNA1C (TWAS gene P = 1.65 × 10 –10 ), which were both associated with decreased expression with increased schizophrenia genetic risk (in line with the eQTL directionality above). While a small fraction of Bonferroni-significant TWAS features were outside GWAS risk loci, a much larger fraction of FDR-significant TWAS features identified potentially new genes within corresponding GWAS loci implicated in schizophrenia ( n = 2,130 features in 799 genes; 68.9%). Table 1 TWAS analysis in the DG-GCL integrated with the PGC2 + CLOZUK schizophrenia GWAS Full size table We compared these TWAS weights and corresponding schizophrenia associations to those from the bulk hippocampus 7 .
Only half of the genes, and one-third of the features overall, in the DG-GCL dataset had heritable expression in the hippocampus and were considered for integration with GWAS data (Table 1 ). Among the TWAS-significant features in the DG-GCL, fewer than half showed even marginally significant association in the hippocampus, largely in line with the eQTL analysis above focused on schizophrenia risk variants. We performed secondary TWAS analyses on the gene level after removing the heritability filtering step of the procedure to generate directly comparable TWAS statistics across DG-GCL, hippocampus and DLPFC ( n = 20,130 unique genes). While the overall test statistics were correlated between the three datasets (Extended Data Fig. 7a ), we still found relatively unique signatures of association, particularly for FDR-associated TWAS genes (FDR < 5%; Extended Data Fig. 7b ), but also for Bonferroni-associated genes (Bonferroni P < 10%; Extended Data Fig. 7c ). Taking these findings together, profiling of the DG-GCL transcriptome identified unique schizophrenia-associated signal that was missed in homogenate tissue, and these genetic associations with expression highlight that the potential pathogenic role of these genes in schizophrenia is mediated in a cell-type-enriched context. Unique signatures of illness in the DG-GCL We lastly explored unique illness-state-associated gene expression differences in the DG-GCL as compared to the bulk hippocampus. Our previous work suggested that the hippocampus had fewer genes differentially expressed between samples from patients with schizophrenia and controls than were differentially expressed between patients and controls in the DLPFC (both identified using ribosomal-depletion sequencing methods) 7 . In the DG-GCL, we similarly found relatively small numbers of genes associated with diagnosis, particularly in comparison to aging and eQTL effects. We identified 26 genes in schizophrenia, 20 genes in bipolar disorder and 7 genes in major depression (all at FDR < 10%; Supplementary Table 10 ). While only a single gene, RPL12P20 , was significant in more than one disorder (schizophrenia and bipolar disorder), we found a significant correlation between these two disorders across the entire transcriptome (Pearson correlation ( ρ ) = 0.53; Extended Data Fig. 8a ), with less correlation between schizophrenia and major depression ( ρ = 0.39) and between bipolar disorder and major depression ( ρ = 0.35), as previously reported 29 . We compared the effects of schizophrenia in the DG-GCL to those in bulk hippocampus, which was the only diagnostic group shared by the two datasets. There were only two significant genes shared by these datasets: GMIP and ZNF766 (at FDR < 10%), which showed larger log-transformed fold changes in the DG-GCL (−0.45 versus −0.17 and 0.14 versus 0.08, respectively). There was also less global correlation for schizophrenia effects between datasets ( ρ = 0.197) than for shared diagnosis effects within the DG-GCL (Extended Data Fig. 8b,c ). These results highlight the cell-type enrichment of schizophrenia-associated differential expression signals. Antipsychotic and antidepressant drugs have been associated with changes in dentate gyrus gene expression and levels of hippocampal neurogenesis 30 . We therefore compared the effects of two different treatments on differential expression specificity in the DG-GCL.
First, we tested for differences in expression between patients with major depression treated with ( n = 18) and not treated with ( n = 8) selective serotonin reuptake inhibitors (SSRIs) as compared to unaffected controls negative for SSRIs ( n = 93, with 63 explicitly testing negative for SSRIs) and found 31 genes significantly different between patients with major depression on SSRIs and unaffected controls. These genes were largely non-overlapping with the above seven genes associated with major depression overall, when ignoring SSRI status (only two genes, RAD18 and DCAF16 , were shared). None of these genes have been associated with adult neurogenesis, an effect of SSRIs that has been prominently hypothesized 31 . We performed an analogous analysis within patients with schizophrenia stratifying by antipsychotics status at the time of death (49 with antipsychotics and 25 without) in comparison to 94 controls (of whom 55 explicitly tested negative for antipsychotics). We found 110 genes differentially expressed between patients on antipsychotics and unaffected controls (at FDR < 10%), as compared to 0 genes different between patients not on antipsychotics and unaffected controls. Here there was more overlap with the 27 genes associated with overall effects identified above (23/27), and only 2 genes ( SRR and GRN ) were previously associated with adult neurogenesis. We further integrated TWAS statistics in the DG-GCL dataset at each gene and found no association between differential expression for schizophrenia diagnosis and association with genetic risk for schizophrenia ( ρ = 0.002; Extended Data Fig. 8d ). Only a single gene ( GMIP ) was FDR significant in both TWAS and differential expression analysis, with predicted decreased expression in schizophrenia. Both analyses are in line with previous observations that gene expression differences between patients and controls likely largely reflect treatment effects and other consequences of illness 6 . Discussion We performed LCM followed by RNA-seq to generate the transcriptional landscape of the granule cell layer of the dentate gyrus in human hippocampus. This approach identified widespread cell-type-enriched aging and genetic effects in the DG-GCL that were either missing or directionally discordant in corresponding bulk hippocampus RNA-seq data from largely the same individuals. We identified 1,337 genes with expression that only associated with age across the lifespan in the DG-GCL, and these genes were enriched for diverse neuronal processes. We further identified ~9 million SNP–feature eQTL pairs in the DG-GCL, of which 15% were not even marginally significant ( P > 0.05) in bulk hippocampus. By using these eQTL maps, we identified new schizophrenia-associated genes and their features that were completely missed in bulk brain tissue, including associations with expression of GRM3 and CACNA1C . We lastly found a small number of genes differentially expressed in the DG-GCL in patients with schizophrenia, bipolar disorder or major depression as compared to neurotypical individuals that were largely missed in bulk tissue. These results together highlight the importance, and biological resolution, of exploring cell-type-enriched gene expression levels by using targeted sampling strategies like LCM. 
Cellular neuroscience has rapidly evolved via the development of innovative tools that enable the characterization of single-cell transcriptional profiles after dissociation or cell sorting of fresh tissue 32 , 33 , 34 or single nuclei from frozen tissue 8 , 9 , 17 , 34 , 35 . However, while these approaches have the power to characterize the cell type composition within a given area of the brain, there are typically few individuals profiled and few genes characterized per cell, making disease- or trait-related associations within individual cell types difficult. Moreover, most of these approaches involve library preparations that capture only 3′ transcript fragments and have relatively low sensitivity. The other extreme in large-scale neurogenomics has involved profiling of homogenate tissue, mixing and potentially diluting cell-type-enriched associations across individuals 1 , 2 , 36 . Although some disease-relevant targets can be readily assigned to a single cell class from homogenate tissue (for example, C4 and AS3MT ) 37 , 38 , the cellular specificity of most other transcripts requires further interrogation 2 , 38 , 39 . Our data—deeply profiling gene expression from a relatively homogeneous cell population—illustrate a balance between these two extremes of bulk RNA-seq and snRNA/scRNA-seq that permits cell-type-enriched inference of age, genotype and neuropsychiatric illness status. As proof of principle, we used the granule cell layer of the well-characterized hippocampal dentate gyrus as a practical intermediate between homogenate tissue and sorted individual nuclei. The anatomically distinct granule neuron layer enables direct comparison between deep sequencing of homogenate tissue and a highly enriched single cell type, which we validated with snRNA-seq data. By using the unique tissue resource of contralateral cerebral hemispheres, we directly compared transcript diversity between homogenate and laser-captured granule neuron cell bodies. Our most striking finding involved the extensive cell-type-enriched genetic regulation of expression, with highly significant eQTLs in the DG-GCL without any corresponding signal in homogenate hippocampal tissue. While larger consortia like GTEx 4 , psychENCODE 40 and our BrainSeq 6 , 7 have saturated the landscape of homogenate tissue eQTLs in human brain, here we demonstrate that much of the genetic regulation within individual cell populations is masked. Only one other study, to the best of our knowledge, has highly enriched for a selective cell population from human brain tissue to identify cell-type-enriched eQTLs, namely in dopamine neurons from the substantia nigra in 84 individuals, and only a small number of eQTLs (3,461 SNPs to 151 expressed sequences) were reported 41 . This previous report furthermore did not assess the cellular or regional specificity of these associations, and many of the reported SNPs show association in both the DG-GCL and bulk hippocampus. For example, the top disease-related SNP (rs17649553) in that report showed strong association with 11 nearby genes in our DG-GCL data (including 8 genes at P < 1 × 10 –20 ), of which 9 were also significant by FDR in bulk hippocampus (including 5 genes at P < 1 × 10 –20 ). Other studies have profiled human brain tissue with LCM 15 , 42 , 43 , 44 , but have all focused on comparisons of illness state, which showed the least amount of signal in the DG-GCL in comparison to age and genotype. 
In addition to the extensive genome-wide cell-type-enriched eQTL associations, we found many new schizophrenia-associated eQTLs that were not identified in homogenate hippocampus or DLPFC. Two of the most long-standing schizophrenia risk genes, GRM3 (ref. 45 ) and CACNA1C 46 , finally show molecular evidence of risk association in this DG-GCL dataset, rather than merely harboring variants proximal to these genes. GRM3 and CACNA1C have been especially alluring as schizophrenia gene targets given the druggability of the encoded G-protein-coupled receptor and ion channel, respectively. These cell-type-specific associations are heuristic for therapeutic discovery, as cell and animal models targeting GRM3 and CACNA1C might focus on dentate gyrus granule cell systems. In this regard, it should be noted that the risk-associated alleles for both of these genes were associated with relatively reduced expression. Previous associations in the cerebellum in GTEx could likely be related to the presence of granule cell neurons in this brain region 47 , but integrative TWAS analyses of this brain region did not identify association with schizophrenia genetic risk 48 . Furthermore, previous work has been inconsistent in suggesting the directionality of risk association with expression of CACNA1C in neocortical samples 49 , 50 , and the widely held assumption that CACNA1C antagonism is a therapeutic translation of the risk association may have to be reexamined on the basis of the current data. Overall, we demonstrate that the LCM-based enrichment strategy detects signals unique to the granule cell layer that were completely masked in currently available homogenate tissue data, and generates TWAS evidence of new schizophrenia risk-associated loci that likewise depended on DG-GCL, rather than bulk hippocampus, expression data. This strategy of deeply sequencing target cell populations in postmortem human brain provides a powerful balance between unbiased single-nucleus and homogenate tissue sequencing that can identify cellular and spatial associations with common molecular and clinical traits. Methods Human postmortem brain tissue collection Postmortem human brain tissues were collected at the Clinical Brain Disorders Branch (CBDB) at the National Institute of Mental Health (NIMH) through the Northern Virginia and District of Columbia Medical Examiners’ Office according to NIH Institutional Review Board guidelines (protocol 90-M-0142) and the Lieber Institute for Brain Development (LIBD) according to a protocol approved by the Institutional Review Board of the State of Maryland Department of Health and Mental Hygiene (12–24) and the Western Institutional Review Board (20111080). Details of the donation process, specimen handling, clinical characterization, neuropathological screening and toxicological analyses have been described previously 51 , 52 . Each individual was diagnosed retrospectively by two board-certified psychiatrists, according to the criteria in the DSM-IV. Brain specimens from the CBDB were transferred from the NIMH to the LIBD under a material transfer agreement.
Briefly, all individuals met DSM-IV criteria for a lifetime axis I diagnosis of schizophrenia or schizoaffective disorder ( n = 75), bipolar disorder ( n = 66) or major depression ( n = 29), and neurotypical control individuals ( n = 93) were defined as individuals with no history of significant psychological problems or psychological care, psychiatric admissions or drug detoxification and with no known history of psychiatric symptoms or substance abuse, as determined by both telephone screening and medical examiner documentation, as well as negative toxicology results. Additional selection criteria included high-integrity RNA in each sample from previous studies of other brain areas, age matching with control samples and a broad age range. Because of the relatively large sample set, we further attempted to use sex and ancestry diversity as inclusion rather than exclusion criteria. A majority of the cases were of European ancestry ( n = 169). A total of 263 hippocampal samples (Supplementary Table 1 ) were used for the LCM described below. No statistical methods were used to predetermine sample sizes, but our sample sizes are similar to those reported in previous publications 6 , 7 ; investigators were not blinded to group allocation because the study was observational. Brain tissue processing After removal from the calvarium, brains were wrapped in plastic and cooled on wet ice. A detailed macroscopic inspection was performed of the brain, meninges, attached blood vessels and, when possible, the pituitary and pineal glands. Brains were then hemisected, cut into 1.5-cm coronal slabs, flash frozen in a prechilled dry ice–isopentane slurry bath (−40 °C) and stored at −80 °C. The time from when the tissue was stored at −80 °C to when the RNA was extracted was considered the freezer time (mean ± s.d., 43.8 ± 2.8 months). A block of the lateral superior cerebellar cortex hemisphere was cut transversely to the folia. pH was measured by inserting a probe into the right parietal neocortex and again into the right cerebellar hemisphere. For the purpose of this study, the slab containing the hippocampus at the level of the midbody was identified by visual inspection. An approximately 2 × 2 × 1 cm³ block was then taken from the medial temporal lobe, encompassing the hippocampal formation, entorhinal cortex and adjacent white matter. This block was kept frozen at all times and was then mounted for sectioning for LCM. Dentate gyrus laser capture microdissection The DG-GCL was isolated from neighboring polymorphic and molecular layers by using LCM (Extended Data Fig. 1 ). The midbody of the dentate gyrus was exposed by gross block dissection followed by 30-micron cryosection onto 20 glass slides coated with precharged PEN membrane (Zeiss Microscopy). To enhance signal in the densely packed granule cell layer, sections were briefly stained with the nucleic acid-intercalating agent Acridine Orange (Molecular Probes, A3568) for 1 min in ethanol before LCM. Emitted green light from excitation with blue light was used to distinguish the granule cell layer from the adjacent polymorphic layer and subgranule zone. The limits of the granule cell layer were defined manually for each section and entered into PALM Robo software for LCM using laser pressure capture (Zeiss). LCM caps (MMI) were stored on dry ice immediately after collecting fragments until lysis and RNA extraction by RNeasy Micro kit (Qiagen). Over 100 ng of total RNA was isolated from the pooled fragments for each donor.
This relatively high output quantity from LCM enables more accurate steady-state mRNA quantification and avoids the amplification bias and computational confounders associated with RNA preamplification 53 . The quantity and integrity of the RNA were determined by NanoDrop and BioAnalyzer (Agilent). RNA-seq data generation and processing RNA was converted into RNA-seq libraries with the Illumina RiboZero Gold library preparation kit and sequenced on an Illumina HiSeq 2000 sequencer. Samples were balanced across diagnoses within each processing batch. Raw sequencing reads were quality checked with FastQC 54 , and leading bases were trimmed from the reads with Trimmomatic 55 , as appropriate. Quality-checked reads were mapped to the hg38/GRCh38 human reference genome with splice-aware aligner HISAT2 version 2.0.4 (ref. 56 ), with an average overall alignment rate of 92.5% (s.d. = 3.8%). Feature-level quantification based on GENCODE release 25 (GRCh38.p7) annotation was run on aligned reads by using featureCounts (subread version 1.5.0-p3) 57 with a mean of 27.2% (s.d. = 4.0%) of mapped reads assigned to genes. Exon–exon junction counts were extracted from the BAM files with regtools 58 v0.1.0 and the bed_to_juncs program from TopHat2 (ref. 59 ) to retain the number of supporting reads (in addition to returning the coordinates of the spliced sequence, rather than the maximum fragment range) as described in ref. 6 . Annotated transcripts were quantified with Salmon version 0.7.2 (ref. 60 ). For an additional quality-control check of sample labeling, variant calling on 740 common missense single-nucleotide variants was performed on each sample with bcftools v1.2 and verified against the genotype data described below. We generated strand-specific base-pair-coverage BigWig files for each sample with bam2wig.py v2.6.4 from RSeQC 61 and wigToBigWig v4 from UCSC tools 62 to calculate quality surrogate variables (qSVs) for hippocampus-susceptible degradation regions 7 . We retained 21,460 expressed genes with reads per kilobase of transcript per million mapped reads (RPKM) > 0.5 when using the number of reads assigned to genes as the denominator (not the number mapped to the genome). For secondary analyses, we retained 358,280 expressed exons with RPKM > 0.5 (using assigned genes as the denominator), 241,957 exon–exon splice junctions with reads per 10 million spliced (RP10M) > 0.5 that were not completely unannotated, and 95,027 annotated transcripts with transcripts per million (TPM) > 0.5. Integration with hippocampus RNA-seq data We further integrated existing RNA-seq data from the bulk hippocampus on the same 21,460 genes, which have been described previously 7 , on 333 samples from individuals over the age of 17 years (200 control individuals and 133 individuals with schizophrenia) that were sequenced with the RiboZero HMR kit, resulting in a joint dataset of 596 samples across 484 unique donors. The bulk hippocampus dissections here consisted of the entire hippocampal formation, including all of the CA subfields, being dissected from the medial temporal lobe under visual guidance with a handheld dental drill. The anterior half of the hippocampal formation was included in this dissection, beginning just posterior to the amygdala. Sensitivity analyses related to library preparation kits were performed with an additional 17 hippocampus RNA-seq samples (total of 129) that were sequenced with the same RiboZero HMR kit; these were excluded from subsequent analysis (Extended Data Fig. 2 ). 
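A minimal sketch of the expressed-feature filtering described above, assuming gene-level counts, gene lengths and per-sample assigned-read totals are in memory; applying the cutoff to the mean RPKM across samples is our assumption about how the threshold was operationalized:

```r
# RPKM using reads assigned to genes (not reads mapped to the genome)
# as the per-sample denominator, as described above
rpkm <- function(counts, length_kb, assigned) {
  sweep(counts / length_kb, 2, assigned / 1e6, "/")
}
gene_rpkm <- rpkm(gene_counts, gene_lengths / 1000, assigned_reads)

# retain expressed genes (this filter yields the 21,460 genes in the text)
keep <- rowMeans(gene_rpkm) > 0.5
```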
Principal-component analysis (PCA) demonstrated that the combined samples separated by dataset (Extended Data Fig. 3 ). DNA genotyping and imputation Cerebellar DNA was extracted and genotyped for all 484 unique donors across the 596 samples as described previously 6 , which, in brief, involved phasing and imputation to the 1000 Genomes Phase 3 reference panel with SHAPEIT2 (ref. 63 ) and IMPUTE2 (ref. 64 ). We retained 6,521,503 SNPs that were well imputed (missingness < 10%) and common (minor allele frequency (MAF) > 5%, Hardy–Weinberg equilibrium (HWE) > 1 × 10 –6 ) across the 263 DG-GCL samples for eQTL analysis. The same sets of SNPs were extracted from the 333 bulk hippocampus samples. We used independent SNPs (according to LD) to calculate the top five multidimensional scaling (MDS) components as a measure of quantitative ancestry. Cell-type-enrichment expression analysis We performed differential expression analysis for the 112 donors with both hippocampal and DG-GCL samples ( n = 224) by using linear mixed-effects modeling with the limma voom approach 65 . We adjusted for the mitochondrial chromosome mapping rate, the rRNA assignment rate, the overall genome mapping rate and the exonic assignment rate to account for differences in library preparation and other technical factors, and we used subject as a random intercept in the modeling with the duplicateCorrelation() function. snRNA-seq data generation and RNA deconvolution model We performed snRNA-seq on hippocampal tissue from one donor by using 10x Genomics Single-Cell Gene Expression v3 technology. Nuclei were isolated with the ‘Frankenstein’ nuclei isolation protocol developed by Martelotto et al. for frozen tissues 9 , 66 , 67 , 68 . Briefly, ~40 mg of frozen hippocampal tissue was homogenized in chilled EZ Nuclei Lysis Buffer (MilliporeSigma) in a glass dounce with ~15 strokes per pestle. Homogenate was filtered through 70-μm strainer mesh and centrifuged at 500 g for 5 min at 4 °C in a benchtop centrifuge. Nuclei were resuspended in EZ Lysis Buffer, centrifuged again and equilibrated to nuclei wash/resuspension buffer (1% BSA and 0.2 U μl –1 RNase inhibitor in 1× PBS). Nuclei were washed and centrifuged in nuclei wash/resuspension buffer three times before labeling with DAPI (10 μg ml –1 ). Samples were then filtered through a 35-μm cell strainer and sorted on a BD FACSAria II flow cytometer (Becton Dickinson) at the Johns Hopkins University Sidney Kimmel Comprehensive Cancer Center (SKCCC) Flow Cytometry Core. Gating criteria hierarchically selected for whole, singlet nuclei (by forward/side scatter) and then for G0/G1 nuclei (by DAPI fluorescence). A null sort was additionally performed from the same preparation to ensure that the nuclei input was free of debris. Approximately 8,500 single nuclei were sorted directly into 25.1 μl of reverse-transcription reagents from the 10x Genomics Single-Cell 3′ Reagents kit (without enzyme). Libraries were prepared according to the manufacturer’s instructions (10x Genomics) and sequenced on the NextSeq (Illumina) at the Johns Hopkins University Transcriptomics and Deep Sequencing Core. We processed data with the 10x Genomics CellRanger pipeline using human reference genome GRCh38 to generate UMI/feature-barcode matrices. We used the R package Seurat 69 for feature/barcode quality control, dimensionality reduction and clustering (with the default Louvain approach, taking a computed k -nearest-neighbors graph as input).
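A minimal sketch of that Seurat workflow follows; the parameter values shown are common defaults, not necessarily the exact settings used here:

```r
library(Seurat)

# umi_matrix: the CellRanger UMI/feature-barcode matrix described above
sobj <- CreateSeuratObject(counts = umi_matrix, min.cells = 3, min.features = 200)
sobj <- NormalizeData(sobj)
sobj <- FindVariableFeatures(sobj)
sobj <- ScaleData(sobj)
sobj <- RunPCA(sobj)
sobj <- FindNeighbors(sobj, dims = 1:30)   # k-nearest-neighbors graph
sobj <- FindClusters(sobj)                 # default Louvain clustering
sobj <- RunUMAP(sobj, dims = 1:30)         # visualization only

# unbiased cluster-driving genes, as used for the deconvolution signature
markers <- FindAllMarkers(sobj, only.pos = TRUE)
```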
Clusters were assigned cell type identities by using well-established cell type marker genes 70 . We also used Seurat’s implementation of nonlinear dimensionality reduction techniques, t -SNE and UMAP, simply for visualization of the high-dimensional structure in the data, which complemented the clustering results (Extended Data Fig. 9 ). To assess cell type composition in the RNA-seq data from both our DG-GCL and hippocampal samples, we used an RNA deconvolution algorithm 71 , 72 to estimate the RNA fractions of known cell types in hippocampus. Here we used the above hippocampal snRNA-seq data and looked at the unbiased cluster-driving genes (Seurat’s FindAllMarkers() function), as opposed to the cell type marker genes used merely for annotation. We used the top 20 genes for each of 8 clusters (yielding a set of 216 unique cell-type-specific genes) to form the signature matrix of the deconvolution model and performed deconvolution for all 596 DG-GCL and hippocampus samples to assess the estimated RNA fractions from each cell type. We further scaled the RNA fractions to sum to 1 for visualization. Differential expression across age and diagnosis We next modeled age and diagnosis effects by using linear regression analysis, adjusting for exonic assignment rate, sex, mitochondrial chromosome mapping rate, five MDS ancestry components and qSVs, with different sample subsets of the combined 596 RNA-seq samples. We first calculated qSVs 73 from the 488 significant degradation-susceptible regions of the hippocampus by extracting library-size-normalized read coverage across all 596 samples and performing PCA to retain the top nine principal components as the qSVs. We fixed the qSVs for the different models below for comparability and ran four differential expression analyses with limma voom 65 :
1. DG-GCL only ( n = 263): we assessed the significance of the age main effects and diagnosis main effects separately with the topTable() function.
2. Hippocampus only ( n = 333): we assessed the significance of the age main effects and diagnosis main effects separately with the topTable() function.
3. Age interaction model ( n = 596): we further adjusted for brain region and the interaction between brain region and age and treated donor as a random intercept in linear mixed-effects modeling. We assessed the significance of the interaction term.
4. Schizophrenia interaction model ( n = 596): we further adjusted for brain region and the interaction between brain region and schizophrenia diagnosis and treated donor as a random intercept in linear mixed-effects modeling. We assessed the significance of the interaction term.
We performed two sensitivity analyses for drug effects within the DG-GCL: one for SSRIs among individuals with major depression and another for antipsychotics among individuals with schizophrenia. Here we recoded the diagnosis effect to be control < case (no treatment) < case (treatment) and extracted the main effects of case (treatment) versus control and case (no treatment) versus control, adjusting for the same model as in analysis (1) above. Controls without toxicology data were still included, as detailed chart review suggested that these individuals were neurotypical, but patients without definitive toxicology data were excluded. Gene set testing We used the clusterProfiler Bioconductor package 74 to perform gene set enrichment analyses with sets of significant genes as the input, along with the Entrez IDs corresponding to the 21,460 expressed genes as the gene universe.
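For the qSV step described above, a minimal R sketch is shown below: PCA on library-size-normalized coverage of the 488 degradation-susceptible regions, keeping the top nine components. The matrix name and the log2 transform are our assumptions, not details given in the text.

```r
# `deg_cov`: hypothetical 488 x 596 matrix of library-size-normalized coverage
# over the degradation-susceptible regions, samples in columns.
pca  <- prcomp(t(log2(deg_cov + 1)))    # samples x regions; log2 is an assumption
qsvs <- pca$x[, 1:9]                    # top nine PCs retained as the qSVs
colnames(qsvs) <- paste0("qSV", seq_len(9))
# qsvs are then appended to the covariates of the four limma voom models above
```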
This package uses the hypergeometric test to assess enrichment, and gene set results were corrected for multiple testing with the Benjamini–Hochberg procedure. We used the MANGO database 31 , which uses mouse Entrez gene IDs as identifiers, to test for enrichment of adult neurogenesis genes. We mapped these IDs to human Ensembl orthologs, resulting in matches for 257 of the 259 genes in GENCODE v25, with 172 of the 259 genes expressed in the DG-GCL and hippocampus datasets; these genes formed the gene set for enrichment analyses. Expression quantitative trait locus mapping We performed eQTL mapping with the 263 DG-GCL samples by using the MatrixEQTL package 75 with linear regression, adjusting for diagnosis, sex, the five MDS components and overall expression principal components estimated by the num.sv function in the sva package 76 (genes, 22; exons, 28; junctions, 46; transcripts, 35), using 500 kb as the window size for cis -eQTL detection. We retained eQTLs with empirical significance at FDR < 0.01 for testing in other datasets ( n = 8,988,986 SNP–feature pairs) and tested their effects in the hippocampus dataset ( n = 333) by using analogous statistical models with dataset-specific principal components (genes, 24; exons, 28; junctions, 32; transcripts, 26). We also computed the interaction between genotype and dataset by using the full set of 596 samples for all ~9 million SNP–feature pairs with the modelLINEAR_CROSS model in MatrixEQTL, again adjusting for an analogous statistical model with dataset-specific principal components (the top 25 feature-specific principal components). We merged DG-GCL-significant eQTLs across the three datasets to create a single database. We also performed more focused eQTL analyses with the latest schizophrenia risk variants 27 and their highly correlated proxies identified by rAggr 77 ( R 2 > 0.8, rather than all common SNPs), using the same statistical models as above. Corrected P values were recalculated in these analyses relative to only those SNPs and features proximal to the GWAS risk loci. Transcriptome-wide association study analysis We constructed TWAS weights following an online guide derived from the README of the published TWAS approach 28 , adapted for hg38 coordinates (rather than hg19). First, three sets of SNPs were homogenized in terms of coordinates, names and reference alleles: (1) the external LD reference set, (2) the SNPs used to calculate the TWAS weights, that is, the same set of SNPs used in the DG-GCL eQTL analyses, and (3) the GWAS summary statistic SNPs from PGC2 and the Walters Group Data Repository 27 . Feature weights were then computed for the DG-GCL for genes, exons, exon–exon junctions and transcripts by running the weight-computing TWAS-FUSION R script based on the work of Gusev et al. 28 . After the functional weight was computed for each feature, two more TWAS-FUSION scripts were run (adapted from FUSION.assoc_test.R and FUSION.post_process.R), which applied the weights to the GWAS summary statistic SNPs and calculated the functional GWAS association statistics. We reclassified variants as proxies if they had GWAS P < 5 × 10 –8 , even if they were not in strong LD ( R 2 < 0.8) with the index SNP. For comparability, weights from the hippocampus were obtained from a larger sequencing study described in Collado-Torres et al. 7 . We computed TWAS FDR- and Bonferroni-adjusted P values within each feature type separately.
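A minimal MatrixEQTL sketch of the cis-eQTL analysis described above is given below. File names, position tables and the exact covariate layout are illustrative assumptions, not the authors' actual inputs; the genotype-by-dataset interaction analysis would swap in modelLINEAR_CROSS.

```r
# Hypothetical cis-eQTL run in the spirit of the analysis above.
library(MatrixEQTL)

snps <- SlicedData$new(); snps$LoadFile("genotypes.txt")     # SNPs x samples
expr <- SlicedData$new(); expr$LoadFile("expression.txt")    # features x samples
cvrt <- SlicedData$new(); cvrt$LoadFile("covariates.txt")    # dx, sex, 5 MDS, PCs

me <- Matrix_eQTL_main(
  snps = snps, gene = expr, cvrt = cvrt,
  useModel = modelLINEAR,            # modelLINEAR_CROSS for genotype x dataset
  snpspos = snp_pos,                 # data.frame: snp, chr, pos
  genepos = gene_pos,                # data.frame: geneid, chr, left, right
  cisDist = 5e5,                     # 500 kb cis window
  output_file_name = NULL,           # skip trans output
  pvOutputThreshold = 0,             # cis-only analysis
  output_file_name.cis = "cis_eqtls.txt",
  pvOutputThreshold.cis = 0.05       # retain for empirical FDR filtering later
)
```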
For cross-dataset analyses at the gene level without reliance on heritability calculations, we set the --hsq_p option to 1.0001 in the FUSION.compute_weights.R step and evaluated the top1, BLUP, lasso and enet models. Heritability estimates were calculated with the GCTA software package by using SNPs within 500 kb up- and downstream of each gene 78 . We note that the TWAS algorithm requires both convergence of the heritability calculations and a positive resulting heritability estimate, requirements that are hard-coded in the lasso procedure; this resulted in TWAS statistics for 13,791 genes in DG-GCL, 16,525 genes in DLPFC and 15,573 genes in the hippocampus, rather than every expressed gene. General statistical reporting Sample sizes were 224 samples (112 DG-GCL and hippocampus matched pairs from the same donors) for Fig. 1 , 263 samples (across 263 donors) for all other analyses of the DG-GCL and 333 samples (across 333 donors) for all other analyses of the hippocampus. All box plots shown in the main and supplementary figures display the median as the center, the IQR (25th–75th percentile) as the box range and 1.5 times the IQR as the whiskers. All reported P values are two-sided, and all P values were adjusted for multiple testing with Benjamini–Hochberg correction unless otherwise indicated. Residuals of our many linear models were assumed to be normally distributed across all genes and models, but this was not formally tested. All points were used in all analyses: for example, outliers were not removed. Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability Raw sequencing reads are available through SRA accession code SRP241159 and BioProject accession code PRJNA600414 . Processed data are available at our website. Code availability Code is available online, archived via Zenodo.
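The within-feature-type multiple-testing adjustment of the TWAS statistics described above can be sketched in a few lines of R; the data frame and its column names are hypothetical.

```r
# `twas`: hypothetical data.frame with one row per feature and columns
# `feature_type` (gene/exon/junction/transcript) and `p` (TWAS P value).
twas$fdr  <- ave(twas$p, twas$feature_type,
                 FUN = function(p) p.adjust(p, method = "BH"))
twas$bonf <- ave(twas$p, twas$feature_type,
                 FUN = function(p) p.adjust(p, method = "bonferroni"))
```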
Past research in the field of neuroscience suggests that specific types of cells can contribute to the development of psychiatric disorders, including schizophrenia. However, identifying the types of cells that may play a role in schizophrenia can be quite challenging, particularly when using some of the most conventional techniques for the analysis of human tissue. Researchers at the Lieber Institute for Brain Development and the Astellas Research Institute of America have recently carried out a new study aimed at investigating gene expression in an important type of neuron that could be associated with schizophrenia. In their paper, published in Nature Neuroscience, they profiled gene expression in a region of the brain that has been linked to schizophrenia, namely the dentate gyrus of the hippocampus. Over the past few years, the same team of researchers conducted several studies with the goal of better understanding the molecular correlates of schizophrenia by analyzing human brain tissue collected post-mortem. These experiments were carried out on homogenate brain tissue, which contains a complex mixture of a variety of cell types. While these studies yielded important insights, the use of homogenate brain tissue was far from ideal, as it made it harder to focus investigations on specific cell types hypothetically associated with gene expression signals in schizophrenia. "Previous research had implicated the dentate gyrus in psychiatric illness and this subregion of hippocampus plays an important role in memory," Daniel Hoeppner, one of the researchers who carried out the study, told Medical Xpress. "In our study, we leveraged the distinct morphological appearance of the granule cell layer, using laser capture microdissection to cut this layer out of the surrounding hippocampus tissue." The experimental design that the researchers used in their recent work has several important advantages. One of its key strengths is that it involves the use of RNA sequencing (RNA-seq) data from both hemispheres of the same brains: the bulk hippocampus region from one hemisphere and the dentate gyrus granule cell layer from the other. By analyzing these data, the researchers were able to identify gene expression signatures specific to the granule cell layer of the dentate gyrus (DG-GCL) and others that appeared to be shared with other parts of the hippocampus. These contrasts in the cellular specificity of different parts of the hippocampus were the primary focus of the researchers' analyses. "From a methodological standpoint, many researchers have moved from homogenate brain tissue directly to individual nuclei using so-called single-nucleus RNA sequencing (snRNA-seq)," Thomas Hyde, another researcher involved in the study, told Medical Xpress. "However, these evolving methods still shallowly profile gene expression, particularly from less abundant cell populations. The use of laser capture microdissection allowed us to focus on morphologically—or spatially—defined cell populations and use existing well-established sequencing technologies to deeply profile their transcriptomes." Using laser capture microdissection combined with RNA sequencing, the researchers were able to identify far more cellular specificity for genes found in genome-wide association study (GWAS) risk loci than had been characterized in previous studies. In other words, they identified cell types and genetic effects in the DG-GCL brain region that could be associated with the risk of developing schizophrenia.
The researchers identified approximately 9 million expression quantitative trait loci (eQTL) associations in the DG-GCL, 15% of which were unique to this brain region and absent in other parts of the bulk hippocampus. This 15% included 15 expression loci that were previously highlighted as potential schizophrenia risk variants. By analyzing these findings, the researchers were able to unveil genetic signals associated with schizophrenia that had never been identified before, including decreased expression of the genes GRM3 and CACNA1C. "Identifying novel risk gene associations specifically in the dentate gyrus could ultimately motivate functional experiments to generate hippocampal granule cell neurons from induced pluripotent stem cells (iPSCs) and alter the expression of these risk genes to better understand biological mechanisms of risk," Mitsuyuki Matsumoto, another researcher who carried out the study, told Medical Xpress. This recent report highlights the vast potential of using targeted sampling strategies, such as laser capture microdissection, to investigate specific cellular patterns in the human brain. The findings gathered by Hoeppner, Hyde, Matsumoto and their colleagues also provide valuable new insight into gene expression patterns that may be associated with the risk of developing schizophrenia. "Our work suggests that diving deeper into specific cell types of the human brain might be more fruitful for risk gene discovery than additional brain regions of homogenate tissue," Andrew Jaffe said. "The Lieber Institute for Brain Development will thus continue developing laser capture microdissection strategies to profile additional specific cell populations in human postmortem brain tissue. In parallel, we have developed strategies for spatial transcriptomics analyses of human postmortem brain tissue and are now adapting these approaches to study the human hippocampus."
10.1038/s41593-020-0604-z
Earth
Researchers urge reduced use of PFAS chemicals in consumer products
Ian T. Cousins et al, The concept of essential use for determining when uses of PFASs can be phased out, Environmental Science: Processes & Impacts (2019). DOI: 10.1039/c9em00163h
http://dx.doi.org/10.1039/c9em00163h
https://phys.org/news/2019-06-urge-pfas-chemicals-consumer-products.html
Abstract Because of the extreme persistence of per- and polyfluoroalkyl substances (PFASs) and their associated risks, the Madrid Statement argues for stopping their use where they are deemed not essential or when safer alternatives exist. To determine when uses of PFASs have an essential function in modern society, and when they do not, is not an easy task. Here, we: (1) develop the concept of “essential use” based on an existing approach described in the Montreal Protocol, (2) apply the concept to various uses of PFASs to determine the feasibility of elimination or substitution of PFASs in each use category, and (3) outline the challenges for phasing out uses of PFASs in society. In brief, we developed three distinct categories to describe the different levels of essentiality of individual uses. A phase-out of many uses of PFASs can be implemented because they are not necessary for the betterment of society in terms of health and safety, or because functional alternatives are currently available that can be substituted into these products or applications. Some specific uses of PFASs would be considered essential because they provide for vital functions and are currently without established alternatives. However, this essentiality should not be considered as permanent; rather, constant efforts are needed to search for alternatives. We provide a description of several ongoing uses of PFASs and discuss whether these uses are essential or non-essential according to the three essentiality categories. It is not possible to describe each use case of PFASs in detail in this single article. For follow-up work, we suggest further refining the assessment of the use cases of PFASs covered here, where necessary, and expanding the application of this concept to all other uses of PFASs. The concept of essential use can also be applied in the management of other chemicals, or groups of chemicals, of concern. Environmental significance PFASs are man-made organic contaminants that can be found everywhere in the global environment, largely as a result of their high persistence and wide use. Based on concerns regarding their high persistence and other hazardous properties, it has been argued that the production and use of PFASs should be limited to essential uses only. In this paper, we translate the concept of “essential uses” or “essentiality” into three criteria to determine when uses of PFASs are essential, or not, and demonstrate how the criteria can be applied to different use cases of PFASs. This approach can inform and encourage manufacturers, retailers and end users to consider phasing out and substituting uses of PFASs. Thus, the uses and related emissions of PFASs can be systematically limited and the long-term harm to human health and the environment can be avoided. Introduction Per- and polyfluoroalkyl substances (PFASs) are a group of more than 4700 substances 1 that have been produced since the 1940s and used in a broad range of consumer products and industrial applications. 2 The multiple uses of PFASs have been well illustrated by the FluoroCouncil. 3 PFASs can be broadly divided into low molecular weight and high molecular weight (polymeric) substances. The polymeric PFASs can be further subdivided into side-chain fluorinated polymers, fluoropolymers and perfluoropolyethers. 2 The review of Buck et al.
2 and the FluoroCouncil website 3 should be consulted for a detailed description of the structures, classes and uses of low and high molecular weight PFASs, as that background will not be provided here. Since 2000 there have been a number of voluntary industry phase-outs and regulatory actions to cease the manufacture and use of long-chain perfluoroalkyl acids (PFAAs; defined as including perfluoroalkane sulfonic acids (PFSAs) with perfluoroalkyl chains containing 6 carbons or more, and perfluoroalkyl carboxylic acids (PFCAs) with perfluoroalkyl chains containing 7 carbons or more) and their precursors, which can transform in the environment or within organisms to long-chain PFAAs. The most common replacements for the above-defined long-chain PFAS chemistries are shorter-chain PFASs, e.g. PFAAs with fewer fluorinated carbons than long-chain PFAAs, and perfluoroether-based substances (PFASs with perfluoroalkyl segments joined by ether linkages). 4 Although some of these replacement PFASs are less bioaccumulative, they are all as highly persistent in the environment as their predecessors. 5,6 PFAAs that are considered short-chain and non-bioaccumulative may also lead to high internal concentrations if people are continuously exposed to high levels. Moreover, short-chain PFAAs, such as perfluorobutanoic acid (PFBA) and perfluorohexanoic acid (PFHxA), tend to be highly mobile and to move readily into ground and surface waters once released to the environment, where they can reside for decades to centuries. 7–10 As a result of their high environmental persistence, widespread use and release of any PFAS, even polymeric PFASs, 11 will lead to irreversible global contamination and exposure of wildlife and humans, with currently unknown consequences. 12–14 Based on concerns regarding the high persistence of PFASs and the lack of knowledge on the chemical structures, properties, uses and toxicological profiles of most PFASs currently in use, it has been argued by more than 200 scientists in the Madrid Statement that the production and use of PFASs should be limited. 12 Indeed, in the textile sector, some brands and retailers have recognized the problems associated with PFASs and have already taken significant steps to phase out all uses of PFASs in their consumer products. 15–18 It is neither practical nor reasonable to ban all uses of PFASs in one step. Some specific applications may serve a critical role for which alternatives currently do not exist. However, if some uses of PFASs are found not to be essential to health, safety or the functioning of today's society, they could be eliminated without first having to find functional alternatives that provide adequate function and performance. Elimination of non-essential uses of PFASs could form a starting point for a process that leads to a global phase-out ( e.g. through the Stockholm Convention on Persistent Organic Pollutants). To critically evaluate the idea that PFASs are essential in modern society, the essentiality of PFASs should be carefully tested against the available evidence for each of their uses. Given the thousands of PFASs on the market and their many uses, this is a formidable but necessary task. Before proceeding with this task, a definition of essentiality, or essential use, is needed. If PFASs are considered non-essential in a given use, then a phase-out of PFASs from that use can be implemented.
The aims and structure of this paper are therefore to: (1) define the concept of essential use or essentiality, (2) apply the concept to various use categories of PFASs to determine the feasibility of limiting use, as showcases of the concept, and (3) outline the remaining challenges for phasing out uses of PFASs in society and provide recommendations for further work. It is not our intention to conduct conclusive assessments for our selected use cases of PFASs on the individual use level. Follow-up work may be needed to cover each use case in more detail, where necessary, and to expand the application of the concept to all other uses of PFASs. The concept of ‘essential use’ This approach is based on the example of the Montreal Protocol, which phased out the use of ozone-depleting chlorofluorocarbons except for certain ‘essential’ uses, and which defined the concept of ‘essential use’ in Decision IV/25. 19 The two elements of an essential use are that a use is “necessary for health, safety or is critical for the functioning of society” and that “there are no available technically and economically feasible alternatives”. To identify uses of PFASs that are non-essential, we combine the definition of essentiality with several categories of PFAS uses. Overall, this leads to the three categories summarized in Table 1.
Table 1 Three essentiality categories to aid the phase-out of non-essential uses of chemicals of concern, exemplified with PFAS uses
(1) “Non-essential”: uses that are not essential for health and safety, and the functioning of society; the use of substances is driven primarily by market opportunity. PFAS examples: dental floss, water-repellent surfer shorts, ski waxes.
(2) “Substitutable”: uses that have come to be regarded as essential because they perform important functions, but where alternatives to the substances have now been developed that have equivalent functionality and adequate performance, which makes those uses of the substances no longer essential. PFAS examples: most uses of AFFFs, certain water-resistant textiles.
(3) “Essential”: uses considered essential because they are necessary for health or safety or other highly important purposes and for which alternatives are not yet established (a). PFAS examples: certain medical devices, occupational protective clothing.
(a) This essentiality should not be considered permanent; rather, constant pressure is needed to search for alternatives in order to move these uses into category 2 above.
For uses in category 1 (“non-essential” uses), a phase-out via a ban or restriction of PFASs can be prepared because these uses are not necessary for the betterment of society in terms of health, safety and functioning. The technical function of the PFAS (if it has one) in the use case could be considered “nice to have” ( e.g. non-stick frying pans), but it is not essential. In many cases the “nice to have” function can be fulfilled through substitution with fluorine-free alternatives. Even where there are no alternatives to PFASs for providing the “nice to have” function, the use case can be banned or phased out because it is not essential. Uses in category 2 (“substitutable” uses) fulfill important functions but are assessed to be non-essential because there are alternatives available that can be substituted into these products or applications and provide the necessary technical function and performance.
Alternatives may need to be made better known and more easily available, but there is no fundamental obstacle to removing PFASs from these uses. Upon increased market uptake, the costs can be expected to decrease. 20,21 Uses in category 3 (“essential” uses) are considered necessary and currently have no established alternatives to PFASs that provide the necessary technical function and performance. Innovative research and development may be needed to identify chemical or engineering alternatives and to make them technically and economically feasible. By identifying these opportunities, strong market incentives will be created for industry to develop such alternatives. In support of this approach, research and innovation funding could be made available specifically for this purpose, and to support start-up companies that intend to develop and market new alternatives. Implementation of this conceptual framework could give rise to ‘grey zones’ where it may not be straightforward to assign a use to a particular category. For example, a grey zone might appear between categories 1 and 2 because some uses of PFASs may be considered nice-to-have by some ( e.g. a stain-proof and waterproof outdoor jacket for everyday use) and necessary by others. Similarly, a grey zone could arise between categories 2 and 3 because the availability and performance of alternatives is being debated ( e.g. AFFFs used by the military for extinguishing fuel fires). In order to avoid or minimize such ‘grey zones’ in the implementation of this conceptual framework, clear criteria and relevant processes need to be pre-defined. This would require follow-up work that is beyond the scope of the present paper. Technical performance standards may play a role in defining whether the use of PFASs is or is not considered “essential” in certain cases. Technical performance standards are detailed specifications concerning how a product should perform in certain circumstances and are often voluntary. However, they may be used to define whether a product is of sufficient quality to be placed on the market or to be purchased through public procurement. For example, some European Union product-related legislation sets so-called “essential requirements” for certain products and then delegates the task of defining how to meet those requirements to European standard-setting bodies, such as the European Committee for Standardization (CEN). The International Organization for Standardization (ISO) and national bodies such as the German Technischer Überwachungsverein (TÜV) may also set certification requirements that may be important in the design of the product performance, and how to demonstrate it. The case studies below provide several examples of how technical standards may affect whether a use of PFASs is “essential” or not. Case studies of uses of PFASs Below we provide descriptions of several ongoing uses of PFASs. We discuss whether the uses of PFASs are essential or non-essential based on the categorization in Table 1 . Personal care products and cosmetics PFASs have been found in a range of different cosmetics and personal care products including hair products, powders, sun blocks and skin creams. 22 The fluorinated ingredients in some of the products that have been chemically analyzed are listed in Schultes et al. 22 and include a range of fluorosurfactants and, in some cases, the fluoropolymer polytetrafluoroethylene (PTFE).
The use of certain PFASs in these products may lead to direct human exposure and potential health effects following dermal or oral uptake. It is not clear whether any technical function provided by the PFASs is truly necessary. After a recent campaign by a Swedish NGO publicizing the presence of PFASs in certain cosmetics, it was relatively easy for several major retailers and brands of cosmetics to quickly announce phase-outs of PFASs, for example, L’Oréal, H&M, Lumene, The Body Shop, Isadora, and Kicks. 23 If PFASs in these products were needed for their technical function (possibly liquid repellency and/or to aid spreading over and into the skin), then drop-in alternatives appear to have been readily available given the rapid phase-out by retailers. The use of PFASs in personal care products falls under category 1 in Table 1 . Ski waxes Whereas most skiers use hydrocarbon-based glide waxes, fluorinated glide waxes are also available, though much more expensive. The fluorinated waxes are favored by competitive skiers because they are highly water repellent and result in better glide compared to hydrocarbon-based waxes. The PFASs used in fluorinated ski waxes are diblock semifluorinated n -alkanes (SFAs) mixed with normal paraffins. 2 PFCAs, including perfluorooctanoic acid (PFOA), have also been found in fluorinated ski waxes provided as solids or in powder form. 24 The presence of SFAs in snow and soil samples from a ski area in Sweden was recently demonstrated, 25 and professional ski wax technicians working for the Swedish national cross-country ski team were shown to be highly exposed to PFCAs. 26 From July 2020 onwards, PFOA and related substances ( e.g. substances which might form PFOA in the environment) will be banned in all products sold in the EU, including ski waxes, owing to its recent addition to the REACH Annex XVII list of restricted substances (entry 68). No essential use of PFASs in ski waxes was found in the restriction process and this use category is therefore clearly non-essential. Functioning hydrocarbon-based ski waxes were in use before the fluorinated waxes were introduced. The development of fluorinated waxes was driven by their exceptional technical performance and market opportunity. Fluorinated waxes provide a “nice to have” function that is not essential, and therefore this use case falls under category 1 in Table 1 . However, European ski teams are continuing to use fluorinated waxes. The exception is Norway, which in October 2018 announced that it had banned the use of fluorinated ski waxes in U16 categories in national competitions. 27 Fire-fighting foams Class B firefighting foams are formulated to extinguish fires of flammable liquids, such as liquid hydrocarbon fuels. Those currently available are either (i) aqueous film-forming foams (AFFF), fluoroprotein foams (FP) or film-forming fluoroprotein foams (FFFP), all of which contain fluorosurfactants ( i.e. they contain PFASs), or (ii) fluorine-free class B foams (F3) using proprietary mixtures of hydrocarbon or silicone surfactants. 28 PFAS-containing AFFFs historically contained long-chain PFAAs (and their precursors), 29 but since 2015 30 the foam manufacturers have eliminated long-chain PFAAs (and their precursors) from their products. Current fluorotelomer-based AFFF formulations contain fluorosurfactants that may transform to short-chain PFAAs (primarily PFHxA and shorter-chain PFAAs) in the environment, which are thought to be less bioaccumulative and less toxic than their longer-chain predecessors.
However, short-chain PFAAs are extremely persistent and mobile, and if clean-up of soil or water is later needed, it will be extremely expensive and time-consuming, if at all possible. 13,31 Fluorine-free class B foams were first developed in the early 2000s by the 3M Company, and since then many other companies have marketed fluorine-free class B foams. 28 Many of the currently available fluorine-free foams meet the standard firefighting performance certifications applicable to PFAS-containing AFFF and related foams. 28 Though some debate continues concerning whether PFAS-containing foams remain necessary for certain scenarios, e.g. fires at refineries or involving very large fuel tanks, in recent years a number of commercial airports, chemical industry facilities, oil and gas platforms, fire brigades and some national defense forces around the world have switched to using fluorine-free foams based on demonstrated operational performance in extinguishing fuel fires. However, US military forces are currently prevented from switching to fluorine-free foams because the applicable technical standard MIL-F-24385F(SH) – though revised in 2017 to reduce PFOA and PFOS in AFFFs – still requires fluorinated chemistry in addition to setting a performance-based requirement. Note that in October 2018, the US Congress enacted a bill 32 permitting civilian airports across the US to use non-fluorinated alternatives. Hydrocarbon-based foams have been shown to be biodegradable, with only localized, short-term problems associated with their release when extinguishing fires or during spillages. The silicone-based foams may contain low residual amounts of cyclic siloxanes ( e.g. decamethylcyclopentasiloxane or D5), which have been judged to be persistent and bioaccumulative. 33 Both D5 and D4 (octamethylcyclotetrasiloxane) are listed as Substances of Very High Concern under REACH, primarily because of their vPvB (very persistent, very bioaccumulative) properties. 34 In summary, the fluorine-free foams that have been developed and improved since the early 2000s are promising from an operational perspective 35–37 and also from an environmental and human health perspective. Some military organizations maintain that only PFAS-containing AFFF can provide the necessary performance requirements, particularly in the case of large fuel fires. Because of this ongoing debate, this use category currently falls under category 2 or 3 in Table 1 . Durable water and stain repellency in textiles Liquid repellency in textile products can range from an optional “nice-to-have” property in leisure jeans to essential protection needed in occupational protective clothing. 38 The textile sector often refers to these chemistries as durable water repellents (DWRs), but the leading market technology repels more than just water. Since their introduction in the 1950s, the highest level of repellency for both oil/stain and water has been achieved with side-chain fluorinated polymers. Substitution to ‘short-chain’ side-chain fluorinated polymers (typically C 6 or C 4 perfluoroalkyl chains) has taken place in recent years. However, there is concern regarding the extreme persistence and lack of human health data for short-chain PFAAs. A variety of new non-fluorinated DWR alternatives has been developed to create repellent textile surfaces, with a variety of polymer architectures, including linear polyurethanes, hyper-branched polymers and nanoparticles.
38 The functional moieties in terms of liquid repellency consist of either saturated alkyl chains ( i.e. hydrocarbons) or polydimethylsiloxane (PDMS) chemistry ( i.e. silicone polymers). 38 Although hazards associated with non-fluorinated DWRs are not yet fully understood, the development of biodegradable alternatives is an important step. Similar to the silicone-based surfactants used in fire-fighting foams, the silicone-based DWRs may contain residual amounts of persistent cyclic siloxanes ( e.g. D4 and D5). Non-fluorinated DWRs have been shown to provide water repellency equal to that of short-chain fluorinated polymers and are suitable substitutes for consumer outdoor clothing. 39 Indeed, a number of leading brands already provide water-repellent outdoor jackets marketed as, e.g. , “fluorine-free”. However, in the case of both non-polar and polar liquids with very low surface tension (such as olive oil or gastric fluid), so far only short-chain fluorinated polymers have been shown to provide effective protection. 40 Such protection may be important in certain occupational settings where a specified level of performance is required. Medical textiles are an example of where technical standards to protect human lives require a certain performance that may be difficult to meet without the use of PFASs. The European standard EN 13795 defines how the essential requirements set forth in the EU Medical Devices Directive (93/42/EEC) 41 should be met with respect to surgical gowns, drapes and clean air suits. Along with setting performance requirements aimed at preventing the transmission of infectious agents between patients and medical staff, EN 13795 also stipulates the test methods for evaluating whether the performance requirement is met. The test method EN 20811 42 – resistance to liquid penetration – measures the pressure at which water will penetrate the fabric and is used to determine whether the fabric will provide sufficient protection against contamination from penetration by e.g. bodily fluids. Current non-fluorinated DWRs may not provide sufficient liquid repellency for non-polar bodily fluids with low surface tension. An alternative is to use surgical gowns coated with a plastic laminate, which offer sufficient protection against biological fluids containing potentially harmful viruses and bacteria but may not be sufficiently breathable for longer operations. Similarly, performance standards set by the US National Fire Protection Association for protective clothing for firefighters and other emergency responders for water repellency, oil/stain repellency and breathability are currently not possible to meet without fluorinated chemistry. Other types of occupational clothing, e.g. in the oil and gas sector, may require a similar combination of water and oil/stain repellency as well as breathability. At least for now, these uses of PFASs may be considered essential and are, therefore, in category 3 until effective and safer alternatives are available. In summary, non-fluorinated DWRs are available that provide good water repellency (and certain stain repellency) meeting consumer requirements and expectations for most outdoor apparel, casual wear and business attire (category 2). In some cases, the use of fluorinated DWRs in textiles is “nice to have” ( e.g. water-repellent surfer shorts) but is non-essential and falls under category 1. Only a few uses of PFASs in textiles, e.g.
the occupational protective clothing market, where repellency of a wider range of liquids as well as breathability are necessary, fall under category 3 in Table 1 . In those cases, innovative solutions are needed to provide non-fluorinated alternatives. Food contact materials Food contact materials (FCMs) cover a range of materials that at some stage come into contact with food. This includes (industrial) food-production equipment and machinery, food packaging, and kitchen utensils like non-stick baking forms and pans. Growing consumer concern over the environmental and health impacts of plastic packaging has led to increasing market pressure for alternative packaging, including paper. 43 This may result in increasing exposure to PFAS-containing paper-based materials. The types of fluorochemistry used to protect paper and board have changed over time. 44 Initially, long-chain PFASs were used; these were phased out in the 2000s. 44 Current fluorinated paper and board products are largely based on “short-chain” fluorotelomer-based polymeric products, which are side-chain fluorinated polymers containing perfluoroalkyl side chains, typically with six perfluorinated carbons, 44 and poly- and perfluoropolyethers. 45–48 Despite reassurances by the chemical manufacturing industry that short-chain fluorinated products are safe, there is concern that PFASs will migrate into food and cause harm to human health. 44 Non-fluorinated alternatives have subsequently entered the market in recent years. For example, COOP Denmark A/S, a Danish consumer goods retailer, has succeeded in completely removing PFASs from all its products since September 2014. 49 Although the current polymer chemistry used in paper and board in food contact materials is similar to that used in textiles, paper and board are often made for single use, whereas textiles ( e.g. outdoor jackets) need to be durable over the lifetime of a garment. However, some paper and board products need to provide repellency to oil for weeks to months ( e.g. butter wrappers), whereas others ( e.g. fast-food wrappers) only require oil repellency for a matter of minutes. The substitution strategies for paper and board are therefore different than for DWRs in textiles, given the difference in materials and performance requirements, and may even differ among food contact applications. There are generally two types of barriers against grease or fat for paper and board: a physical barrier or a chemical barrier. 44 A physical barrier preventing penetration of a liquid into the paper may be sufficient in certain types of single-use applications. The chemical barrier, which is the approach used in fluorinated products, repels the grease in the food owing to the very weak physico-chemical interaction between the grease and the paper surface. Two of the most common types of paper that provide a physical barrier against grease are Natural Greaseproof paper 50 and vegetable parchment, 51 both providing a dense cellulose structure that prevents the grease from soaking into the paper. There are also various non-fluorinated chemical barriers that can provide similar repellency to grease as fluorinated repellents, including hydrocarbon- and silicone-based alternatives. 52 A third alternative is to add physical barriers such as aluminum or plastic coatings to the paper to provide protection. 53 In food production, PFASs are mainly used as non-stick fluoropolymer ( e.g.
PTFE) coatings of (metal) surfaces to lower friction (which protects the equipment from abrasion), to minimize adhesion (which allows better cleaning of surfaces), as non-stick or heat- and acid-resistant fluoroelastomer membranes on conveyor belts, and as lubricant oils and greases in machinery. 54–57 Many of the same uses exist in household kitchen utensils and appliances. These uses are described in industry patents and commercial materials, 54 but the levels and types of PFASs have been studied only to a limited extent. 58,59 Non-stick kitchenware is normally produced by either spraying or rolling layers of PTFE onto the surface of the kitchenware. One could argue that the non-stick coating is a “nice to have” function rather than an essential function, given that it is possible to cook food without the non-stick functionality. If the non-stick coating is considered an essential function in a modern society, then other possible non-stick coatings are available, including enamelled iron, ceramic and anodized aluminium coatings. 60 In summary, non-fluorinated alternatives have been historically available for all applications of paper-and-board food packaging, and the use of fluorinated protective coatings has never been essential (category 1). For example, COOP, a major grocery retailer in Denmark, has found alternatives for all products that previously used PFASs. 49,61 For non-stick cookware there are also non-fluorinated non-stick alternatives that work well in households, and this is also not an essential function (category 1). In the food production industry, non-fluorinated conveyor belts, lubricants and greases exist, but it is not currently clear whether functional alternatives to fluoropolymer protection against abrasion exist (categories 2 or 3). Medical devices Another use of fluoropolymers is as coatings in catheters, stents and needles to reduce friction and improve clot resistance, and to provide protein resistance in filters, tubing, O-rings, seals and gaskets used in kidney dialysis machines and immunodiagnostic instruments. 3,54,62 The safety evaluation of these devices for use in humans was discussed by Henry et al. (2018). 63 After review, multiple regulatory agencies have concluded that the use of PFASs in these products, including in devices implanted into patients' bodies, does not pose an appreciable risk because the fluoropolymers are not bioavailable. 63–65 It is, however, unclear whether impurities of fluoropolymer processing aids such as PFOA and HFPO-DA were included in the regulatory reviews. In summary, the inclusion of fluoropolymers in medical devices confers several benefits and does not appear to pose substantial health risks to those who are exposed to these devices through procedures or who have received implants. However, the production and disposal of these devices will continue to lead to the release of PFASs into the environment unless steps are taken to eliminate environmental releases. The use of PFASs in medical devices falls under categories 1–3 in Table 1 (depending on the specific use). However, owing to limited information in the public domain, it is currently unclear whether all medical devices need fluoropolymers or only certain types of medical devices need fluoropolymers. Pharmaceuticals There is a wide range of fluorine-containing pharmaceuticals. 66 Since the first fluorine-containing drug was approved by the U.S.
Food and Drug Administration (FDA) in 1955, nearly 150 fluorinated drugs have reached the market, and about 30% of newly approved drugs contain fluorine constituents, including fluoroalkyl groups (a smaller subset can be defined as PFASs). According to Zhou et al. (2016), 66 fluorinated drugs encompass all therapeutic areas, are structurally diverse, and are among the most-prescribed and/or profitable in the U.S. pharmaceutical market. Fluorination of pharmacological agents is often used to enhance their pharmacological effectiveness, increase their biological half-life and improve their bioabsorption. 66 Some agents are analogous to the long-chain PFASs, such as several types of artificial blood formulations and drugs for the lungs of prematurely born children (for example, perfluorooctyl bromide, an eight-carbon bromine-substituted PFAS 67 ). However, most fluorine-containing pharmaceuticals have only one or two fluorine atoms. A smaller number of drugs contain one or two trifluoromethyl groups (–CF 3 ), or the perfluoroalkyl moiety C n F 2 n +1 as defined by Buck et al. (2011). 2 As these agents become more widely produced, prescribed and used, disposal of these fluorinated drugs ( e.g. through municipal wastewaters) is likely to lead to increasing environmental releases of various PFASs. A transformation product of nearly all of the fluorinated anesthetics is trifluoroacetic acid (TFA or CF 3 COOH), which can arise from several metabolic or atmospheric degradation pathways 68 and has been a cause of environmental concern. 69–71 In summary, the addition of 1–3 fluorine atoms or trifluoromethyl groups to various pharmaceutical agents has improved their efficacy, half-lives and bioabsorption, and does not appear to pose substantial health risks to those who take them, relative to analogous non-fluorinated drugs. However, their production and disposal will continue to lead to the release of PFASs into the environment unless steps are taken to eliminate environmental releases. Releases of human metabolic excretion products may pose an additional environmental concern (contamination of water and greenhouse gases) as these drugs become more widely used. The uses of –CR 2 F, –CRF 2 , and –CF 3 groups in pharmaceuticals should not be evaluated for essentiality as a single group, as specific applications will likely fall under either category 2 or 3 in Table 1 ; there are functional non-PFAS alternatives for some pharmaceutical applications, whereas for other uses the pharmaceuticals have life-saving functions. Laboratory supplies, equipment and instrumentation PFAS-containing products, in particular fluoropolymers, are also ubiquitous in laboratories, laboratory supplies and analytical instrumentation. Initially this caused major concerns regarding contamination of environmental and biological samples and the maintenance of quality control during PFAS analysis. 72,73 The PFASs are used because they have high resistance to chemicals and heat, weak interaction with other substances and low permeability, which prevent chemicals/analytes from being adsorbed to the surface and absorbed into the material. In the laboratory, there are easily identifiable fluoropolymer ( e.g. PTFE) and fluoroelastomer-based products ( e.g. Viton). Examples include the use of fluoropolymer-based vials, caps and tape, and fluoropolymers in the solvent degassers of liquid chromatography (LC) instruments. Non-PFAS replacements may be available, depending on the purpose.
Personal protective equipment can also contain PFASs, including protective gloves and protective mist/anti-fog coatings on glass ( e.g. perfluoropolyethers, PFPEs). These applications can in general be substituted without major loss of functionality or performance; recommendations for PFAS-free alternatives are often provided as part of guidance to prevent cross-contamination when sampling or analyzing environmental matrices for PFASs. 74–76 As part of field or laboratory collection of particles of different sizes, some filters, such as glass fiber filters or ultrafiltration filters, are made of or are coated with PFASs to minimize sorption of compounds to the filter itself. As an alternative, plastic filters/vials with a low solid surface energy can be used ( e.g. polypropylene (PP), polytetramethylene oxide (PTME) and polyamide (nylon)). 46,77 More difficult to replace are fluoropolymer and fluoroelastomer seals (O-rings) and fluoropolymer-based tape within internal components of existing instrumentation. As a result of advances in analytical instrumentation, in particular ultra-high-performance liquid chromatography (UHPLC), fluoroelastomers are widely used as seals and membranes, and PTFE as inert surfaces inside analytical instruments and, in some cases, as tubing. The tubing can be replaced by polyetheretherketone (PEEK) or stainless steel tubing without a loss of performance in most applications. Some applications rely on fluorinated solvents ( e.g. trifluoroethanol) and acids (trifluoroacetic acid, pentafluorobutanoic acid, etc. ) added to reversed-phase LC-MS solvents, and specialty LC columns are based on fluorinated materials. Non-fluorinated alternatives exist for both these uses. Perfluoropolyether-based lubricants are also used as oils and greases in pumps and equipment; this can cause laboratory background contamination. Oil-free pumps exist and are reducing the laboratory background contamination, which is beneficial for both the analyses and workers' health. To address concerns related to instrument contamination by PFASs, manufacturers offer a delay column to keep the instrument-borne PFASs from eluting with target analytes during the same time window. For the vast majority of laboratory applications, PFAS alternatives have been used historically or have been newly developed. Therefore, most applications fall within categories 1–2 in Table 1 ; that is, they are non-essential and replaceable. A small number of current laboratory applications may fall within category 3 as being essential and without appropriate alternatives, and thus further innovation for effective substitution is required. Perfluorosulfonic membranes These are fluoroelastomers that exist in many forms and are used in a wide range of chemical synthesis and separation operations and in analytical instrumentation. These membranes are often used in processes that displace less efficient historical methods that use more energy and/or generate hazardous materials and byproducts. 78,79 Nafion® (CAS Number 66796-30-3) is the brand name for a perfluorosulfonic acid membrane from Chemours (formerly DuPont) that consists of a perfluorosulfonic acid copolymer with pendant sulfonic acid groups. It is stable under strongly oxidizing conditions and high temperatures. The density of sulfonic acid groups can be controlled during synthesis to select for variable ion exchange capacity, electrical conductivity and various mechanical properties.
One of the earliest principal uses of Nafion was as a membrane in the chlor-alkali process, which is the large-scale industrial process that uses brine and electricity to produce the common chemical feedstocks, chlorine gas and sodium hydroxide. 80 Historically these high-volume chemical commodities were prepared with brine in either asbestos diaphragm cells or mercury electrode cells. Both methods generate substantial quantities of hazardous wastes through either the mining and the fabrication of suitable asbestos membranes or the release of aqueous and volatile mercury wastes. Use of Nafion copolymer as a membrane in the electrochemical cell allows for excellent conductance of ions necessary for the process, while maintaining separation of the two parts of the cell under highly caustic conditions. Perfluorosulfonic acid membranes are also used in high-efficiency fuel cells where, in one example, hydrogen and oxygen are pumped into different chambers within a cell that are separated by the membrane, giving rise to a continuous supply of electricity for various specialty applications. Perfluorosulfonic acid membranes are also used as an acid catalyst in a wide range of chemical conversions leading to decreased energy inputs and higher-purity products. While it can be argued that perfluorosulfonic acid membranes have made many chemical preparation processes more efficient and cleaner, it is also important to acknowledge that the impacts from their production and use are still poorly understood. Research at one fluorochemical production site in Bladen County, North Carolina has documented that Nafion-related wastes have been released into the nearby Cape Fear River since at least 2012. 81 Moreover, the relatively advanced drinking water treatment plant in the city of Wilmington, North Carolina, has been unable to remove these Nafion-related wastes 82,83 giving rise to a situation where approximately 99% of the residents of Wilmington now have measurable concentrations of Nafion Byproduct 2 in their blood. 84 No human health data are currently available for Nafion Byproduct 2, and the human half-life of this material is likely to be on the order of months to years. 83 The production of perfluorosulfonic acid membranes has provided great utility by improving the efficiency of large-scale chemical syntheses while also reducing the emissions of other known hazardous byproducts (asbestos and mercury), but the current production process leads to the release of at least one persistent byproduct with near universal exposure in a downstream community. The use of perfluorosulfonic acid membranes is currently judged to be category 3 (essential) in the chlor-alkali process. Before the use of Nafion, there were concerns for worker safety and the environment associated with mercury and asbestos. The use of Nafion as an alternative was the direct result of the chlor-alkali industry addressing these concerns. In the case of the use as a proton exchange membrane (PEM) in fuel cells, there are alternatives to perfluorosulfonic acid membranes, 85 but these are under development and not used as commonly as Nafion (category 2). Although there is a lack of functional alternatives for certain applications, it is reasonable to insist that emissions of persistent and potentially toxic wastes from the production and use of perfluorosulfonic acid membranes be quantitatively determined and minimized. 
Discussion The Montreal Protocol has provided a successful blueprint to assess the essentiality of a class of widely used persistent chemicals found to have significant human and environmental health risks. Because of their extreme environmental persistence, and increasing data on their adverse effects including human health-related endpoints, PFASs present a prime opportunity for applying a similar approach to protect human health and the environment through the removal of these chemicals from non-essential uses. Our review of several key uses of PFASs demonstrates that a global phase-out of PFASs will currently be complicated, but it also indicates a number of starting points. In particular, different phase-out strategies will be required for each essentiality category. The essentiality of PFASs in the different use categories, based on our three categories in Table 1 , is summarized in Table 2 . Within a few of the larger use categories ( e.g. textiles) certain uses of PFASs appear to be easier to phase out ( e.g. leisure rain jackets) than others ( e.g. occupational protective clothing) owing to different technical performance requirements.
Table 2 Essentiality of PFASs in selected use categories (use – Table 1 category (a)):
Personal care products including cosmetics – 1
Ski waxes – 1
Fire-fighting foams (commercial airports) – 2
Fire-fighting foams (military) – 2 or 3
Apparel (medical: long operations) – 3
Apparel (protective clothing, oil and gas industry) – 3
Apparel (medical: short operations, everyday) – 2
Apparel (military: occupational protection) – 2 or 3
Waterproof jacket (general use) – 2
Easy care clothing – 1
Food contact materials – 1, 2 or 3
Non-stick kitchenware (fluoropolymers) – 1 or 2
Medical devices (fluoropolymers) – 1, 2 or 3
Pharmaceuticals – 2 or 3
Laboratory supplies, equipment and instrumentation – 1, 2 or 3
Perfluorosulfonic membranes in fuel cells – 2
Perfluorosulfonic membranes in chlor-alkali process – 3
(a) Note that the categories in the above table represent the current evaluation and may change in the future.
Alternatives assessment Even if PFASs are assessed, according to the criteria in Table 1 , to be non-essential in a particular use, and functional alternatives are available, this is only a first step to phase out and responsibly substitute PFASs. It cannot be generally assumed that non-fluorinated alternatives will be less harmful to human health and the environment than the PFASs they are replacing. The scientific discipline of alternatives assessment has established processes and best practices for identifying, evaluating, comparing and selecting safer alternatives to chemicals of concern based on hazards, performance and economic viability. 86–88 This process can be applied to PFASs used in material components, finished goods, manufacturing processes or technologies. Not all substitutions require direct replacements of a fluorinated compound with a non-fluorinated alternative ( i.e. a chemical alternative); a technological or engineering innovation ( i.e. a functional alternative) can be equally successful 4 and should always be encouraged and prioritized over chemical alternatives. Multiple alternatives should be assessed for a given PFAS until an acceptable substitution is found. Often, once an alternative is found for one use case, it may be easily adapted to other use cases of that chemical as well. In the assessment, once possible non-hazardous alternatives are identified, it is also important to consider multiple endpoints 89 such as energy use, material use (incl.
food waste, water use, packaging/machinery use and durability) and land use ( e.g. paper vs. plastic vs. glass), to avoid burden-shifting between different environmental and human impacts. When considering chemical alternatives for PFASs, the focus should be on the service the product should deliver. The compound should therefore be evaluated for performance against the specifications required for the product, as opposed to being compared directly to the PFAS being replaced. Additionally, the potential for health hazard and the potential for exposure – combined, these elements establish the health risks associated with the alternative – must be considered for the general public and vulnerable populations. Finally, additional considerations such as product longevity, persistence in the environment, and sustainability may be considered. Currently there are several established frameworks and evaluation metrics available for conducting alternatives assessments. 86,90 In the absence of a thorough evaluation, regrettable substitutions can occur. Challenges and opportunities in chemical regulation The Madrid Statement 12 recommends limiting the use of PFASs in society. Although all PFASs are highly persistent (or lead to highly persistent transformation products), many of them do not meet the other criteria commonly considered in international chemical regulation. It can be argued that their extremely high persistence alone should be cause for regulation and substitution, 13,14 but the practical regulatory tools to implement this approach are currently lacking. Within the context of the EU REACH Regulation, it has been argued 91 that the most effective way of regulating short-chain PFASs (as with the regulation of long-chain PFASs) is to identify them as Substances of Very High Concern under REACH Article 57, followed by a REACH Annex XVII restriction. Indeed, the EU has considered ( e.g. in the case of the restriction of PFOA and its related chemicals), and is continuously considering, ways to group PFASs in recognition of the impossibility of regulating more than 4700 PFASs individually. Another relevant regulatory framework is the UN Stockholm Convention on Persistent Organic Pollutants, which includes exempted uses similar to the essential-use exemptions under the Montreal Protocol. Under the Convention, the Conference of the Parties (COP) considers listing new persistent organic pollutants for elimination (Annex A), restriction (Annex B), and/or unintentional production (Annex C) based on a recommendation from the Convention's Persistent Organic Pollutants Review Committee (POPRC). The Convention requires that the COP, “taking due account of the recommendations of the Committee, including any scientific uncertainty, shall decide, in a precautionary manner, whether to list the chemical, and specify its related control measures, in Annexes A, B and/or C” (Art. 8, Para. 9). As part of its deliberation of whether to list a chemical, the COP also considers whether to allow for any “specific exemptions” and/or “acceptable purposes”. “Specific exemptions” are time-limited to one period of five years, with the possibility of a single extension for another five years, whereas the time period for the applicability of “acceptable purposes” is more open-ended. Currently, there are no clearly defined criteria for identifying “specific exemptions” and “acceptable purposes” set in the text of the Stockholm Convention.
Such “essential use-like” exemptions are primarily identified through the work of the POPRC on a case-by-case basis. However, the COP has subsequently adopted detailed criteria for consideration of requests to extend specific exemptions. For production exemptions, the requesting party must have submitted a justification for the continuing need for the exemption that establishes that the extension is necessary for health or safety, or is critical for the functioning of society; included a strategy in its national implementation plan aimed at phasing out the production for which the extension is requested as soon as is feasible; and taken all feasible measures to minimize the production of the chemical and to prevent illegal production, human exposure and release into the environment. In addition, the chemical must be unavailable in sufficient quantity and quality from existing stockpiles. Finally, in the case of a party with an economy in transition, the party must have requested technical or financial assistance pursuant to the Convention, in order to phase out as soon as feasible the production for which the extension is requested (see COP Decision SC-2/3, “Review process for entries in the Register of Specific Exemptions” 92 ). We are convinced that having clear legal guidelines for what constitutes an essential use (a process started in the present work) will benefit the Stockholm Convention and other regulatory frameworks by providing guidelines for determining how to apply the essential use-like exemptions, i.e. , by balancing costs against the societal benefits of the use of a substance or product. A clear definition of essential use ensures that only those applications that are necessary for health or safety (or other purposes highly important to society as a whole), and for which non-fluorinated alternatives are not yet available, could receive exemptions when chemicals are listed under the Convention. Further, this approach would protect those uses that are legitimately deemed essential until appropriate substitutions can be identified. The way forward Innovation in the development of alternatives to PFASs is ongoing, and many functional alternatives that provide adequate technical performance have been developed and put into practice for some use categories. However, in other use categories little innovation is under way, due to a lack of financial or regulatory drivers to change methods or production, significant technical challenges, lack of awareness of the market opportunities, or the small size of the market. Innovation is being encouraged in countries like Denmark ( e.g. substitution of PFASs in textiles) and Sweden through the availability of government funding for industry-academic partnerships ( e.g. the POPFREE project 93 to encourage small companies to develop non-fluorinated alternatives to PFASs). Furthermore, one of the four key areas in ECHA's 2018 strategy on substitution 94 is to ‘Develop coordination and collaboration networks between all stakeholders, ranging from institutions, member states, industry, academia and civil society’. In some cases, a PFAS in a product or use will be determined to be the only compound capable of delivering the required level of performance for that application. In these cases, it is recognized that immediate phase-out will not be feasible. But this assessment is only based on current technologies.
With clear legislative incentives, new technologies will typically be developed, and consequently PFAS uses in category 3 should continue to be reviewed for potential removal or replacement by new entrants to the market. In fact, use cases identified as category 3 should be the targets of industry and academic programs to develop innovations that may succeed in removing or replacing the PFAS with more sustainable functional alternatives. This system creates market pressure to be the first to develop new technologies. Chemical regulation, on the other hand, progresses slowly compared with product innovation, and assessment of individual PFASs is not a feasible way to protect public health. It is simply unlikely that society and industry will spend the money and time needed to generate adequate data to assess the risks of >4700 PFASs individually. Therefore, we strongly recommend that a grouping approach be employed and that PFASs be regulated as a group. Since regulation of the many thousands of PFASs by authorities is likely to be time consuming, it is important for industry (in particular product designers and manufacturers) to take voluntary measures that will contribute substantially to reducing the emissions of PFASs and their presence in products. There have already been several examples of retailers who, through private procurement, have phased out PFASs from their supply chains ( e.g. IKEA, Lindex, and H&M in Sweden, 15,17,95 COOP in Denmark, 61 Vaude in Germany, 96 L'Oreal in France 97 ), which in turn puts pressure on chemical manufacturers to find safer alternatives. We are convinced that our criteria on essential use can inform and encourage other retailers to consider phasing out and substituting PFASs in their products. These types of voluntary measures will in turn help regulators by demonstrating that functional alternatives exist. When policy makers face stakeholder groups from both sides, they can use data-driven essentiality assessments to support their decision making, e.g. , to show why certain uses are not necessary and can therefore be restricted. This will speed up regulatory actions in support of phasing out non-essential uses of PFASs, without putting health or safety applications at risk. It is a formidable task to apply the essential-use concept to all use cases of PFASs in detail. We have made a start here by illustrating how the concept can be applied to several use cases of PFASs, but to reach a conclusive assessment for each use case described in this review, follow-up work may need to cover the use cases in more detail (expanded, subdivided and refined) and, where necessary, engage relevant stakeholders with the necessary in-depth knowledge. Although here we have focused on PFASs, the concept of essential use can also be applied in the management of other chemicals, or groups of chemicals, of concern. Conflicts of interest This paper does not necessarily reflect the opinion or the policies of the German Environment Agency or the European Environment Agency. Acknowledgements This article has been prepared by the scientists collaborating as the Global PFAS Science Panel. We would like to thank the Tides Foundation for supporting our cooperation (grant 1806-52683). In addition, Stockholm University would like to thank the Swedish Research Council for Environment, Agricultural Sciences and Spatial Planning (FORMAS) and Stockholm County Council for providing funding (SUPFES FORMAS project no.
2012-2148 and the SUPFES-Health project), and the University of Rhode Island would like to thank the US National Institute of Environmental Health Sciences (grant P42ES027706). The authors appreciate the contribution of Dr Andrew Lindstrom of the U.S. Environmental Protection Agency.
Human exposure to unnecessary and potentially harmful chemicals could be greatly reduced if manufacturers added chemicals only when they are truly essential in terms of health, safety and the functioning of society. That's the conclusion of a study published today in Environmental Science: Processes & Impacts, a peer-reviewed journal published by the Royal Society of Chemistry. In this study, the researchers proposed a framework based on the concept of "essential use" to determine whether a chemical is really needed in a particular application. They demonstrated the concept on a class of synthetic chemicals known as PFAS (per- and polyfluoroalkyl substances). PFAS are used in many consumer goods because of their unique properties, including water and stain repellency. However, a growing number of scientists and health professionals are expressing concern about these chemicals since they persist for a very long time, seep into water and soil, and may adversely impact humans and wildlife. Human health problems linked to exposure to certain PFAS include kidney and testicular cancer, liver malfunction, hypothyroidism, high cholesterol, ulcerative colitis, lower birth weight and size, obesity, and decreased immune response to vaccines. The study classifies many uses of PFAS as "non-essential." For example, the study points out that it may be nice to have water-repelling surfer shorts, but in this instance, water repellency is not essential. Other products analyzed with the Essential Use Framework include personal care products and cosmetics, durable water repellency and stain resistance in textiles, food contact materials, medical devices, pharmaceuticals, laboratory supplies and ski waxes. Some uses may be regarded as essential in terms of health and safety, e.g., fire-fighting foams, but functional alternatives have been developed that can be substituted for PFASs. "Our hope is the approach can inform and encourage manufacturers, retailers and end users to consider phasing out and substituting uses of PFASs," said Ian Cousins of Stockholm University, lead author of the study and a world-leading researcher specializing in understanding the sources and exposure pathways of highly fluorinated chemicals. "A starting point would be the phase-out of the multiple non-essential uses of PFASs, which are driven primarily by market opportunity." The article notes that some retailers and manufacturers are already taking voluntary measures to phase out the use of PFAS in their products. It suggests that the Essential Use Framework can be applied to other chemicals of concern.
10.1039/c9em00163h
Space
Ancient asteroid grains provide insight into the evolution of our solar system
Takaaki Noguchi, A dehydrated space-weathered skin cloaking the hydrated interior of Ryugu, Nature Astronomy (2022). DOI: 10.1038/s41550-022-01841-6. www.nature.com/articles/s41550-022-01841-6 Journal information: Nature Astronomy
https://dx.doi.org/10.1038/s41550-022-01841-6
https://phys.org/news/2022-12-ancient-asteroid-grains-insight-evolution.html
Abstract Without a protective atmosphere, space-exposed surfaces of airless Solar System bodies gradually experience an alteration in composition, structure and optical properties through a collective process called space weathering. The return of samples from near-Earth asteroid (162173) Ryugu by Hayabusa2 provides the first opportunity for laboratory study of space-weathering signatures on the most abundant type of inner Solar System body: a C-type asteroid, composed of materials largely unchanged since the formation of the Solar System. Weathered Ryugu grains show areas of surface amorphization and partial melting of phyllosilicates, in which reduction from Fe 3+ to Fe 2+ and dehydration developed. Space weathering probably contributed to dehydration by dehydroxylation of Ryugu surface phyllosilicates that had already lost interlayer water molecules, and to weakening of the 2.7 µm hydroxyl (–OH) band in reflectance spectra. For C-type asteroids in general, this indicates that a weak 2.7 µm band can signify space-weathering-induced surface dehydration, rather than bulk volatile loss. Main Solar wind irradiation and high-velocity micrometeoroid bombardment dominate space weathering 1 , 2 for all airless bodies. However, the effects of these processes vary substantially, depending on the specific class of body. The solar wind is a plasma composed mainly of low-energy protons and electrons streaming from our Sun 1 , 2 , 3 , which induces radiation damage, including amorphization of silicates and formation of nanophase metallic iron particles (npFe 0 ). In contrast, micrometeoroids are interplanetary dust particles that impact airless surfaces at hypervelocities 4 , resulting in cratering, melting and vapour deposits, and sometimes also amorphous silicates and npFe 0 . Space-weathering products of two anhydrous bodies, the Moon and the S-type asteroid Itokawa, have been investigated extensively 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 , 14 . These studies revealed that nanometre-sized metallic Fe particles (npFe 0 ), formed via space weathering, resulted in weakened absorption features in visible to near-infrared reflectance. In contrast, it has been unclear what role npFe 0 plays in the reflectance properties of dark (C- and D-type) asteroids 1 , 2 . Space-weathering modification of reflectance spectral features from airless bodies makes identifying a direct link between asteroids and specific meteorite classes based on composition and mineralogy difficult. The Hayabusa mission of the Japan Aerospace Exploration Agency (JAXA) revealed the connection between visible to near-infrared reflectance spectra from S-type asteroids and ordinary chondrite meteorites 5 , with the difference largely attributable to the role of nanophase particles. However, laboratory experiments that mimic solar wind irradiation and micrometeoroid impact on C-type asteroids show a lack of detectable production of npFe 0 , with some spectra reddening (a positive change in spectral slope) and others bluing (the opposite) 15 , 16 , 17 , 18 , 19 , 20 . Thus, the observed changes of spectral slope and absorption bands in reflectance spectra of C-type asteroids compared with carbonaceous chondrite meteorites are difficult to interpret 15 , 16 , 17 , 18 , 19 , 20 . JAXA’s Hayabusa2 spacecraft observed spectral variation on asteroid Ryugu 21 , 22 , 23 , 24 thought to be related to space weathering.
Our studies of Ryugu samples offer the first opportunity to directly link the spectral variation to the space-weathering-induced physical and chemical alteration of regolith on C-type asteroids. Results Surface modifications found on Ryugu grains The mineralogy of most Ryugu grains investigated by (scanning) transmission electron microscopy is similar to that of CI chondrites 25 , which are the most chemically primitive materials in the Solar System 26 , consistent with other recent studies 27 , 28 , 29 , 30 , 31 . Therefore, to understand the space weathering of Ryugu grains is to understand the weathering of the most chemically primitive Solar System material. More than 500 grains (average diameter ~71 µm) collected at the first touchdown (landing) site (TD1) and >300 grains (average diameter ~57 µm) collected at the second touchdown site (TD2) were investigated for surface modifications potentially related to space weathering. Recognizable surface modifications of the phyllosilicate-rich matrix were found in ~6% of the observed grains from TD1 and ~7% from TD2 (Extended Data Fig. 1 ). The surface modifications of the grains differ considerably from those of grains from the Moon and Itokawa because the most abundant phases in Ryugu grains are hydrated sheet silicates (phyllosilicates), not anhydrous silicates (for example, olivine). Several distinct surface modifications are observed, including smooth layers, frothy layers, melt splashes and their combinations (Fig. 1 and Extended Data Figs. 2 and 3 ). We also examined three millimetre-sized grains (A0067, A0094 and A0058) that have surface modifications related to space weathering. Fig. 1: Secondary electron images of Ryugu grains showing surface modifications related to space weathering. a , The grain C0105–03004800 was collected at the second touchdown site. It is composed of two parts showing different types of space weathering: a frothy layer and a smooth layer. Enlarged images of the two boxed areas on this grain are shown in the insets at the upper right (frothy layer) and the lower left (smooth layer) corners of a . b , The grain A0104–02203700 was collected at the first touchdown site. The frothy layer partially covers the smooth layer on the left-hand side of the image. The boundary between the two types of layers is indicated by a dashed curve. The frothy layer has many burst vesicles. A melt splash, located at the lower centre of the image, is attached to the surface of the frothy layer. Source data Full size image Smooth layers on Ryugu grains Approximately 5% of the observed grains from TD1 and ~3% from TD2 have a smooth layer, evident as a thin (<100 nm) continuous smooth sheet covering the surface. Some of these layers contain vesicles of <50 nm diameter that intersect the surface (Fig. 2a and Extended Data Fig. 2a ). Partial detachment of the smooth layers is observed in some grains (Extended Data Fig. 4 ). Electron diffraction reveals that smooth layers are almost completely amorphous (Extended Data Fig. 2d ). Atomic ratios among major cations of the smooth layers are indistinguishable from those of the phyllosilicate-rich matrix (Fig. 3a ). Most Fe in the smooth layer is Fe 2+ , but most Fe in the underlying phyllosilicates is Fe 3+ , based on Fe L 3 -edge electron energy-loss spectroscopy (EELS) and Fe L 3 and Fe K X-ray absorption near-edge spectroscopy (XANES) (Fig. 3b and Supplementary Table 1 ), indicating that the smooth layer is more reduced than the matrix. Fig.
2: Cross-sections of three Ryugu grains showing typical surface modifications on the phyllosilicate-rich matrix. a , The cross-section A0104–02306901 was prepared from the grain A0104–02306900 collected at the first touchdown site. It has a smooth layer that forms a ~100-nm-thick continuous layer covering the surface of the grain. The phyllosilicate-rich matrix is present below the smooth layer. A yellow dashed curve indicates the cross-section of the sample surface. The boundary between the smooth layer and the phyllosilicate-rich interior is indicated by an orange dashed curve. b , The cross-section C0105–03003701 was prepared from the grain C0105–03003700 collected at the second touchdown site. A frothy layer containing abundant vesicles (darker circles) and <50-nm-size brighter spots (Fe-Ni sulfide beads) covers the surface of this grain. The thickness of the frothy layer varies considerably locally, from <100 nm to >500 nm. A yellow dashed curve indicates the cross-section of the sample surface. C-depo denotes carbon deposited to protect the surface of the samples during FIB processing. c , An enlarged image of a frothy layer in the cross-section A0104–02802202. The frothy layer contains many tiny (<20 nm across) blisters (vesicles just below the surface) on its surface. These are high-angle annular dark-field scanning transmission electron microscope images, in which materials with higher average atomic numbers appear brighter than those with lower average atomic numbers. Source data Full size image Fig. 3: Elemental compositions and redox states of Fe in a smooth layer, frothy layers and the interior phyllosilicates. a , The ternary [Si+Al]-Mg-Fe atomic-ratio diagram shows that the elemental compositions of a smooth layer are indistinguishable from those of the interior phyllosilicates in the cross-section sample A0104–02306901. b , However, the Fe L 3 -edge peak in EELS spectra shows that Fe 2+ is enriched in the smooth layer, which means that Fe 3+ in the smooth layer is reduced to Fe 2+ . The EELS spectra were obtained from the upper (U) and lower (L) parts of the smooth layers, the upper (U) and lower (L) areas around the boundary between the smooth layer and the interior phyllosilicates, and the interior phyllosilicates. c , By contrast, the frothy layer in the cross-section sample C0105–03003700 is more enriched in Fe relative to [Si+Al] and Mg than the interior phyllosilicates. d , The same compositional relationship is shown between the frothy layer in the cross-section sample A0058–C2001 and the interior phyllosilicates. The whole-grain sizes of these samples are quite different: C0105–03003700 and A0058–C2001 are ~30 µm and ~3 mm across, respectively. e – h , Fe 3+ in the frothy layers is also reduced to Fe 2+ . e , Fe L 3 -edge peak spectra obtained by EELS. The spectra were obtained from the frothy layer, the boundary area between the frothy layer and the interior phyllosilicates, and the upper and lower areas of the interior phyllosilicates. f , Fe L 3 -edge peak spectra obtained by STXM–XANES. g , h , Fe K-edge spectra ( g ) and background-subtracted pre-edge peak spectra ( h ) obtained by XANES. Int. phyllosilicates, phyllosilicates in the interior of a sample. Serp and Sap in a , c and d are abbreviations for serpentine and saponite, respectively.
Source data Full size image An unweathered Ryugu grain that was irradiated with 4 keV He + at a fluence of 1.3 × 10 18 ions cm − 2 to simulate space weathering shows a surface morphology and an internal structure that are very similar to those of the smooth layers (Figs. 1a and 2a , and Extended Data Fig. 2 ) of the bona fide space-weathered grains. On lunar and Itokawa grains, smooth surfaces were formed by micrometeoroid impacts and subsequent redeposition. On Ryugu grains, in contrast, we found a ~10-nm-thick vapour deposit on top of a smooth layer of only one grain (Extended Data Fig. 5 ), suggesting that ‘vapour deposition’ does not play an important role in forming smooth layers. Instead, the laboratory-irradiated Ryugu grain indicates that solar wind irradiation probably played an important role in modifying the surface of the phyllosilicate-rich matrix, and that the smooth layers represent space weathering induced by solar wind irradiation. Frothy layers and melt splashes on Ryugu grains Frothy layers, found on ~1% of the observed grains from TD1 and ~2% of grains from TD2, are composed of silicate glass containing abundant embedded vesicles ~0.1 to ~1 µm wide and numerous submicroscopic (<200 nm) rounded Fe-Ni sulfide beads (Fig. 2b and Extended Data Fig. 3 ). The internal structure suggests that silicate and Fe-Ni sulfides were melted and immiscibly separated into silicate and sulfide melts, and that vesiculation occurred during melting. The frothy layers have higher Fe and lower Si+Al and Mg than the interior phyllosilicate-rich matrix (Fig. 3c,d ), irrespective of the size of the grain on which they reside. The frothy layer is also more reduced in Fe than the underlying phyllosilicates (Fig. 3e–h ). The Fe-Ni sulfide beads, composed of pyrrhotite and pentlandite with diameters from ~200 to <10 nm, are ubiquitous within frothy layers (Extended Data Fig. 6a–d ). While most microphase sulfides were probably immiscibly separated as droplets during melting, the ~10-nm-sized nanophase sulfide shown in Extended Data Fig. 6d may be a vapour deposit, consistent with previous laser irradiation experiments 15 , 16 . We identified no npFe 0 within the frothy layers investigated. However, on the surface of a frothy layer, we found an aggregate composed of npFe 0 and troilite (stoichiometric FeS) (Extended Data Fig. 6e–h ). Porous apatite, dolomite and magnetite occur in some frothy layers and are believed to be relict minerals that survived melting. In addition, a frothy layer is observed with abundant blisters (vesicles just below the surface) (Fig. 2c ). Melt splashes (<10 µm across) are found on <1% of the observed grains from TD1 and ~1% of grains from TD2, and are attached to Ryugu grains with and without detectable surface modifications (Fig. 1b ). Exceptionally rare npFe 0 on Ryugu The exceptionally low abundance of npFe 0 in Ryugu grains is in stark contrast to lunar and Itokawa surface samples, which contain abundant npFe 0 both in radiation-damaged layers on ferromagnesian silicates 7 , 8 , 9 , 11 , 13 , 14 and in vapour deposit layers produced by micrometeoroid impacts 32 , 33 . While nano- to microphase Fe-bearing sulfides are ubiquitous in all the frothy layers on the Ryugu grains investigated, no interior npFe 0 was found. The reducing effect of space weathering might be insufficient to form npFe 0 from the abundant Fe 3+ contained in phyllosilicates in Ryugu grains (Supplementary Table 1 ). In addition, the –OH in phyllosilicates may hinder the reduction of Fe.
These results are similar to laboratory experiments in which in situ X-ray photoelectron spectroscopic analyses of H + - and He + -irradiated Murchison CM2 chondrite showed that partial reduction of surface Fe to lower oxidation states occurred with simulated solar wind exposure 19 . Nano- to microphase Fe sulfide is also common in both H + - and He + -irradiated Murchison CM2 chondrite and laser-irradiated Murchison. These results are consistent with our observations, although Murchison is a CM chondrite with significantly different mineralogy and Fe content from Ryugu. We suggest that it is unlikely that npFe 0 contributes significantly to the observed spectral variability on Ryugu, but submicroscopic Fe-Ni sulfides may contribute to this variability. More abundant impact melts on Ryugu than on Itokawa Among Itokawa grains, only 2 out of 590 grains (0.3%) show melted structures 34 that resemble the frothy layers found on ~1% to 2% of Ryugu grains. The calculated dry and wet solidus temperatures of Ryugu material 28 are approximately the same (862 and 867 °C) under ~10 5 Pa because of the low H 2 O solubility in the melt at low ambient pressures 35 , 36 , which indicates that high porosity may explain the higher abundance of impact melts among Ryugu grains relative to Itokawa grains. Ryugu grains have high average microporosity (~28%, measured by synchrotron radiation nanotomography) and may have experienced higher post-shock temperatures than low-porosity (1.5–1.9%) 37 Itokawa grains, since porosity collapse by shock compression causes a large temperature increase 38 . Both the surface morphology and internal structure of laser irradiation products 15 , 17 that simulate shock heating by micrometeoroid impacts are similar to those of the frothy layers (Figs. 1 and 2b , and Extended Data Fig. 3 ). Thus, one of the major formation mechanisms of the frothy layers might be frictional heating among loose regolith grains by meteoroid impact. In addition, in situ formation of melt by micrometeoroid impact onto the grains and deposition of melt formed by a neighbouring impact event would also contribute to the formation of the frothy layers. The observed small melt splashes might be ejecta formed during micrometeoroid cratering. Discussion We estimated the timescale of formation of the smooth layer on Ryugu grain surfaces. The fluence of the ion irradiation experiment (Extended Data Fig. 2c ) is equivalent to ~3 × 10 3 years at 1.2 au (the semimajor axis of Ryugu’s orbit), considering the solar wind flux density at 1 au, 3–5 × 10 8 ions cm − 2 s −1 (ref. 39 ), and the average He/H ratio in the solar wind, 0.045 (ref. 40 ). We found an olivine crystal exhibiting a radiation-damaged rim and containing solar flare tracks with a number density of ~2 × 10 8 cm − 2 (Extended Data Fig. 7 ), which corresponds to an ~6 × 10 3 year dwell time for the grain within ~1 mm of the surface, based on lunar sample studies 41 . A thin (~20 nm) smooth layer on the phyllosilicate matrix was found near the olivine grain in the same sample. These independent results suggest that it may take >3 × 10 3 years to form a detectable smooth layer on phyllosilicates. The exposure age of the smooth-layer-covered surface of Ryugu grain A0067 is estimated to be 3 × 10 4 years, calculated from its crater population, assuming the craters formed by interplanetary meteoroid impacts 4 (Extended Data Fig. 8 ).
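The fluence-to-exposure-time conversion above is simple enough to check directly. The short sketch below (in Python) uses only the constants quoted in the text; the inverse-square scaling of the solar wind flux with heliocentric distance is our assumption about the intended calculation, not a statement of the authors' exact procedure.

```python
# Hedged check of the solar-wind exposure-time estimate quoted above.
FLUENCE_HE = 1.3e18            # He+ fluence of the irradiation experiment (ions cm^-2)
PROTON_FLUX_1AU = (3e8, 5e8)   # solar wind flux density at 1 au (ions cm^-2 s^-1)
HE_TO_H = 0.045                # average He/H ratio in the solar wind
R_AU = 1.2                     # semimajor axis of Ryugu's orbit (au)
SECONDS_PER_YEAR = 3.156e7

for proton_flux in PROTON_FLUX_1AU:
    # He+ flux at Ryugu's orbit, assuming inverse-square falloff with distance
    he_flux = proton_flux * HE_TO_H / R_AU**2
    years = FLUENCE_HE / he_flux / SECONDS_PER_YEAR
    print(f"proton flux {proton_flux:.0e} ions cm^-2 s^-1 -> ~{years:.1e} yr")

# Prints ~4.4e+03 yr and ~2.6e+03 yr, bracketing the ~3e3 yr quoted above.
```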
In comparison, studies of craters on Itokawa grains 42 , 43 showed that most submicrometre-scale craters were probably formed by secondary impacts of ejecta excavated from larger craters. If such impacts occurred on Ryugu, then the formation of the smooth layer would require less time. In either scenario, the upper limit on the time required to develop the smooth layer is 3 × 10 4 years, which is consistent with the above estimate. A frothy layer with abundant blisters (Fig. 2c ) suggests that, after irradiation by the solar wind, blisters formed during subsequent heating, probably related to micrometeoroid impact, which induced the release of trapped solar wind gas species. This is consistent with steady and continual solar wind irradiation through time, while micrometeoroid bombardment occurs sporadically. Some grains have partially exfoliated smooth layers (Extended Data Fig. 4 ), which suggests that smooth layers can detach. Detachment of smooth layers may explain the low abundance (~7%) of investigated grains with space-weathered features. In addition, the fragility of Ryugu grains may be another important factor that reduces the abundance of grains with observable space weathering. Among 6 large (millimetre-sized) grains collected at TD1, ~66% (4 of 6) show evidence of space weathering based on field emission scanning electron microscope (FE–SEM) observation at Kyoto and Tohoku Universities, which is much higher than the 6–7% for <100-µm-sized grains (Extended Data Fig. 1 ). The difference can be interpreted as indicating that most fine-grained samples are fragments of larger grains. Exfoliation and destruction could occur by thermal fatigue and meteoroid impacts on Ryugu. In addition, they could occur during sampling and transportation to Earth, or even during handling processes. To quantify the amount of –OH in the Ryugu grain surfaces, we used energy dispersive X-ray spectroscopy (EDS) measurements of focused ion beam (FIB) cross-sections of weathered and pristine grains (Fig. 4 ) to determine oxygen-to-cation ratios, correcting for S-bonded Fe and Ni. The ratio of oxygen to cations bonded with oxygen shows that, in pristine grains, interlayer H 2 O molecules in saponite are largely absent from a mixture of saponite and serpentine, but structural –OH groups in the phyllosilicates are retained (Fig. 4a ). This estimate of –OH abundance is consistent with the thermogravimetric analysis of Ryugu grains 28 . Fig. 4: Histograms of atomic ratios of oxygen to the cations bonded to oxygen in phyllosilicates, a smooth layer and frothy layers. A mixture of saponite without interlayer H 2 O molecules and serpentine has a range of ratios represented by green bands. If a mixture of saponite and serpentine is decomposed into an anhydrous compound, it has a range of ratios represented by red bands. In order to calculate the atomic ratios of oxygen to the cations bonded to oxygen in phyllosilicates, we subtracted the cations bonded to sulfur (S), which were calculated based on the assumption that the ratio of the S-bonded Fe and Ni ions to S is unity, for simplicity. a , Phyllosilicates in a non-space-weathered grain contain almost no interlayer H 2 O but preserve structural –OH groups. b , A smooth layer has lost a considerable amount of structural –OH groups, and phyllosilicates just below the smooth layers have partially lost structural –OH groups. c , Phyllosilicates just below the frothy layer have lost the structural –OH groups considerably.
d , Phyllosilicates just below the frothy layer have lost almost all the structural –OH groups. Because the frothy layers have even lower ratios than the red bands, they are also anhydrous. Their very low ratios may be related to their high abundance of embedded Fe-Ni sulfide. The ratio at the right end of the green bands is 1.8, which is calculated from the generalized chemical formula of serpentine, Y 6 Z 4 O 10 (OH) 8 : O/(Y + Z) = 18/10 = 1.8. The ratio at the left end of the green bands is 1.64, which is calculated from the generalized chemical formula of saponite with no interlayer H 2 O molecules, X 0.6 Y 6 Z 8 O 20 (OH) 4 : O/(X + Y + Z) = 24/14.6 = 1.64. The ratio at the right end of the red bands is 1.5, which is calculated from the generalized chemical formula of the dehydrated decomposition product of saponite, X 0.6 Y 6 Z 8 O 22 : O/(X + Y + Z) = 22/14.6 = 1.5. The ratio at the left end of the red bands is 1.4, which is calculated from the generalized chemical formula of the dehydrated decomposition product of serpentine, Y 6 Z 4 O 14 : O/(Y + Z) = 14/10 = 1.4. Source data Full size image Space weathering over the length of Ryugu’s residence time in near-Earth orbits after its orbital shift from the main belt, which is thought to be several megayears based on noble gas data 44 , may also play an important role in the removal of interlayer H 2 O from saponite. During the prolonged exposure to interplanetary space, even structural –OH groups might be removed from a mixture of serpentine and saponite that has lost its interlayer H 2 O. This may occur by decomposition of saponite and serpentine through dehydroxylation, that is, decomposition into anhydrous compounds and liberated H 2 O molecules. However, we note that the solar wind would probably penetrate grains very heterogeneously owing to the high microporosity of the phyllosilicate-rich matrix of Ryugu grains. A substantial amount of structural –OH has been lost in smooth layers (Fig. 4b ). Almost all the structural –OH has been lost in frothy layers, and also in the phyllosilicates just below frothy layers (Fig. 4c,d ). These data suggest that more structural –OH in phyllosilicates is removed through dehydroxylation as space weathering proceeds (Fig. 5 ). A portion of the structural –OH in phyllosilicates just below the smooth layer also appears to be absent (Fig. 4b ), and sporadic amorphization of phyllosilicates is observed. Solar wind particles could penetrate to such depths in regions of highest porosity of the phyllosilicate-rich matrix (Fig. 2a ) and potentially promote dehydroxylation reactions. In addition, phyllosilicates just below the frothy layer lost most of their structural –OH (Fig. 4c,d ). Frictional heating induced by meteoroid impacts and the formation of new surfaces by thermal stress 45 could promote dehydration by dehydroxylation. A conceptual illustration (Fig. 5 ) shows the development of solar wind implantation, dehydration by dehydroxylation of phyllosilicates and progressive coverage of anhydrous silicate-rich melt on a Ryugu grain. Once a surface of a Ryugu grain is exposed to interplanetary space, the effects of solar wind irradiation start to accumulate at and near the surface. As time passes, the gradual accumulation of solar wind radiation damage and phyllosilicate dehydroxylation forms the smooth layer on its surface, which means that partial dehydration occurs in the smooth layer, as shown in Fig. 4b .
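The band limits quoted in the Fig. 4 caption are pure stoichiometric bookkeeping, so they can be verified mechanically. A minimal sketch (Python): the formulas and the (Fe + Ni)/S = 1 sulfide correction come from the caption and the main text, while the dictionary layout and names are ours.

```python
# Oxygen-to-cation ratios for the end-member formulas quoted in the Fig. 4 caption.
# Each entry: (oxygen atoms, incl. those in OH groups; cations bonded to oxygen).
end_members = {
    "serpentine Y6Z4O10(OH)8":            (10 + 8, 6 + 4),        # green band, right end
    "saponite X0.6Y6Z8O20(OH)4 (no H2O)": (20 + 4, 0.6 + 6 + 8),  # green band, left end
    "dehydrated saponite X0.6Y6Z8O22":    (22, 0.6 + 6 + 8),      # red band, right end
    "dehydrated serpentine Y6Z4O14":      (14, 6 + 4),            # red band, left end
}

for name, (n_oxygen, n_cations) in end_members.items():
    print(f"{name:37s} O/cation = {n_oxygen / n_cations:.2f}")

# -> 1.80, 1.64, 1.51 (~1.5) and 1.40, i.e. the green (hydrated) and red
# (dehydroxylated) band limits. In the actual EDS quantification, cations
# bonded to S were subtracted first, assuming (Fe + Ni)/S = 1 in sulfides;
# ratios below ~1.4, as in the frothy layers, reflect abundant O-free sulfide.
```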
Because the effects of the solar wind are constrained by the limited kinetic energy of solar wind particles 3 , the thickness of the smooth layer seldom exceeds ~100 nm. Sometimes, a very thin (~10 nm) vapour deposit may be formed on its surface, as shown in Extended Data Fig. 5 , although this is not illustrated in Fig. 5 . In contrast, the formation of impact melts (frothy layers and melt splashes) is an intermittent process. The impact melt can be formed in several ways: in situ formation of melt by micrometeoroid impact onto the grain, deposition of melt formed by a neighbouring impact event and in situ melting by frictional heating among porous regolith. In this conceptual illustration, partial coverage by impact melt occurred twice, at times I and II. After the coverage by a frothy layer, the effects of solar wind irradiation start to accumulate in the frothy layer. If another subsequent heating event or impact occurs, resulting in the deposition of another melt deposit, the implanted solar wind gases in the frothy layer may form blisters (Fig. 2c ). The frothy layers (impact melts) are almost anhydrous because they were formed by high-temperature processing, as indicated in Fig. 4c,d . In Fig. 5 , the change of colour from light blue via orange to yellow represents the progress of dehydration. The surface material of asteroid Ryugu becomes covered by nearly anhydrous material over time. After a long period of space exposure, dehydration by dehydroxylation of the phyllosilicates proceeds below both the smooth layers and the frothy ones, as shown in Fig. 4 . We measured the chemical compositions of the frothy and smooth layers and the underlying phyllosilicate-rich matrix from the surface of grains to ~1.5 µm below the surface, demonstrating that the effects of dehydration by dehydroxylation extend to at least that depth in space-weathered Ryugu grains. Note that the natural overturn, or gardening, of regolith grains on the asteroid parent body interrupts the schematic history of space weathering, so that the space-weathering processes on any one grain do not necessarily progress as shown in Fig. 5 . Fig. 5: A conceptual illustration showing the development of two types of space weathering with dehydration by dehydroxylation observed on a Ryugu grain. Once a surface of a Ryugu grain is exposed to interplanetary space, the effects of solar wind irradiation start to accumulate at and near the surface, shown as hatched areas labelled ‘Solar wind implanted zone’ in the figure. As time passes, the gradual accumulation of solar wind radiation damage and phyllosilicate dehydroxylation forms the smooth layer on its surface, with a thickness that seldom exceeds ~100 nm. In contrast, the formation of impact melts (frothy layer, cratering and melt splash) is an intermittent process. In this conceptual illustration, partial coverage by impact melt occurred twice, at times I and II. The change of colour from light blue via orange to yellow represents the progress of dehydration. As shown in Fig. 4 , the impact melts are almost anhydrous. Therefore, the surface of the model grain is covered by both nearly anhydrous impact melts and dehydroxylated amorphized phyllosilicates. As a result, the surface of the asteroid Ryugu becomes covered by anhydrous material over time. After a long period of space exposure, dehydration by dehydroxylation of the phyllosilicates proceeds below both the smooth layers and the frothy ones.
Note that the natural overturn, or gardening, of regolith grains on the asteroid parent body interrupts the schematic history of space weathering, so that the space-weathering processes on any one grain do not necessarily progress as shown in Fig. 5 . Full size image Strong absorption in the 3 µm region of reflectance spectra 1 , 2 is attributed to phyllosilicates and other –OH-rich minerals as well as H 2 O ice. Among these materials, the 2.7 µm band is ascribed to –OH 1 , 2 . Dehydroxylation of phyllosilicates may weaken this band. Spacecraft-based measurements of the reflectance spectra obtained at the artificial crater at TD2 on Ryugu, normalized to a surface standard spectrum, show a 2.7 µm band depth inside the crater that is ~5% stronger than that of the standard spectrum and those from outside the crater 46 . These observations suggest that more –OH is preserved in the subsurface material that was exposed by the formation of the artificial crater. Additional factors such as grain size, porosity and viewing geometry can also affect reflectance spectra 47 , 48 , 49 . Considering all these factors together, we hypothesize that the 2.7 µm band features are potential indicators of the degree of space weathering for C-type asteroids. In addition to the band depth, a band shift could be an important indicator of the degree of space weathering, but a definitive interpretation of the shift is so far elusive, with different experiments indicating different shift directions. Measurements of the Ryugu artificial crater showed a small shift, but it is difficult to link this to measurements at the scale of individual particles. Several experiments have tried to simulate space weathering induced by solar wind irradiation or micrometeoroid impact to assess the impact on spectral features 15 , 16 , 17 , 18 , 19 , 20 , 48 , 49 , 50 . However, none has yet fully reproduced realistic space-weathering conditions while also including coordinated near-infrared spectroscopy, scanning electron microscopy and transmission electron microscopy of the irradiated samples. Further studies are needed to accurately reproduce the features of space weathering of Ryugu samples. In the case of Ryugu, strong thermal alteration on the parent body 23 , 46 can be excluded as the cause of surface dehydration because phyllosilicates in nearly all Ryugu samples preserve their structural –OH 27 , 28 . Therefore, either space weathering, solar radiative heating or both could have caused these differences 22 . If the surface material of Ryugu had been heated by solar radiation 23 , all the grains and boulders from the surface to 10 to 100 cm deep must have experienced heating and dehydration throughout their interiors 23 . However, no heavily heated grains were found in this or other studies 28 , 29 , 30 , 31 , 44 . Therefore, solar radiative heating is unlikely to have caused the decrease of the 2.7 µm feature. As described previously, space-weathered Ryugu grains have an almost completely dehydrated surface and an interior with no evidence of thermal metamorphism. If the surface material contains a higher number of space-weathered grains than the subsurface material, as expected based on regolith gardening processes, the spectral differences before and after crater formation can be explained by different amounts of space-weathered grains in the surface and subsurface material of Ryugu. Therefore, it is likely that space weathering played an important role in the dehydration of Ryugu’s surface.
We note that ~40% of C-type asteroids do not show the 2.7 µm band features (sometimes referred to generally as the 3 µm band), and several interpretations have been proposed for their origins 51 , 52 , 53 . Based on our data from Ryugu grains, we propose that the absence of the 2.7 µm absorption band can be at least partly explained by surface dehydration due to space weathering. Gradual covering by anhydrous amorphous silicate with longer exposure to space weathering was proposed by a radiative transfer study of Bennu 20 . The clear 2.7 µm feature on carbonaceous asteroid Bennu 54 , explored by NASA’s OSIRIS-REx spacecraft, may be related to weaker space weathering experienced by Bennu than by Ryugu, or to differences in phyllosilicate species and their chemical compositions. Suppression of the ‘water band’ by space weathering of C-type asteroid surfaces has implications for interpreting remote spectra, the first and least expensive tool for identifying water resources for eventual in situ resource utilization in space. Asteroids that appear dry on the surface may be water-rich, potentially requiring revision of our understanding of the abundances of asteroid types and the formation history of the asteroid belt. Methods Sample transfer and preparation for analyses To preserve the pristine nature of the returned samples, the samples were prepared and analysed without cleaning, washing or other procedures that could introduce terrestrial contamination. The sample catcher has three separate chambers to store samples collected at different locations on Ryugu. Chambers A and C contain samples collected from the first and second touchdown sites (TD1 and TD2), respectively. Air-tight sample transfer holders 55 were used to bring samples from JAXA to Kyoto University. The allocated grains were handled in an N 2 -filled glove box at Kyoto University. Ryugu grains from both chambers were attached to Au plates on pin stubs with small amounts of epoxy glue. About 250 grains and about 40 thin foil sections prepared by FIB were investigated at the 21 hub universities and laboratories. FIB–scanning electron microscopy Observation and sample preparation at Kyoto University were as follows. The surface morphology of about 300 grains was observed with a JEOL JSM-7001F FE–SEM. We observed them at 15 pA beam current and 2 kV acceleration voltage. FIB sections were prepared using a Thermo Fisher Helios FIB–SEM. Selected areas were cut out with a 30 kV Ga + ion beam. Before the extraction, the target areas were Pt-C coated using a 2 kV electron beam. Then, Pt-C was deposited on the target areas using 16 or 30 kV Ga + ion beams. The sections mounted on the TEM grids were thinned to a thickness of 50 to 200 nm using 12 or 16 kV Ga + ion beams. The damaged layers were removed using a 2 kV Ga + ion beam. About 70 FIB sections were prepared and investigated by the team. In parallel with the above work, we also performed FIB and (scanning) transmission electron microscopy under air-free conditions using an air-tight FIB–SEM sample transfer holder and a double-tilt LN 2 Atmos Defend Holder (Mel-Build Corporation) at Kyushu University. Another air-tight sample holder was used to transfer the samples. An Ar-filled glove box was used for sample handling. A Thermo Fisher Scios FIB–SEM was used for the observation of about 500 grains and for FIB processing of space-weathered grains. The conditions of the FIB processing were similar to those at Kyoto University.
(Scanning) transmission electron microscopy At Kyoto University, a JEOL JEM 2100 F (scanning) transmission electron microscope ((S)TEM) operating at 200 kV equipped with a JEOL JED-2300T EDS was used. The ζ-factor method 56 was used for quantitative analysis. Electron diffraction maps with quasi-parallel illumination were acquired using a Gatan Orius200D camera. At Kyushu University, a monochromatized and Cs aberration-corrected Thermo Fisher Titan Cubed G2 operating at 300 kV, equipped with four-quadrant windowless super-X silicon drift detector EDS and Gatan Quantum 965 image filter (GIF) for EELS was used. The probe current was less than 200 pA for TEM observation and ~60 pA for STEM observation as well as energy dispersive X-ray spectroscopy and EELS. The typical energy resolution of this EELS analysis was 0.4 eV. The energy dispersion was 0.1 eV per channel at the camera for EELS, which was calibrated by using standard samples of fayalite (Fe 2 SiO 4 ) and synthetic Co olivine (Co 2 SiO 4 ). EELS mapping was conducted using ~6 nm × ~6 nm square pixels and the acquisition time per pixel was 100 ms. The obtained EELS spectra were averaged over several hundred pixels (~10,000 nm 2 ) to improve the signal-to-noise ratio. The Fe 3+ /ΣFe ratio was quantified by the Fe L 3 peak as follows. The background was first subtracted using a power-law fit over an energy range of 20 eV, then the spectrum was decomposed to Fe 2+ and Fe 3+ components by multiple linear least-squares fitting based on two standard spectra obtained from fayalite (Fe 2 SiO 4 ) and Fe 2 O 3 in the energy range corresponding to the Fe L 3 peak (705–715 eV). The Fe 3+ /ΣFe ratio was calculated from the ratio of the integrated spectral intensity of the Fe 3+ component to the sum of the integrated spectral intensity of both components. The EDS acquisition time per pixel was 10 µs. For quantitative analysis, Cliff–Lorimer correction was used. K -factors were determined using many mineral standards. At Tohoku University, a JEOL JEM-2100F (S)TEM operating at 200 kV was used. K -factor correction was based on several mineral standards. At the University of Hawai’i at Mānoa, a monochromatized and Cs aberration-corrected Thermo Fisher Titan G2 (S)TEM operating at 300 kV, equipped with EDAX ® thin-window EDS and Gatan Tridium EELS was used. A ‘TitanX’ ChemiSTEM and a Thermo Fisher Titan G2 STEM were also used for additional analysis at the Molecular Foundry, Lawrence Berkeley National Laboratory. At Université de Lille, a monochromatized and Cs aberration-corrected Thermo Fisher Titan Themis (S)TEM operating at 300 kV, equipped with four-quadrant windowless super-X SDD and Gatan Quantum 966 ERS GIF for EELS was used. For quantitative analysis, Cliff–Lorimer correction was used. K -factor correction was based on several mineral standards. At the University of Arizona, a Cs aberration-corrected Hitachi HF5000 (S)TEM, equipped with an Oxford Instruments X-Max 100 TLE EDS system and a Gatan GIF Quantum ER (model 965) electron energy-loss spectroscope was used. The microscope was operated at 200 kV using a 100 pm probe. The energy dispersion was set to 0.25 eV per channel. Maps were acquired by averaging three frames with a relatively large pixel time of 0.2 s for core loss and 0.001 s for the low loss. To quantify the Fe 3+ /Fe 2+ ratio, FeO, Fe 2 O 3 and Fe 3 O 4 were used as standards. The samples were measured with a dispersion of 0.25 eV per channel. The background was subtracted using a power-law fit over an energy range of 100 eV. 
Then, a linear fit was applied in the pre-edge region to remove any residuals and ensure a null background intensity. After the background removal, the continuum intensity beneath the edge was subtracted using a double arctan function 57 . The Fe L 3a and Fe L 3b peak maxima of FeO and Fe 2 O 3 were shifted to 708.7 eV and 710.25 eV, respectively, by systematically applying an offset of 3.19 eV to match the energies described in the literature 58 . The spectrum images were acquired over an area measuring 76 × 20 pixels and 2.83 × 0.75 µm. The Fe L 2,3 -edge was quantified using the methods described in the literature 58 , 59 . The obtained calibration curve was applied to each pixel of the spectrum image to determine the Fe 3+ /ΣFe. At the Naval Research Laboratory, a Nion UltraSTEM200-X equipped with a Gatan Enfinium ER electron energy-loss spectroscope and a windowless Bruker SDD EDS was used. Bright-field TEM images were collected on a JEOL2200FS TEM equipped with a Gatan OneView camera. Quantification of STEM-EDS data was performed with the Cliff–Lorimer method. The EELS measurements have a typical energy resolution of 0.5 eV. Scanning transmission X-ray microscopy The HERMES scanning transmission X-ray microscopy (STXM) beamline at the synchrotron SOLEIL was used. The analytical conditions for X-ray absorption near-edge structure (XANES) analysis using STXM were as follows. Energy calibration was done using the 3 p Rydberg peak of gaseous CO 2 at 294.96 eV as well as an internal haematite standard. The microscope operates under a high vacuum at 10 –5 mbar. Stacks of images were collected at the Fe L 2,3 -edge in the energy range 680–720 eV, with an energy increment of 0.15 eV in the spectral range of the two main peaks associated with Fe 3+ and Fe 2+ absorption. The dwell time per pixel was fixed at 1 ms. The beam was focused onto the sample using a 25 nm Fresnel zone plate. We selected pixel sizes of ~40 nm. The hyperspectral dataset was extracted and processed using the Hyperspy Python-based package 60 . The Fe 3+ /ΣFe ratio was quantified at each pixel 58 , 59 . The background is first subtracted, then a double arctangent is fitted and subtracted to take into account the variation in iron content. The spectrum is then integrated over the energy range corresponding to the Fe 3+ absorption (708.8–712 eV), and the retrieved value is divided by the spectrum integrated over the full energy range (705–712 eV). This ratio is converted into the Fe 3+ /ΣFe ratio using calibration curves obtained on silicate standards. Ultimately, the component map was created by a linear least-squares fitting of the hyperspectral dataset, using component end-member spectra as inputs (oxidized silicates, melted silicate and sulfides). Synchrotron Fe K X-ray absorption spectroscopy The I 14 X-ray Nanoprobe Beamline at Diamond Light Source, UK, was used for X-ray absorption spectroscopy mapping; X-ray fluorescence maps were obtained at energies ranging from 7,050 to 7,350 eV, with higher energy resolution over the XANES features (~7,100–7,150 eV). The XANES maps were processed using Mantis v.2.3.02 (ref. 61 ) and isolated spectra were normalized in Athena v.0.8.056 (ref. 62 ). Microcrater measurement Forty impact craters ranging from 0.5 to 8.5 µm in average diameter on a millimetre-sized grain (A0067) were measured using a FIB–SEM at Kyoto University. A surface area of 3.6 × 10 5 µm 2 was observed.
The cumulative impactor flux F ( m ) was calculated as F ( m ) = N ( m )/ ST , where m is the mass of the impactor, N ( m ) is the cumulative number of impactors, S = 3.6 × 10 5 µm 2 is the total investigated area of the Ryugu grain and T is the exposure time needed for the craters to accumulate in the space environment. The mass of the impactor ( m ) is calculated from the diameter of the craters, assuming that an impactor is a spherical object with a density of 3 g cm − 3 and that the ratio of the crater diameter D to the impactor diameter d ( D / d ) is 1.60, based on laboratory impact experiments 63 , 64 . For comparison, the interplanetary meteoroid flux was calculated using the models in the literature 4 , 64 , 65 . Microporosity measurements using X-ray nanotomography Microporosity was estimated from the results of scanning imaging X-ray microscopy 66 performed at SPring-8 BL47XU on Ryugu regolith samples (~15–80 µm; 27 particles). The samples were mounted on Ti needles using a FIB–SEM at Kyoto University, and differential phase images were obtained by scanning X-rays at 8 keV. Images were then obtained every 0.4–1.2° over 180°, followed by phase recovery and tomographic reconstruction to obtain three-dimensional images of phase contrast with refractive index decrements (RIDs). The pixel size was ~100 × 100 × 100–200 nm. The grain surfaces were defined by automatic segmentation 67 and manual modification from the three-dimensional images, and their average RIDs were obtained. The RIDs ( δ ) are related to the material density ( ρ ) by the following power law 68 : δ = aρ^b, where a = 3.7174 and b = 0.87132. In this manner, the average RIDs were converted to material density. Microporosity is 1 minus the ratio of bulk density to particle density. Since the particle density of Ryugu grains has not been measured, the grain density of the Orgueil meteorite (2.42 × 10 3 kg m − 3 ) 69 was used to estimate the microporosity. Helium irradiation experiments The irradiation experiments were performed at ISAS/JAXA. Irradiation of 4 keV He ions onto a Ryugu grain, C0107–HE01 (~300 × ~200 µm), was performed in a vacuum. The grain was fixed on a gold substrate with a small amount of epoxy glue. The ion flux was kept at ~1.5 × 10 13 ions cm –2 s –1 and the total dose of the irradiated ions was 1.3 × 10 18 ions cm –2 . The surface textures before and after irradiation were observed with a JEOL JSM-7000F FE–SEM at the University of Tokyo without carbon deposition, under low acceleration voltage and low current conditions (2 kV and 50 pA). FIB thin foil preparation and (S)TEM observation were performed at Kyoto University. Data availability All data needed to evaluate the conclusions in the paper are present in the paper and the Supplementary Information. All data are also available through the DARTS archive. Source data are provided with this paper.
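Two of the Methods calculations above reduce to a few lines of arithmetic. The sketch below (Python) is a hedged illustration, not the authors' code: the constants (D/d = 1.60, an impactor density of 3 g cm −3 , the RID power-law coefficients and the Orgueil reference density) are taken from the text, while the function names and the example inputs are ours.

```python
import math

def impactor_mass_g(crater_diam_um, D_over_d=1.60, rho_g_cm3=3.0):
    """Impactor mass from crater diameter, for a spherical impactor of
    density 3 g cm^-3 and crater-to-impactor diameter ratio D/d = 1.60."""
    d_cm = (crater_diam_um / D_over_d) * 1e-4     # impactor diameter in cm
    return rho_g_cm3 * math.pi / 6.0 * d_cm**3    # mass of a sphere

def cumulative_flux(n_craters, area_um2, exposure_yr):
    """Cumulative impactor flux F(m) = N(m)/(S*T), in craters cm^-2 yr^-1."""
    return n_craters / (area_um2 * 1e-8) / exposure_yr

def bulk_density(delta_avg, a=3.7174, b=0.87132):
    """Invert the RID power law delta = a * rho**b (calibration of ref. 68;
    the units of rho follow that calibration and are not restated here)."""
    return (delta_avg / a) ** (1.0 / b)

def microporosity(bulk_rho_g_cm3, particle_rho_g_cm3=2.42):
    """Microporosity = 1 - bulk/particle density, with the Orgueil grain
    density (2.42 g cm^-3) as the particle-density reference."""
    return 1.0 - bulk_rho_g_cm3 / particle_rho_g_cm3

# Illustrative values only: the 40 craters counted on 3.6e5 um^2 over the
# ~3e4 yr exposure age, a hypothetical 2 um crater, and a bulk density
# chosen to reproduce the ~28% average microporosity reported for Ryugu.
print(f"F(m) ~ {cumulative_flux(40, 3.6e5, 3e4):.1e} craters cm^-2 yr^-1")
print(f"2 um crater -> impactor mass ~ {impactor_mass_g(2.0):.1e} g")
print(f"bulk rho 1.74 g cm^-3 -> microporosity {microporosity(1.74):.0%}")
```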
The U.K.'s national synchrotron facility, Diamond Light Source, was used by a large international collaboration to study grains collected from a near-Earth asteroid, furthering our understanding of the evolution of our solar system.

Researchers from the University of Leicester brought a fragment of the Ryugu asteroid to Diamond's Nanoprobe beamline I14, where a special technique called X-ray absorption near-edge spectroscopy (XANES) was used to map out the chemical states of the elements within the asteroid material and examine its composition in fine detail. The team also studied the asteroid grains using an electron microscope at Diamond's electron Physical Science Imaging Centre (ePSIC).

Julia Parker is the principal beamline scientist for I14 at Diamond. She said, "The X-ray Nanoprobe allows scientists to examine the chemical structure of their samples at micron to nano length scales, which is complemented by the nano to atomic resolution of the imaging at ePSIC. It's very exciting to be able to contribute to the understanding of these unique samples, and to work with the team at Leicester to demonstrate how the techniques at the beamline, and correlatively at ePSIC, can benefit future sample return missions."

The data collected at Diamond contributed to a wider study of the space weathering signatures on the asteroid. The pristine asteroid samples enabled the collaborators to explore how space weathering can alter the physical and chemical composition of the surface of carbonaceous asteroids like Ryugu.

[Image caption: Image taken at E01, ePSIC, of Ryugu serpentine and Fe oxide minerals. Credit: ePSIC/University of Leicester.]

The researchers discovered that the surface of Ryugu is dehydrated, and that space weathering is the likely cause. The findings of the study, published today in Nature Astronomy, have led the authors to conclude that asteroids that appear dry on the surface may be water-rich, potentially requiring a revision of our understanding of the abundances of asteroid types and of the formation history of the asteroid belt.

Ryugu is a near-Earth asteroid, around 900 meters in diameter, first discovered in 1999. It is named after the undersea palace of the Dragon God in Japanese mythology.

In 2014, the Japanese space agency JAXA launched Hayabusa2, an asteroid sample-return mission, to rendezvous with Ryugu and collect material samples from its surface and sub-surface. The spacecraft returned to Earth in 2020, releasing a capsule containing precious fragments of the asteroid. These small samples were distributed to labs around the world for scientific study, including the University of Leicester's School of Physics & Astronomy and Space Park, where John Bridges, one of the authors on the paper, is a Professor of Planetary Science.

John said, "This unique mission to gather samples from the most primitive, carbonaceous building blocks of the solar system needs the world's most detailed microscopy, and that's why JAXA and the Fine Grained Mineralogy team wanted us to analyze samples at Diamond's X-ray Nanoprobe beamline. We helped reveal the nature of space weathering on this asteroid: micrometeorite impacts and the solar wind creating dehydrated serpentine minerals, and an associated reduction from oxidized Fe3+ to more reduced Fe2+.
"It's important to build up experience in studying samples returned from asteroids, as in the Hayabusa2 mission, because soon there will be new samples from other asteroid types, the Moon and within the next 10 years Mars, returned to Earth. The U.K. community will be able to perform some of the critical analyses due to our facilities at Diamond and the electron microscopes at ePSIC." The building blocks of Ryugu are remnants of interactions between water, minerals and organics in the early solar system prior to the formation of Earth. Understanding the composition of asteroids can help explain how the early solar system developed, and subsequently how the Earth formed. They may even help explain how life on Earth came about, with asteroids believed to have delivered much of the planet's water as well as organic compounds such as amino acids, which provide the fundamental building blocks from which all human life is constructed. The information that is being gleaned from these tiny asteroid samples will help us to better understand the origin not only of the planets and stars but also of life itself. Whether it's fragments of asteroids, ancient paintings or unknown virus structures, at the synchrotron, scientists can study their samples using a machine that is 10,000 times more powerful than a traditional microscope.
10.1038/s41550-022-01841-6
Medicine
Discovery of T cells' role in Alzheimer's, related diseases, suggests new treatment strategy
David Holtzman, Microglia-mediated T cell infiltration drives neurodegeneration in tauopathy, Nature (2023). DOI: 10.1038/s41586-023-05788-0. www.nature.com/articles/s41586-023-05788-0 Journal information: Nature
https://dx.doi.org/10.1038/s41586-023-05788-0
https://medicalxpress.com/news/2023-03-discovery-cells-role-alzheimer-diseases.html
Abstract Extracellular deposition of amyloid-β as neuritic plaques and intracellular accumulation of hyperphosphorylated, aggregated tau as neurofibrillary tangles are two of the characteristic hallmarks of Alzheimer’s disease 1 , 2 . The regional progression of brain atrophy in Alzheimer’s disease highly correlates with tau accumulation but not amyloid deposition 3 , 4 , 5 , and the mechanisms of tau-mediated neurodegeneration remain elusive. Innate immune responses represent a common pathway for the initiation and progression of some neurodegenerative diseases. So far, little is known about the extent or role of the adaptive immune response and its interaction with the innate immune response in the presence of amyloid-β or tau pathology 6 . Here we systematically compared the immunological milieux in the brain of mice with amyloid deposition or tau aggregation and neurodegeneration. We found that mice with tauopathy but not those with amyloid deposition developed a unique innate and adaptive immune response and that depletion of microglia or T cells blocked tau-mediated neurodegeneration. Numbers of T cells, especially those of cytotoxic T cells, were markedly increased in areas with tau pathology in mice with tauopathy and in the Alzheimer’s disease brain. T cell numbers correlated with the extent of neuronal loss, and the cells dynamically transformed their cellular characteristics from activated to exhausted states along with unique TCR clonal expansion. Inhibition of interferon-γ and PDCD1 signalling both significantly ameliorated brain atrophy. Our results thus reveal a tauopathy- and neurodegeneration-related immune hub involving activated microglia and T cell responses, which could serve as therapeutic targets for preventing neurodegeneration in Alzheimer’s disease and primary tauopathies. Main To explore the disease microenvironment in the presence of amyloid-β or tau deposition, we systematically compared the immunological milieux in the brains of the amyloid-β-depositing mice APP/PS1-21 (A/PE4) and 5xFAD (5xE4) 7 , 8 , 9 , 10 , and tauopathy (TE4) mice 11 that express human APOE4 (E4). The pathologies in these models mirror amyloid deposition and tau aggregation with neurodegeneration, respectively 12 . We observed significant brain regional atrophy by 9.5 months but not at 6 months of age in TE4 mice (Fig. 1a ). In addition, brain atrophy was not present in A/PE4 or 5xE4 mice by 9.5 months of age despite high levels of amyloid-β deposition in the brain (Fig. 1a and Extended Data Fig. 1a ). The atrophy in the TE4 mice at 9.5 months primarily occurred in regions that developed the most tauopathy (that is, the hippocampus, piriform–entorhinal cortex and amygdala) and was accompanied by significant lateral ventricular enlargement (Fig. 1a–d and Extended Data Fig. 1b–d ). The thickness of the granule cell layer in the dentate gyrus as assessed by NeuN staining was noticeably decreased in TE4 mice, and the thickness correlated highly with hippocampal volume (Extended Data Fig. 1e–g ). Consistent with the neuronal loss, positive staining for myelin basic protein, which is present around intact axons, was altered in TE4 mice at 9.5 months (Extended Data Fig. 1h,i ). Both TE4 and TE3 (expressing human APOE3) mice developed prominent brain atrophy with greater atrophy in the TE4 mice (Extended Data Fig. 1j–l ). Additionally, male mice tended to have higher levels of brain atrophy than that of females (Extended Data Fig. 1m–o ). 
For further exploration of mechanisms of brain atrophy and neurodegeneration, we focused on male mice for the remainder of the experiments. Fig. 1: Immune scRNA-seq reveals increased proportion of T cells in the context of tau-mediated neurodegeneration. a , Representative images of 6-month-old E4 and TE4, and 9.5-month-old E4, TE4, A/PE4 and 5xE4 mouse brain sections stained with Sudan black. Scale bar, 1 mm. b – d , Volumes of hippocampus ( b ), piriform–entorhinal cortex (piri–ent ctx) ( c ) and posterior lateral ventricle ( d ) in 6-month-old E4 and TE4, and 9.5-month-old E4, TE4, A/PE4, 5xE4 and WT mice (6-month E4: n = 3; 6-month TE4: n = 7; 9.5-month E4: n = 15; 9.5-month TE4: n = 13; 9.5-month A/PE4: n = 7; 9.5-month 5xE4: n = 6; 9.5-month WT: n = 6). Data are mean ± s.e.m.; *** P < 0.0001 for 9.5-month TE4 versus A/PE4; TE4 versus 5xE4; TE4 versus E4; and TE4 versus WT (one-way analysis of variance (ANOVA) with Tukey’s post hoc test). e , Fluorescence-activated cell sorting of CD45 total and/or CD45 hi cells from brain parenchyma and meninges from E4, A/PE4 and TE4 mice for immune scRNA-seq. f , CD45 total immune cells from brain parenchyma assigned into 12 cell types as visualized by uniform manifold approximation and projection (UMAP) plots. DCs, dendritic cells; ILCs, innate lymphocyte cells. g , Bar plot showing the proportions of the 12 cell types of immune cells in the brain parenchyma. Data are mean ± s.e.m.; two biologically independent samples were used, and samples were sequenced in n = 2 batches from the E4 and TE4 groups. Prolif., proliferating. h , CD45 total immune cells from meninges assigned into 12 cell types as visualized by UMAP plots. i , Bar plot showing the proportions of the 12 cell types of immune cell in the meninges. Data are mean ± s.e.m.; two biologically independent samples were used, and samples were sequenced in n = 2 batches from the E4 and TE4 groups. Full size image Dysregulated innate and adaptive immune responses contribute to some neurodegenerative diseases 13 , 14 . Neuroinflammation is present in the brain of individuals with Alzheimer’s disease, and many studies focus on the cellular and molecular changes and the role of microglia, a key component of the innate immune response in the brain during the development and progression of Alzheimer’s disease 15 . Microglia are brain-resident cells, which may lead to a pro- or anti-inflammatory milieu within the brain together with monocytes, monocyte-derived macrophages and dendritic cells 16 , 17 , 18 . T cells and natural killer (NK) cells, if present, are more directly linked with cytotoxicity, and could potentially contribute to neuronal loss in a pro-inflammatory environment 19 , 20 , 21 , 22 . Recent studies found an increase of T cells in the cerebrospinal fluid, leptomeninges and hippocampus in patients with AD 23 , 24 and in mouse models 25 , 26 . Brain and immune cells continuously surveil the environment and make on-demand adjustment to maintain their homoeostasis 6 , 27 . State and functional mapping of these cell types in single-cell resolution provide a foundation for understanding the brain in health and disease 28 . 
To fully map the innate and adaptive immune responses in the presence of amyloid-β or tau pathology, we generated a cellular and molecular atlas of the meningeal and parenchymal immune cell niche through immune single-cell RNA sequencing (scRNA-seq) on sorted total CD45 + cells (CD45 total ) from meninges and CD45 total and CD45-high (CD45 hi ) cells from the brain parenchyma in APOE4-knock-in (E4), A/PE4 and TE4 male mice at 9.5 months of age with matched genetic backgrounds (Fig. 1e , Extended Data Fig. 2a and Supplementary Table 1 ). Unsupervised clustering identified 12 robust cell types among CD45 total cells in the parenchyma of E4, A/PE4 and TE4 mice: microglia, T cells, neutrophils, proliferating cells, B cells, dendritic cells, NK cells, macrophages, γδ T cells, innate lymphocyte cells, mast cells and FN1 + macrophages (Fig. 1f ). Unexpectedly, the percentage of the immune cells represented by T cells was strongly increased in TE4 mice as compared with A/PE4 and E4 mice (Fig. 1g and Supplementary Table 3 ). In fluorescence-activated cell sorting analysis, the proportion of CD45 hi cells, which mainly represent adaptive immune cell populations and innate immune cells such as dendritic cells and macrophages, was enriched in the brain parenchyma of 9.5-month-old TE4 mice (Fig. 1e and Extended Data Fig. 2a–d ). Consistent with the scRNA-seq data, a significant increase in CD4 + and CD8 + T cells was observed in TE4 versus E4 mice, and CD8 + cells were the more abundant population (Extended Data Fig. 2b ). The meninges are a triple-layer structure enveloping the brain and are an immune blood–brain interface 29 . During ageing and neurodegenerative diseases, dysfunctional lymphatic vessels lead to impaired drainage, which seems to result in dysregulated immune cell trafficking 30 , 31 . Distinct cell types were observed in CD45 total populations, and the diversity and relative abundance were consistent with observations in previous studies 32 (Fig. 1h,i and Extended Data Fig. 2e ). In addition, the peripheral T cell composition as assessed in the spleen was not significantly changed in TE4 mice as compared to E4 and A/PE4 mice (Extended Data Fig. 3a,b ). Together, these results reveal comprehensive and distinct innate and adaptive immune niches present in the parenchyma and an increased proportion of T cells in the presence of tauopathy and neurodegeneration. Increased T cells with tau pathology To further investigate the apparent expansion of the T cell population observed in our scRNA-seq data, we carried out immunohistochemical analyses of the parenchyma from TE4, A/PE4 and 5xE4 mice using antibodies to cluster of differentiation 3 (CD3) and ionized calcium-binding adaptor molecule 1 (IBA1, also known as AIF1), pan markers for T cells and microglia, respectively. We found that T cell numbers were significantly elevated in 9.5-month-old TE4 mice, but not in 9.5-month-old E4 controls or in 6-month-old TE4 mice (Fig. 2a,b and Supplementary Video 1 ). Increased T cell numbers were also found in TE3 mice (Extended Data Fig. 3c ) and in tau mice expressing mouse APOE, suggesting a link between T cells and tau-mediated neurodegeneration rather than a requirement for a specific APOE isoform. Notably, T cell numbers were not obviously increased in amyloid-depositing A/PE4 and 5xE4 mice at 9.5 months of age or even at 19 months of age compared with those for TE4 mice (Fig. 2a–c ).
Of note, CD3 staining was primarily present in the hippocampus and piriform–entorhinal cortex, which are regions with accumulation of hyperphosphorylated tau and neuronal loss, indicating a possible detrimental role for T cells in tau-dependent neurodegeneration (Extended Data Fig. 3d ). In accordance with the increase in the number of infiltrated T cells, microglia numbers were also significantly elevated in 9.5-month-old TE4 mice in regions with brain atrophy (Fig. 2a,d ). The number of T cells showed a positive correlation with the number of microglia (Fig. 2e ) and negatively correlated with the granule cell layer thickness in the dentate gyrus (Fig. 2f ). To assess whether the T cells were localized in the brain parenchyma as opposed to within the vasculature, we co-stained brain vessels by retro-orbital injection of lectin–dye and for CD3, and noted that CD3 + cells were not present in the lumen of blood vessels (Fig. 2c ). Furthermore, transmission electron microscopy also revealed that T cells were in the parenchyma adjacent to other cells in the brain (Extended Data Fig. 3e ). To determine whether a similar tau-pathology-associated increase of T cell numbers is present in the parenchyma in human Alzheimer’s disease, we carried out immunohistochemical analyses in brain samples of patients with Alzheimer’s disease (superior frontal gyrus) of low (I–II) and high (VI) Braak stages (Fig. 2g and Supplementary Table 2 ). In line with the amount of phosphorylated tau (p-tau) pathology, CD3 + T cell numbers were strongly elevated in the superior frontal gyrus from Braak stage VI versus Braak stage I or II cases (Fig. 2g–i ). By contrast, in these samples, overall amyloid-β deposition was similar in brain tissues of both low and high Braak stages (Fig. 2g,j ). Together, these data demonstrate that increased parenchymal T cell numbers are present in brain regions with tauopathy but not in those with amyloid deposition alone in both humans and mice. Fig. 2: T cell numbers are strongly elevated in the brain parenchyma of humans and mice with tauopathy and neuronal loss. a , IBA1 and CD3 staining in 6-month-old E4 and TE4, and 9.5-month-old E4, TE4, A/PE4 and 5xE4 mice in the dentate gyrus. Scale bar, 20 μm. b , Quantification of numbers of CD3 + T cells in the dentate gyrus (DG) per 0.3 mm 2 (6-month E4: n = 3; 6-month TE4: n = 7; 9.5-month E4: n = 15; 9.5-month TE4: n = 13; 9.5-month A/PE4: n = 7; 9.5-month 5xE4: n = 6). *** P < 0.0001 for 9.5-month TE4 versus A/PE4; TE4 versus 5xE4; and TE4 versus E4 (one-way ANOVA with Tukey’s post hoc test). c , Vessel and CD3 staining in 9.5-month-old TE4, and 19-month-old A/PE4 and 5xE4 mice, Scale bar, 20 μm. d , Quantification of the area covered by IBA1 in the dentate gyrus (6-month E4: n = 3; 6-month TE4: n = 7; 9.5-month E4: n = 15; 9.5-month TE4: n = 13; 9.5-month E3: n = 5; 9.5-month TE3: n = 6). *** P < 0.0001 for 9.5-month TE4 versus 9.5-month E4; * P = 0.0276 for 9.5-month TE3 versus 9.5-month E3 (one-way ANOVA with Tukey’s post hoc test). e , Correlation between the area covered by IBA1 and number of CD3 + T cells. n = 49 biologically independent animals from d . Pearson correlation analysis (two-sided); R 2 = 0.902, P < 0.0001. f , Correlation between the number of CD3 + T cells with the granule cell layer thickness in the dentate gyrus. n = 49 biologically independent animals from d . Pearson correlation analysis (two-sided); R 2 = 0.7454, P < 0.0001. 
g , CD3, amyloid-β (Aβ) and AT8 staining in brain sections from patients with Alzheimer’s disease (superior frontal gyrus) of low Braak stage I or II and high Braak stage VI. Scale bar, 10 μm. h , Quantification of the area covered by AT8 in g (Braak I or II: n = 3; Braak VI: n = 7). n refers to staining quantification from brain of individual humans. * P = 0.0195 (unpaired two-tailed Student’s t -test). i , Quantification of numbers of CD3 + T cells per square millimetre in g . * P = 0.0277 (unpaired two-tailed Student’s t -test). j , Quantification of the area covered by amyloid-β in g (Braak I or II: n = 3; Braak VI: n = 7). n refers to the number of individual human brains for which staining was quantified. NS (not significant), P = 0.1522 (unpaired two-tailed Student’s t -test). Data are mean ± s.e.m. Full size image T cells shift states with tau pathology To depict the cellular and molecular signatures of the T cells in the presence of amyloid-β or tau pathology, we assessed T cell populations from immune scRNA-seq data from CD45 hi cells in the parenchyma and CD45 total cells in the meninges in E4, A/PE4 and TE4 mice. T cells were categorized into 15 subgroups across all samples on the basis of expression of featured genes (Fig. 3a , Extended Data Fig. 4a and Supplementary Table 3 ). Cell population analysis revealed population differences between parenchyma and meninges. Naive CD8 + T cells (subgroup 11), FOLR4 + CD4 + T cells (subgroup 4) and regulatory T (T reg ) cells (subgroup 13) were highly enriched in meninges, but effector CD8 + T cells (subgroups 3, 8 and 10) were preferentially enriched in brain parenchyma (Fig. 3b ). These results suggest that brain-border and brain-resident T cells are functionally different in accordance with their immune niche. Interaction between the T cell receptor (TCR) and antigens presented by the major histocompatibility complex (MHC) is critical to adaptive immunity 33 . T cells clonally expand when they recognize cognate antigen 34 . We next carried out single-cell TCR sequencing (scTCR-seq) on T cells, which showed unique T cell clonal enrichment in the parenchyma with tauopathy and neurodegeneration (Extended Data Fig. 4b–d ). We evaluated TCR repertoires among CD4 + T cell subsets and observed an increased clonality in CD4 + T cells in TE4 mice that was concentrated within the activated CD4 + T cells (NKG7 + CCL5 + and CXCR6 + CCR8 + CD4 + T cells; Fig. 3c–e ). Similar to what we found in CD4 + T cells, the results of paired TCRα–TCRβ repertoire analysis revealed TCR clonal expansion in CD8 + T cells in TE4 mice (Fig. 3f ). Unsupervised clustering identified ten robust cell types in CD8 + T cells (Fig. 3g , Extended Data Fig. 4a and Supplementary Table 3 ). Activated CD8 + T cells (CD11c + KLRE1 + and ISG15 + CD8 + T cells) were more abundant in TE4 mice, whereas the fraction of TOX + PDCD1 + CD8 + exhausted T cells was slightly decreased, suggesting a potential role for activated CD8 + T cells in mediating neuronal loss in tauopathy (Fig. 3h ). Pseudotime analysis of CD8 + T cells found a range of T cell states indicative of a dynamic shift from activated to exhausted states (Fig. 3i ). We also observed an increased clonality in activated and exhausted CD8 + T cells in TE4 mice (Fig. 3j ). Together, these data illustrate that T cells in the brain parenchyma dynamically shift from activated to exhausted states with unique TCR clonal expansion in both CD4 + and CD8 + populations in the brain in a mouse model of tauopathy. Fig. 
3: T cells dynamically shift from activated to exhausted states with unique TCR clonal expansion in tauopathy. a , Total T cells from brain parenchyma and meninges assigned into 15 categories as visualized by UMAP plots. b , Bar plot showing the percentages of T cells subgroups in E4, A/PE4 and TE4 mice. Data are mean ± s.e.m.; two biologically independent samples were used, and samples were sequenced in n = 2 batches from the E4 and TE4 groups. c , Scatter plot illustrating differential TRAV and TRBV pairing expression in CD4 + T cells in TE4 versus E4 ( x axis) and A/PE4 versus E4 ( y axis) mice. Avg, average. FC, fold change. d , CD4 + T cells from brain parenchyma assigned into six cell types as visualized by a UMAP plot. e , Representative TRAV–TRBV (TRAV6N-6–TRBV13-1) pairing projection in CD4 + T cells in E4, A/PE4 and TE4 mice. f , Scatter plot illustrating differential TRAV and TRBV pairing expression in CD8 + T cells in TE4 versus E4 ( x axis) and A/PE4 versus E4 ( y axis) mice. g , CD8 + T cells from brain parenchyma assigned into ten cell types as visualized by UMAP plot. h , Percentages of activated (cell types 3 and 6) and exhausted (cell type 1) CD8 + T cells in E4, A/PE4 and TE4 mice. Data are mean ± s.e.m.; two biologically independent samples were used and samples were sequenced in n = 2 batches from the E4 and TE4 groups. i , Trajectory analysis showing naive CD8 + T cells demarcated into three paths: cell type 6, ITGAX + KLRE1 + ; cell type 3, ISG15 + ; and cell type 1, TOX + PDCD1 + . j , Representative TRAV–TRBV (TRAV16–TRBV13-1) pairing projection in CD8 + T cells in E4, A/PE4 and TE4 mice. Full size image Interaction of microglia and T cells We next explored the unique but complex immune hubs in the parenchyma of tauopathy brains, which lead to T cell homing and activation. Notably, CCL3, CCL4 and CXCL10, chemokines previously reported to be associated with T cell chemotaxis and brain infiltration, were increased in the brain lysates of TE4 mice compared to those of E4 and TEKO (tau(P301S) and Apoe -KO) mice 25 , 35 (Extended Data Fig. 5 ). Microglia are responders to neuroinflammation or damage, and they rapidly adapt their phenotypes and functions in response to the dynamic brain milieu 36 . Typical functions of microglia such as phagocytosis and cytokine production have been well characterized in models of neurodegeneration including AD 37 , 38 ; however, whether they exert their effects through their interactions with T cells is largely unknown. We subgrouped microglia (cell type 0, Fig. 1f ) from the CD45 total population of E4, A/PE4 and TE4 mice and obtained three subgroups with distinguishing markers associated with homoeostatic microglia (HOM), disease-associated microglia (DAM) and interferon (IFN)-activated microglia (Fig. 4a,b ). Notably, the percentages of the DAM and IFN subgroups were strongly elevated in TE4 mice, whereas the percentage of the HOM subgroup decreased (Fig. 4c and Supplementary Table 3 ). We found that genes related to antigen presentation, complement response and cytokines, metabolism and oxidative stress, together with lysosomal enzymes, were upregulated in TE4 mice to a greater extent than in A/PE4 mice, for which the expression levels were higher than those in the control mice (Fig. 4d ). Classically, MHC class I and II proteins enable antigen presentation to CD8 + T cells and CD4 + T cells, respectively. 
MHC class I proteins are expressed by all nucleated cells, whereas MHC class II proteins are expressed only by antigen-presenting cells (APCs), such as dendritic cells, macrophages, B cells and microglia 39 . By co-staining for the perivascular macrophage marker mannose receptor C (MRC1, also known as CD206) in addition to MHC class II proteins, IBA1 and GFAP, we found that MHC class II proteins were primarily present in IBA1 + microglia in the brain parenchyma in regions with neurodegeneration (Extended Data Fig. 6a ). Indeed, in line with the increase in the number of parenchymal T cells, we found that the number of microglia positive for MHC class II proteins was significantly elevated in brain regions with tau pathology in TE4 mice (Extended Data Fig. 6b,c ). DAM and its subtypes have been well characterized in amyloid models 40 . Here, in TE4 mice, we found that microglia positive for αX integrin (also known as CD11c), a representative marker for triggering receptor expressed on myeloid cells 2 (TREM2)-dependent type 2 DAM, physically co-localized with CD8 + T cells (Supplementary Video 2 ). Notably, CD11c was also strongly increased in TE4 hippocampus as compared to the levels in A/PE4 and E4 control mice (Extended Data Fig. 6d,e ). These results highlight a tight correlation between microglia positive for MHC class II proteins, CD11c + microglia, T cells and neurodegeneration. Sparse microglia positive for MHC class II proteins and CD11c + microglia were also found co-localized with parenchymal plaques in A/PE4 mice (Extended Data Fig. 6b,d ). Apoe deletion rescued brain atrophy in P301S mice, and the numbers of microglia positive for MHC class II proteins and CD11c + microglia as well as T cell numbers were significantly decreased (Extended Data Fig. 6b–g ). The higher inflammatory reactivity associated with tau-mediated neurodegeneration and APOE was also confirmed by assessment of inflammatory cytokines in brain tissue from TE4 and TEKO mice (Extended Data Fig. 5 ). Together, these data demonstrate that parenchymal microglia, in the presence of tauopathy, shift their transcriptomic and phenotypic states from homoeostatic to disease-associated, IFN-activated states positive for CD11c and MHC class I proteins, with an accompanying increase in the number of inflammatory chemokines and cytokines. Fig. 4: Microglia depletion prevents T cell infiltration in tauopathy. a , Total microglia assigned into three subgroups (HOM, DAM and IFN-activated microglia) as visualized by UMAP plot. b , Heat map showing representative markers specifically expressed in the three subgroups. c , Bar plot showing the percentages of the three subgroups of microglia in E4, A/PE4 and TE4 mice. Data are mean ± s.e.m.; two biologically independent samples were used, and samples were sequenced in n = 2 batches from the E4 and TE4 groups. d , Differentially expressed genes in microglia subgroups. e , IBA1 and CD3 staining in 9.5-month-old TE4 mice. PLX, PLX3397. Ctrl, control. Scale bar, 20 μm. f , P2RY12, MHC class II protein and IBA1 staining in 9.5-month-old TE4 mice. Scale bar, 50 μm. g , CD11c, CD8 and IBA1 staining in 9.5-month-old TE4 mice. Scale bar, 50 μm. h – k , Quantification of the areas covered by IBA1, P2RY12, MHC class II proteins and CD11c in 9.5-month-old mice (TE4 Ctrl: n = 5; TE4 PLX: n = 11). *** P < 0.0001, *** P < 0.0001, ** P = 0.0031 and *** P < 0.0001 for TE4 PLX versus TE4 Ctrl for IBA1, P2RY12, MHC class II proteins and CD11c, respectively (unpaired two-tailed Student’s t -test).
l , Representative images of brain sections from 9.5-month-old mice stained with Sudan black. Scale bar, 1 mm. m – o , Volumes of hippocampus ( m ), piriform–entorhinal cortex ( n ) and posterior lateral ventricle ( o ) in 9.5-month-old mice (E4 PLX: n = 12; TE4 Ctl: n = 5; TE4 PLX: n = 11). ** P = 0.0078 and ** P = 0.0012 for TE4 Ctrl versus TE4 PLX for volumes of hippocampus and posterior lateral ventricle, respectively (one-way ANOVA with Tukey’s post hoc test). p , Quantification of the number of CD3 + T cells in the dentate gyrus per 0.3 mm 2 in 9.5-month-old mice (E4 PLX: n = 12; TE4 Ctl: n = 5; TE4 PLX: n = 11). * P = 0.0236 for TE4 Ctrl versus TE4 PLX (unpaired two-tailed Student’s t -test). q , Quantification of the number of CD8 + T cells in the dentate gyrus per 0.3 mm 2 in 9.5-month-old mice (TE4 Ctrl: n = 5; TE4 PLX: n = 11). ** P = 0.002 for TE4 Ctrl versus TE4 PLX. r , Quantification of the area covered by AT8 in the dentate gyrus per slice in 9.5-month-old mice (TE4 Ctrl: n = 5; TE4 PLX: n = 11). *** P = 0.0008 for TE4 Ctrl versus TE4 PLX (unpaired two-tailed Student’s t -test). Data in h – k , m – r are mean ± s.e.m. Full size image IFNγ, a cytokine upregulated in TE4 mice, is a pro-inflammatory cytokine produced by NK cells, NK T cells and T cells that can prime microglia for inflammatory responses to injury as well as promote cytotoxic CD8 + T cell function 6 . Previous studies identified IFNγ-related transcriptomic signatures in tauopathy and neurodegenerative disease models, although the cell-type expression and functional result of IFNγ on pathology was not described 41 . Ligand–receptor analysis revealed active interactions between T cells and microglia (Extended Data Fig. 7a ). IFNγ receptor was already known to be expressed in both neurons and microglia in the brain 42 . Notably, we found that in the brain of TE4 mice, IFNγ transcripts were enriched in T cells, especially in CD8 + T cells (Extended Data Fig. 7b ). Given that IFNγ can augment antigen-presentation and inflammatory functions of myeloid cells, we further investigated the role of IFNγ in the immune response in tauopathy. To determine whether microglia can present antigen to T cells in vitro, we co-cultured microglia acutely isolated from adult mouse brain with OT-1 T cells, with soluble ovalbumin as the antigen, and found that microglia were capable of weakly stimulating OT-1 T cell proliferation compared to dendritic cells (Extended Data Fig. 8c–f ). However, on IFNγ stimulation, OT-1 T cell proliferation was strongly enhanced in the presence of microglia with ovalbumin (Extended Data Fig. 8e,f ), nearly to the level observed with dendritic cells, suggesting that microglia in vitro can serve as APCs and that IFNγ can augment this response. Together, these data suggest the possibility that there are active interactions between microglia and infiltrated T cells. To determine the role of endogenous IFNγ in P301S mice in vivo and to study the interplay between activated microglia and T cells, we blocked IFNγ signalling by peripheral administration intraperitoneally every 5 days with a neutralizing antibody in TE3 mice from 7.5 to 9.5 months of age, right before T cell infiltration into the brain parenchyma. Anti-IFNγ treatment resulted in attenuated brain atrophy as compared to that for the IgG treatment control (Extended Data Fig. 8a–d ). CD11c + microglia were also significantly reduced in number in anti-IFNγ-treated mice (Extended Data Fig. 
8e,f ), and there was a significant reduction in p-tau staining in anti-IFNγ-treated mice (Extended Data Fig. 8g,h ). Together, these results suggest that IFNγ secreted by CD8 + T cells in the brain can augment tau pathology and neurodegeneration, at least in part through promoting inflammatory microglial signalling and antigen-presentation functions. To further delineate the interrelationship between the activated microglia and infiltrated T cells, we administered PLX3397, a selective inhibitor of CSF1R, c-kit and FLT3, to TE4 and E4 control mice from 8.5 months to 9.5 months of age (Extended Data Fig. 9i ). PLX3397 treatment resulted in strong depletion of microglia (Fig. 4e–k ). PLX3397 treatment also decreased hippocampal atrophy and ameliorated the increase in ventricular volume in TE4 mice (Fig. 4l–o ). Notably, CD3 + and CD8 + T cell numbers as well as tau pathology were reduced on microglia depletion (Fig. 4p–r ). This suggests a pivotal role for microglia, especially activated microglia, in the setting of the tauopathy-specific immune hubs by recruiting T cells into the brain parenchyma, and a detrimental role for this restructured immune hub in facilitating disease progression. T cell depletion prevents degeneration To directly investigate whether infiltration of T cells leads to neurodegeneration, we depleted T cells by peripheral administration of neutralizing antibodies in TE4 mice as well as their age-matched non-tau transgenic littermates from 6 months to 9.5 months of age, a critical time window when neurodegeneration develops (Extended Data Fig. 8j ). A single acute intraperitoneal dose of anti-CD4 and anti-CD8 antibodies (anti-T) led to strong depletion of CD4 + and CD8 + T cells in brain parenchyma, meninges and peripheral blood, confirming the antibody depletion efficiency (Extended Data Fig. 9a,b ). Notably, in TE4 mice given anti-T treatment (intraperitoneally every 5 days) from 6 to 9.5 months, brain atrophy was strongly ameliorated compared to that in the IgG-treated control mice (Fig. 5a–d ). T cells were almost completely eliminated in the brain parenchyma in TE4 mice after 3.5 months of anti-T antibody treatment (Fig. 5e,g–i ). T cell depletion also reduced overall microglial staining (Fig. 5e–g,j ), suggesting that T cells in the brain of TE4 mice can augment microgliosis. To assess the activation status of microglia with and without T cell depletion, we immunohistochemically analysed the parenchyma from TE4 IgG- and TE4 anti-T-treated mice using antibodies to P2RY12, MHC class II proteins and CD11c (Fig. 5f,g ). We found significant elevation of P2RY12 + microglia and reduced microglia positive for MHC class II proteins and CD11c + microglia (Fig. 5k–m ) in the anti-T antibody-treated mice, suggesting that microglia shift from activated towards a more homoeostatic state after T cell depletion. scRNA-seq analysis of microglia from anti-T antibody- versus the IgG control-treated mice also revealed strong suppression of different aspects of the disease-related microglia signature and an increase in the homoeostatic signature (Extended Data Fig. 9c–e ). To assess tau pathology following T cell depletion, we analysed p-tau immunoreactivity in the hippocampus and found a significant reduction in p-tau in anti-T-treated mice (Fig. 5n ). Four major p-tau staining patterns, designated types 1–4, strongly correlated with the level of brain atrophy, with type 1 associated with the most preserved brain tissue and type 4 associated with the greatest atrophy.
Depletion of T cells resulted in a significant shift of p-tau staining pattern towards the earliest disease stage (Fig. 5o,p ). We also assessed plasma protein levels of neurofilament light chain, a marker of neuroaxonal damage and neurodegeneration 43 . The concentration of neurofilament light chain in T cell-depleted mice was significantly reduced (Fig. 5q ). Behavioural performance assessment revealed that after depletion of T cells, nest-building behaviour in 9.5-month-old TE4 mice was significantly improved (Extended Data Fig. 10f ). We also assessed an additional cohort of TE4 mice that we treated with anti-T antibody versus the IgG control from 6 months to 8.5 months of age. Assessment of behavioural performance revealed that depletion of T cells resulted in significant improvement in two additional behaviours. Alternation in a Y maze (assessing short-term memory and exploratory behaviour) and freezing in response to an auditory cue (assessing amygdala-dependent memory) were significantly improved (Fig. 5r–u and Extended Data Fig. 9g–i ). Freezing behaviour in response to a contextual cue showed a trend towards increased hippocampal-dependent memory after depletion of T cells (Fig. 5t ). Both groups showed similar baseline levels of general exploratory behaviour, and locomotor activity levels (Extended Data Fig. 9g,h ) and response to tone–shock pairing in the fear conditioning test (Fig. 5s ). Together, these data demonstrate that T cell depletion decreases functional decline. Fig. 5: Depletion of T cells ameliorates inflammation, tauopathy and brain atrophy, and improves behaviour. a , Representative images of brain sections from 9.5-month-old mice. Scale bar, 1 mm. b – d , Volumes of brain regions in 9.5-month-old mice (E4 IgG: n = 11; E4 anti-T: n = 10; TE4 IgG: n = 8; TE4 anti-T: n = 12). * P = 0.0112, * P = 0.0397 and * P = 0.0313 for TE4 IgG versus TE4 anti-T for hippocampus, piriform–entorhinal cortex and posterior lateral ventricle, respectively (unpaired two-tailed Student’s t -test). e , IBA1 and CD3 staining in 9.5-month-old mice. Scale bar, 20 μm. f , P2RY12, MHC class II protein and IBA1 staining in 9.5-month-old mice. Scale bar, 50 μm. g , CD11c, CD8 and IBA1 staining in 9.5-month-old mice. Scale bar, 50 μm. h , Quantification of CD3 + T cells in the dentate gyrus per 0.3 mm 2 in 9.5-month-old mice (E4 IgG: n = 11; E4 anti-T: n = 11; TE4 IgG: n = 11; TE4 anti-T: n = 11). *** P = 0.0004 (unpaired two-tailed Student’s t -test). i , CD8 + T cells per 0.3 mm 2 of the dentate gyrus in 9.5-month-old mice (TE4 IgG: n = 11; TE4 anti-T: n = 11). *** P = 0.0002 (unpaired two-tailed Student’s t -test). j – n , Quantification of the immunostained areas in 9.5-month-old mice (TE4 IgG: n = 11 and TE4 anti-T: n = 11 for j – m ; TE4 IgG: n = 8 and TE4 anti-T: n = 12 for n ). *** P = 0.0002, * P = 0.0229, *** P = 0.0004, *** P = 0.0002 and *** P = 0.0002 for area of IBA1, P2RY12, MHC class II proteins, CD11c and AT8, respectively (unpaired two-tailed Student’s t -test). o , Distinct p-tau staining patterns. p , Distribution of the four p-tau staining patterns in 9.5-month-old mice. ** P = 0.007 for distribution between TE4 IgG and TE4 anti-T (Fisher’s exact test) q , Concentration of neurofilament light chain (NfL) in the plasma of 9.5-month-old mice (E4 IgG: n = 11; E4 anti-T: n = 11; TE4 IgG: n = 11; TE4 anti-T: n = 11). * P = 0.0398 (unpaired two-tailed Student’s t -test). r , Behaviour of 8.5-month-old mice in a Y maze (TE4 IgG: n = 10; TE4 anti-T: n = 15). 
* P = 0.0239 (unpaired two-tailed Student’s t -test). s , Tone–shock pairing, day 1 (TE4 IgG: n = 10; TE4 anti-T: n = 15). P = 0.2152 (two-way ANOVA, with Bonferroni post hoc comparisons test). t , Freezing in response to a contextual cue, day 2 (TE4 IgG: n = 10; TE4 anti-T: n = 15). P = 0.067 (two-way ANOVA, with Bonferroni post hoc comparisons test) for the treatment with TE4 IgG and TE4 anti-T. u , Freezing in response to an auditory cue, day 3 (TE4 IgG: n = 10; TE4 anti-T: n = 15). ** P = 0.0118 (two-way ANOVA, with Bonferroni post hoc comparisons test) for the treatment with TE4 IgG and TE4 anti-T, ** P = 0.0099 for 8 min, P = 0.055 for 9 min and ** P = 0.007 for 10 min. Data in b – d , h – n , q – u are mean ± s.e.m. Full size image Immune checkpoints are regulatory pathways for maintaining systemic immune homoeostasis and tolerance 44 . PDCD1 is a checkpoint protein expressed on T cells, which processes inhibitory signals to control the magnitude of adaptive immune responses and tolerance 45 . Previous reports indicate that PDCD1 immune checkpoint blockade decreases cognitive impairment in mouse models with Alzheimer’s disease pathology 46 , 47 . PDCD1 blockade can lead to increased activation of exhausted CD8 + T cells or enhanced immunosuppression through increased activation of PDCD1 + CD4 + T reg cells 48 . To investigate whether PDCD1–PDL1 blockade could be effective in tauopathy, we administered anti-PDCD1 treatment to TE4 mice from 8 months to 9.5 months of age, a time window in which brain atrophy develops. We found that 1 week of acute anti-PDCD1 treatment increased the percentage of FOXP3 + CD4 + T reg cells and PDCD1 + FOXP3 + CD4 + T reg cells, with no obvious changes in KLRG1 + effector CD8 + T cells or total PDCD1 + TOX + CD8 + T cells in the brain (Extended Data Fig. 10a–e ). These results suggest that PDCD1-antibody treatment at this age would increase immunosuppressive CD4 + T reg cells. Consistent with this hypothesis, chronic treatment beginning at 8 months significantly decreased tau-mediated neurodegeneration and p-tau staining (Extended Data Fig. 10f–i ), further supporting a role for T cells in tau-mediated neurodegeneration. In this study, we present a comprehensive map of immune responses at the cellular and molecular level in the brain and meninges during the development of amyloid or tau pathology and neurodegeneration, generated using scRNA-seq and scTCR-seq. We find that an immunological hub involving activated microglia and T cells is overrepresented in brain regions with tauopathy and neuronal loss. Although evidence regarding the pathological changes and the role of microglia in Alzheimer’s disease is emerging, here we expand on the immune microenvironment in the setting of tauopathy and neurodegeneration by assessing a previously less examined adaptive immunological response involving T cells and their interaction with cells in the brain. T cells dynamically shift from activated to exhausted states with unique TCR clonal expansion. We also present direct evidence that breaking the neurodegeneration-associated immune hub between activated microglia and infiltrated T cells effectively prevents neurodegeneration and decreases cognitive decline.
As an innate primary response, microglia seem to have a protective role in the presence of amyloid plaques (restrict plaque growth and local damage) or a pro-inflammatory, damaging role in the presence of tau pathology (response to neuronal damage and aggregated tau, leading to severe neurodegeneration) in Alzheimer’s disease 12 . Here we found that CD11c and MHC class II protein expression strongly increased in microglia specifically in regions of the brain with atrophy. Genes encoding MHC class I and II proteins were highly upregulated in activated microglia in tauopathy. We also discovered adaptive immune responses in both a mouse model of tauopathy and brain samples from patients with Alzheimer’s disease, finding that T cells are present in the brain parenchyma and also that their enrichment highly correlates with the severity of brain atrophy. Removal and modulation of T cells rescued the brain atrophy and highlighted that T cells have an important role in neurodegeneration. The complex nature of the central nervous system (CNS) necessitates its own specialized immunological adaptations to detect and respond to environmental changes. Here we found significantly different proportions of T cells in the meninges and brain parenchyma. These results highlight that CNS-border and CNS-resident T cells are functionally different in accordance with their immune niche. The local tauopathy-related microenvironment in the brain parenchyma is likely to be instructive for recruiting and guiding the transformation of T cells. The interaction of T cells with APCs has been well established in peripheral systems 49 and CNS borders 32 . Here our findings raise a fundamental question regarding the interaction of T cells with APCs in the brain parenchyma. We find that T cells actively interact with the disease-related microglia subgroups. Depletion of microglia largely abolishes T cell infiltration and depletion of T cells also remarkably hinders microglia activation, demonstrating communication between the innate and adaptive family of immune cells. In combined scRNA-seq and scTCR-seq analyses, we uncover unique clonal expansion of T cells enriched in the parenchyma with tauopathy and neurodegeneration. Defining which antigens result in T cell activation, such as variously modified forms of tau, other proteins or myelin debris released by damaged neurons that are subsequently presented to adaptive immune cells within tauopathy and Alzheimer’s disease, remains an intriguing question. Sequencing TCRs at the single-cell level combined with high-throughput peptide screening would enable elucidation of the specific antigens, which might in turn yield pathological stage-specific therapeutic strategies. Microglia express many pattern-recognition receptors that bind and internalize foreign misfolded proteins 50 . We found that T cell infiltration did not increase in the tau mice lacking APOE. Therefore, a previously overlooked immunomodulatory function of APOE may serve as an important mechanism linking both innate and adaptive immunity. Mapping the disease-state-specific interlink between microglia and T cells, including their signalling communications, presented antigens and pathophysiological responses, will be a key nexus to set up unique therapeutic interventions to prevent or reverse brain atrophy and neurodegeneration in tauopathies. 
Methods Animals Human-APOE-knock-in mice, APOE3 and APOE4 (E3 and E4, respectively), were generated by replacing the mouse genomic sequence from the translation initiation codon in exon 2 to the termination codon in exon 4 with its human counterparts flanked by loxP sites 51 . Tau(P301S) transgenic mice (Jax, no. 008169) on a C57BL/6 background were crossed to human-APOE-knock-in or Apoe-knockout mice (Jax, no. 002052) to generate P301S/E3 (TE3), P301S/E4 (TE4) and P301S/EKO (TEKO) mice, respectively. All tau transgenic mice involved in the final analysis were obtained from the same generation. A/PE4 and 5xE4 mice have been described previously 8 , 51 . Littermates of the same sex were randomly assigned to experimental groups. All animal procedures and experiments were carried out under guidelines approved by the Institutional Animal Care and Use Committee at Washington University School of Medicine. Human Alzheimer’s disease tissues All participants gave prospective pre-mortem written consent for their brain to be banked and used for research, with information to potentially be published, under procedures approved by the human institutional review board at Banner Sun Health Research Institute. Patient demographics are available in Supplementary Table 2 . Volumetric analysis The left hemi-brain of each mouse was fixed with 4% paraformaldehyde for 24 h at 4 °C and then placed in 30% sucrose at 4 °C overnight. Serial free-floating coronal sections were cut from the rostral crossing of the corpus callosum to the caudal end of the hippocampus at 50 μm on a Leica SM2010 microtome. Brain sections (spaced 300 μm apart) from bregma −1.3 mm to −3.1 mm were mounted for volumetric analysis. All mounted sections were stained with 0.1% Sudan black (Sigma, 199664-25G) in 70% ethanol at room temperature for 20 min, and then washed in 70% ethanol for 50 s, three times. The sections were washed in Milli-Q water three times and covered with Fluoromount-G (Southern Biotech, 0100-01). Slides were scanned using a Hamamatsu NanoZoomer microscope at ×20 magnification. Hippocampus, piriform–entorhinal cortex and ventricles were traced using NDP viewer. The volume was calculated using the formula: volume = (sum of areas) × 0.3 mm × number of sections. The experimenter assessing brain volumes was blinded to experimental groups. Immunohistochemistry Two sections from each mouse (300 μm apart), corresponding approximately to bregma coordinates −1.4 mm and −1.7 mm, were used for p-tau staining. Brain sections were washed in Tris-buffered saline (TBS) for 3 min, followed by incubation in 0.3% hydrogen peroxide in TBS for 10 min at room temperature. After three washes in TBS, sections were blocked with 3% milk in TBS with 0.25% Triton X-100 (TBSX) for 1 h at room temperature, followed by incubation with AT8-biotinylated antibody (Thermo Scientific, MN1020B) overnight at 4 °C. The next day, after three washes in TBS, the slices were developed with the VECTASTAIN Elite ABC-HRP kit (Vector Laboratories, PK-6100) following the manufacturer’s instructions. Slides were covered with Cytoseal 60 (Thermo Scientific, 8310-4) and scanned using a Hamamatsu NanoZoomer microscope at ×20 magnification. Images were analysed by ImageJ. For immunofluorescent staining, two sections (bregma −2.0 mm and −2.3 mm) from each mouse were used. The sections were washed three times in TBS, permeabilized with 0.25% TBSX for 10 min, and then blocked with 3% BSA in 0.25% TBSX for 1 h at room temperature.
Sections were incubated in primary antibodies overnight at 4 °C. The next day, sections were washed in TBS and incubated with the corresponding fluorescence-labelled secondary antibodies for 1.5 h at room temperature. The slices were washed and mounted in Prolong Gold Antifade mounting medium (Invitrogen, P36930). Primary antibodies were as follows: CD3 (Novus, NB600-1441, 1:200), CD8 (Invitrogen, MA1-145, 1:100), IBA1 (Wako, 019-19741, 1:2,000; Abcam, ab5076, 1:500), AT8 (Invitrogen, MN1020B, 1:500), amyloid-β (made in house, HJ3.4B, 1:1,000), P2RY12 (gift from Butovsky lab, 1:2,000), NeuN (Abcam, ab177487, 1:1,000), myelin basic protein (Abcam, ab7349, 1:500), MHC class II protein (Biolegend, 107650, 1:200), X34 (Sigma, 1954-25MG, 10 mM in dimethylsulfoxide stock, 1:5,000), CD206 (Bio-Rad, MCA2235, 1:300), Hoechst (Sigma, 94403, 1:5,000). Secondary antibodies were as follows: donkey anti-rat 488 (Invitrogen, A21208, 1:500), donkey anti-rabbit 405 (Invitrogen, A48258, 1:500), donkey anti-rabbit 568 (Invitrogen, A10042, 1:500), streptavidin 568 (Invitrogen, S11226, 1:500), donkey anti-goat 647 (Invitrogen, A21447, 1:500). Images were acquired on a Zeiss LSM800 microscope. Areas covered by antibody–fluorophores and their numbers were analysed by ImageJ. Three-dimensional reconstruction was carried out using Imaris 9.7.0 software. CD3, IBA1, CD8 and CD11c were labelled and detected with fluorophores using the surface area function. PLX3397 formulation and supplementation PLX3397 was purchased from SelleckChem and formulated in AIN-76A chow (Research Diet) at a concentration of 400 mg per kilogram of chow. E4 and TE4 mice were treated with PLX3397 for 4 weeks for acute microglial depletion, from 8.5 to 9.5 months of age. IFNγ treatment To block IFNγ signalling, mice were injected intraperitoneally with 100 mg per kilogram of body weight of either control IgG (Leinco, P376) or anti-mouse IFNγ (Leinco, clone H22, I-1190) antibody 52 every 5 days from 7.5 to 9.5 months of age. Anti-PDCD1 treatment To block PDCD1–PDL1 signalling chronically, mice were injected intraperitoneally with 500 μg anti-PDCD1 antibody (BioXCell, BP0146) every 5 days from 8 to 9.5 months of age. IgG (BioXCell, BP0089) isotype control was administered at the same frequency and dosage. Brains were collected for flow cytometry assessment of T cell populations. To characterize the T cell populations with anti-PDCD1 treatment, mice were acutely treated with 500 μg anti-PDCD1 or IgG every 2 days. At day 7, after perfusion, brains were isolated for single-cell analysis by flow cytometry. Intracellular staining for transcription factors was carried out using the eBioscience FOXP3/Transcription Factor Kit (ref. 00-5523-00) per the manufacturer’s instructions. In brief, cells were stained with the LIVE/DEAD Fixable Aqua Dead Cell Stain Kit (Invitrogen, ref. L34966A) for 5 min and then incubated with surface antibody mix and TruStain FcX PLUS (anti-mouse CD16/CD32, clone S17011E, Biolegend, ref. 156604, 1:200) for 1 h at room temperature. After cell-surface staining, cells were fixed, permeabilized and incubated with intracellular antibody mix overnight at 4 °C. Flow cytometry was carried out on a BD Symphony A3. The following antibodies were used: CD45.2 (Biolegend, 104), CD4 (Biolegend, GK1.5), PDCD1 (Biolegend, 29F.1A12), KLRG1 (Biolegend, 2F1/KLRG1), CD3e (BD, 145-2C11), CD8a (BD, 53-6.7), FOXP3 (Invitrogen, FJK-16s), TOX (Invitrogen, TXRX10).
T cell depletion For the depletion of CD4 + and CD8 + T cells, mice were intraperitoneally injected with 500 μg anti-CD4 (BioXCell, BP0003-1) and anti-CD8 antibody (BioXCell, BP0061) every 5 days from 6 to 9.5 months of age or for memory-related behavioural experiments from 6 to 8.5 months of age. IgG (BioXCell, BP0090) isotype control was administered at the same frequency and dosage. To characterize the depletion efficiency, mice were acutely treated with 500 μg anti-CD4, or anti-CD8 or IgG. Brain, meninges and blood were extracted for single-cell analysis followed by flow cytometry assessment of CD4 + and CD8 + T cell populations. Brain extraction Mouse cortex tissue was weighed and homogenized using a pestle with 10 μl buffer per 1 mg tissue in chilled lysis buffer (Thermo Scientific, 78503). After centrifugation at 20,000 g for 10 min at 4 °C, the supernatant was saved and protein concentration was measured by micro BCA protein assay kit (Thermo Scientific, 23235) before multiplex immunoassay (Thermo Scientific). Nest-building behaviour Group-housed mice were switched to individual housing in the week of assessment at 9.5 months. A pre-weighed nestlet was provided in each cage. After overnight housing, the remaining nestlet was weighed. A 5-point scale system was assigned on the basis of the percentage of remaining nesting material and shredded conditions. Score 1: nestlet >90% untorn; score 2: nestlet 50–90% untorn; score 3: nestlet 10–50% untorn; score 4: nestlet <10% untorn, but nest is flat and uncompact; score 5: nest is compact and nest wall is higher than the mouse for >50% of its circumference. General design of behavioural tests TE4 male mice were treated with IgG or with anti-CD4 and anti-CD8 antibodies for T cell depletion from 6 to 8.5 months of age. They were then tested for behavioural differences. Following 1-week habituation and handling in the Washington University Animal Behavior Core, mice were evaluated on 1 h locomotor activity, spontaneous alternation in a Y maze and fear conditioning. All tests were conducted during the light phase of the light–dark cycle. Behavioural testers were blind to the treatment group. One-hour locomotor activity and open-field behaviour test To evaluate general activity levels and possible alterations in emotionality, mice were evaluated over a 1-h period in transparent (47.6 × 25.4 × 20.6 cm high) polystyrene enclosures. Each cage was surrounded by a frame containing a 4 × 8 matrix of photocell pairs, the output of which was fed to an online computer (Hamilton-Kinder, LLC). The system software (Hamilton-Kinder, LLC) was used to define a 33 × 11-cm central zone and a peripheral or surrounding zone that was 5.5 cm wide with the sides of the cage being the outermost boundary. This peripheral area extended along the entire perimeter of the cage. Variables that were analysed included the total number of ambulations and rearing on hindlimbs, as well as the number of entries, the time spent and the distance travelled in the centre area as well as the distance travelled in the periphery surrounding the centre. Spontaneous alternation in Y maze Testing was conducted according to our previously published procedures 53 . In brief, this involved placing a mouse in the centre of a Y maze that contained three arms that were 10.5 cm wide, 40 cm long and 20.5 cm deep with the arms oriented at 120° with respect to each successive other arm. 
Mice were allowed to explore the maze for 10 min and entry into an arm was scored only when the hindlimbs had completely entered the arm. An alternation was defined as any three consecutive choices of three different arms without re-exploration of a previously visited arm. Dependent variables included the number of alternations and arm entries along with the percentage of alternations, which was determined by dividing the total number of alternations by the total number of entries minus 2, and then multiplying by 100. Conditioned fear A previously described protocol 54 was used to train and test mice using two clear-plastic conditioning chambers (26 × 18 × 18 cm high; Med-Associates) that were easily distinguished by different olfactory, visual and tactile cues present in each chamber. On day 1, each mouse was placed into the conditioning chamber for 5 min and freezing behaviour was quantified during a 2 min baseline period. Freezing (no movement except that associated with respiration) was quantified using FreezeFrame image analysis software (Actimetrics) that allows for simultaneous visualization of behaviour while adjusting for a ‘freezing threshold’ during 0.75-s intervals. After baseline measurements, a conditioned stimulus consisting of an 80-dB tone (white noise) was presented for 20 s followed by an unconditioned stimulus consisting of a 1-s, 1.0-mA continuous foot shock. This tone–shock (T–S) pairing was repeated each minute over the next 2 min, and freezing was quantified after each of the three tone–shock pairings. Twenty-four hours after training, each mouse was placed back into the original conditioning chamber to test for fear conditioning to the contextual cues in the chamber. This involved quantifying freezing over an 8-min period without the tone or shock being present. Twenty-four hours later, the mice were evaluated on the auditory cue component of the conditioned fear procedure, which included placing each mouse into the other chamber containing distinctly different cues. Freezing was quantified during a 2-min ‘altered context’ baseline period as well as over a subsequent 8-min period during which the auditory cue (conditioned stimulus) was presented. Shock sensitivity was evaluated following completion of the conditioned fear test as previously described 55 . Concentration of neurofilament light chain The concentration of neurofilament light chain in plasma was measured with NF-Light Simoa Assay Advantage kit (Quanterix) by an experimenter blinded to experimental groups. Single-cell isolation Mechanical dissociation was carried out as previously described 56 . In brief, mice were perfused with pre-chilled PBS to fully remove blood contamination. Hippocampus and cortex were dissected followed by Dounce homogenization. Cell suspensions were then passed through Percoll density centrifugation to remove myelin and debris. The cell pellets were washed with 0.5% BSA for analysis or collection. For meninges, meninges were peeled intact from the skullcap using fine forceps and prepared for single-cell analysis as previously described 32 . In brief, meninges were mashed through a cell strainer, using a sterile syringe plunger, and washed in 0.5% BSA. Flow cytometry for single cells All steps were carried out on ice or using a pre-chilled centrifuge set to 4 °C. Single-cell suspensions were incubated with anti-CD16/32 (Fc block; Biolegend) for 5 min and then fluorescently conjugated antibodies were added for 20 min. 
After washing, samples were pelleted by centrifugation at 300 g for 5 min and resuspended in 5% BSA with propidium iodide (PI) for live–dead selection before sorting. Cells were sorted using a FACS Aria II (BD Bioscience). Immune scRNA-seq After single-cell integrity and concentration were assessed, 8,000–16,000 single cells per sample were loaded onto a 10x Genomics Chromium platform for Gel Beads-in-emulsion generation; cDNA carrying cell- and transcript-specific barcodes was generated and sequencing libraries were constructed using the Chromium Single Cell 5′ Library & Gel Bead Kit V2. Libraries were sequenced on the Illumina NovaSeq6000. Single-cell data processing and TCR analysis Alignment, barcode assignment and UMI counting with Cell Ranger (v6.1.1) were used to prepare count matrices for the gene expression library. For alignment, a custom mouse genome (GRCm38) containing human sequences for the APOE , PSEN1 , APP and MAPT genes was used as a reference. Barcodes considered to represent noise and low-quality cells were filtered out in all samples using the knee-inflection strategy available by default in Cell Ranger (v6.1.1) from 10x Genomics. For downstream analysis, the Seurat package (v4.0.4) was used, and genes expressed in fewer than three cells were also filtered from the expression matrices. The fraction of mitochondrial genes was calculated for every cell, and cells above the highest confidence interval for the scaled mitochondrial percentage were filtered out, which removes cells with a mitochondrial percentage of more than 20%. Additionally, a cutoff of log10[number of unique expressed genes] = 2.5 was used to remove cells from both the CD45hi and CD45 total parenchyma samples, and a threshold of 2 was used for cells from the meninges. Doublets were excluded on the basis of co-expression of canonical cell-type-specific genes. Each sample was normalized using the SCTransform function, with mitochondrial content as a variable to regress out in a second non-regularized linear regression. For integration, variable genes across the samples were identified with the SelectIntegrationFeatures function (2,000 features); the object was then prepared for integration (PrepSCTIntegration function), the anchors were found (FindIntegrationAnchors function) and the samples were integrated into a single object (IntegrateData function). Principal component analysis was used for dimensionality reduction, and the first 20 principal components were passed to the RunUMAP function to generate a UMAP embedding. Clustering was carried out with FindNeighbors and FindClusters, using a range of resolutions (from 0.2 to 1.0 in steps of 0.2) and the first 20 principal components as input. The object covering all cells was subsetted into T cell-, microglia- and myeloid-specific sub-objects on the basis of the expression of canonical marker genes, and the T cell object was further split into CD4+ and CD8+ cells. All objects were then passed through an iterative process of quality control, with doublet removal and exclusion of cell types that had no relevant markers, high mitochondrial content or poor coverage (all filters are object-specific).
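The clustering steps enumerated above correspond to a short, standard Seurat (v4) pipeline in R. The following is a minimal reconstruction for illustration only, not the authors' released code: the input paths and sample names are hypothetical, and the thresholds shown inline are the values quoted in the text.

library(Seurat)

# One count matrix per sample from Cell Ranger (paths and names are hypothetical);
# Cell Ranger's default knee-inflection filtering has already removed empty barcodes.
samples <- c(E4 = "cellranger/E4/filtered_feature_bc_matrix",
             TE4 = "cellranger/TE4/filtered_feature_bc_matrix")
objs <- lapply(names(samples), function(s) {
  obj <- CreateSeuratObject(Read10X(samples[[s]]), project = s,
                            min.cells = 3)                       # drop genes seen in <3 cells
  obj$percent.mt <- PercentageFeatureSet(obj, pattern = "^mt-")
  # QC quoted from the text: <=20% mitochondrial reads and
  # log10(number of unique genes) >= 2.5 (2.0 was used for meningeal cells)
  subset(obj, percent.mt <= 20 & log10(nFeature_RNA) >= 2.5)
})

# Per-sample normalization, regressing out mitochondrial content
objs <- lapply(objs, SCTransform, vars.to.regress = "percent.mt")

# Integration across samples on 2,000 shared variable features
features <- SelectIntegrationFeatures(objs, nfeatures = 2000)
objs     <- PrepSCTIntegration(objs, anchor.features = features)
anchors  <- FindIntegrationAnchors(objs, normalization.method = "SCT",
                                   anchor.features = features)
combined <- IntegrateData(anchors, normalization.method = "SCT")

# Dimensionality reduction and clustering on the first 20 principal components,
# over resolutions 0.2-1.0 in steps of 0.2
combined <- RunPCA(combined)
combined <- RunUMAP(combined, dims = 1:20)
combined <- FindNeighbors(combined, dims = 1:20)
combined <- FindClusters(combined, resolution = seq(0.2, 1.0, by = 0.2))

Subsetting into the T cell, microglia and myeloid sub-objects then proceeds from this combined object using the canonical marker genes described above.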
Cell Ranger’s vdj workflow (v6.1.1) was used for TCR data analysis. Non-canonical T cells (such as γδ T cells and NK T cells), as well as T cells with inappropriate combinations of α- and β-chains, were removed. All barcodes were then assigned to two populations on the basis of CD4 and CD8 gene expression. The Gini coefficient was calculated using the immunarch package (v0.6.6) to estimate clonal diversity among samples. Trajectory analysis was carried out with the slingshot method, via the container available in the dynverse package, using as input the normalized count matrices of the barcodes assigned to microglia and, separately, of the cells assigned to CD8+ T cells. Interaction analysis was implemented using the CellChat package (v1.1.3) with the Cell-Cell Contact database. As input data, microglia, CD4+ and CD8+ T cells from the E4 genotype and microglia, CD4+ and CD8+ T cells from the TE4 genotype were used. Following the CellChat vignette, CellChat objects were prepared (createCellChat), overexpressed genes and interactions were identified (identifyOverExpressedGenes and identifyOverExpressedInteractions functions), communication probabilities were estimated (computeCommunProb, filterCommunication and computeCommunProbPathway functions) and network analysis was carried out (aggregateNet and netAnalysis_computeCentrality functions). Genotype-specific as well as genotype-common ligand–receptor pairs were identified (netVisual_bubble function), and the number of interactions was evaluated using the netVisual_circle function.
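As with the clustering steps, the repertoire-diversity and interaction analyses map directly onto the cited packages. The sketch below assumes hypothetical input objects (immdata, expr, meta); the function names are those given in the text, but thresholds such as min.cells are assumptions.

library(immunarch)
library(CellChat)

# Clonal diversity: Gini coefficient per TCR repertoire (immunarch)
immdata <- repLoad("cellranger_vdj/")          # folder of filtered contig files (hypothetical path)
gini    <- repDiversity(immdata$data, .method = "gini")

# Cell-cell interaction analysis for one genotype (for example, TE4);
# expr is a normalized expression matrix and meta$labels holds the
# microglia / CD4+ T cell / CD8+ T cell assignments (assumed input layout)
cc <- createCellChat(object = expr, meta = meta, group.by = "labels")
cc@DB <- subsetDB(CellChatDB.mouse, search = "Cell-Cell Contact")
cc <- subsetData(cc)
cc <- identifyOverExpressedGenes(cc)
cc <- identifyOverExpressedInteractions(cc)
cc <- computeCommunProb(cc)
cc <- filterCommunication(cc, min.cells = 10)  # minimum group size is an assumption
cc <- computeCommunProbPathway(cc)
cc <- aggregateNet(cc)
cc <- netAnalysis_computeCentrality(cc)
netVisual_bubble(cc, remove.isolate = TRUE)    # genotype-specific and -common ligand-receptor pairs
netVisual_circle(cc@net$count)                 # number of interactions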
For microglia–OT-1 co-culture, two doses of IFNγ (100 ng ml −1 and 1,000 ng ml −1 ) were added at the same time. After 3 days, cells were analysed by flow cytometry for T cell proliferation. Flow cytometry and cell sorting were completed on a FACS CantoII or FACS Aria II instrument and analysed using Flowjo (v10). Staining was carried out at 4 °C in the presence of Fc block (2.4G2; Leinco) in magnetic-activated cell-sorting buffer (0.5% BSA, 2 mM EDTA in PBS). The following antibodies were used: CD45 (Biolegend, 30-F11), CD11b (Biolegend, M1/70), I-A/I-E (Biolegend, M5/114.15.2), CD3 (Biolegend, 145-2C11), TCRVβ5 (Biolegend, MR9-4), TCRVα2 (Biolegend, B20.1), CD45.2 (Biolegend, 104), CD45.1 (Biolegend, A20), CD44 (Biolegend, IM7), CD8α (Biolegend, 53-6.7), CD62L (Biolegend, MEL-14). Statistics Statistical analysis was carried out using Prism. Differences between groups were evaluated by Student’s t -test, or one-way or two-way ANOVA followed by post hoc tests. For conditioned fear behaviour, two-way ANOVA followed by Bonferroni test was used. Data are expressed as mean ± s.e.m. *** P < 0.0001; ** P < 0.001; * P < 0.05; NS, no significant difference. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability Immune scRNA-seq sample information, information for samples from patients with Alzheimer’s disease, and immune cell numbers in each cluster in the brain are available in the Supplementary Information . All source data, including sequencing reads and single-cell expression matrices, are available from the Gene Expression Omnibus under accession code GSE221856 . Code availability Code for preprocessing of immune scRNA-seq bioinformatic analysis is available at .
Nearly two dozen experimental therapies targeting the immune system are in clinical trials for Alzheimer's disease, a reflection of the growing recognition that immune processes play a key role in driving the brain damage that leads to confusion, memory loss and other debilitating symptoms. Many of the immunity-focused Alzheimer's drugs under development are aimed at microglia, the brain's resident immune cells, which can injure brain tissue if they're activated at the wrong time or in the wrong way. A new study from researchers at Washington University School of Medicine in St. Louis indicates that microglia partner with another type of immune cell—T cells—to cause neurodegeneration. Studying mice with Alzheimer's-like damage in their brains due to the protein tau, the researchers discovered that microglia attract powerful cell-killing T cells into the brain, and that most of the neurodegeneration could be avoided by blocking the T cells' entry or activation. The findings, published March 8 in the journal Nature, suggest that targeting T cells is an alternative route to preventing neurodegeneration and treating Alzheimer's disease and related diseases involving tau, collectively known as tauopathies. "This could really change the way we think about developing treatments for Alzheimer's disease and related conditions," said senior author David M. Holtzman, MD, the Barbara Burton and Reuben M. Morriss III Distinguished Professor of Neurology. "Before this study, we knew that T cells were increased in the brains of people with Alzheimer's disease and other tauopathies, but we didn't know for sure that they caused neurodegeneration. These findings open up exciting new therapeutic approaches. Some widely used drugs target T cells."

CD3 and IBA1 staining in hippocampus of TE4 mice. IBA1 (red) and CD3 (green) staining in 9.5-month-old TE4 mice with tau pathology in DG. Scale bar, 10 μm. Credit: Nature (2023). DOI: 10.1038/s41586-023-05788-0

"Fingolimod, for example, is commonly used to treat multiple sclerosis, which is an autoimmune disease of the brain and spinal cord. It's likely that some drugs that act on T cells could be moved into clinical trials for Alzheimer's disease and other tauopathies if these drugs are protective in animal models." Alzheimer's develops in two main phases. First, plaques of the protein amyloid beta start to form. The plaques can build up for decades without obvious effects on brain health. But eventually, tau also begins to aggregate, signaling the start of the second phase. From there, the disease quickly worsens: the brain shrinks, nerve cells die, neurodegeneration spreads, and people start having difficulty thinking and remembering. Microglia and their role in Alzheimer's have been intensely studied. The cells become activated and dysfunctional as amyloid plaques build up, and even more so once tau begins to aggregate. Microglial dysfunction worsens neurodegeneration and accelerates the course of the disease. First author Xiaoying Chen, Ph.D., an instructor in neurology, wondered about the role of other, less studied immune cells in neurodegeneration. She analyzed immune cells in the brains of mice genetically engineered to mimic different aspects of Alzheimer's disease in people, looking for changes to the immune cell population that occur over the course of the disease. Mirroring the early phase of the disease in people, two of the mouse strains build up extensive amyloid deposits but do not develop brain atrophy.
A third strain, representative of the later phase, develops tau tangles, brain atrophy, neurodegeneration and behavioral deficits by 9½ months of age. A fourth mouse strain does not develop amyloid plaques, tau tangles or cognitive impairments; it was studied for comparison. Along with Chen and Holtzman, the research team included Maxim N. Artyomov, Ph.D., the Alumni Endowed Professor of Pathology & Immunology, and Jason D. Ulrich, Ph.D., an associate professor of neurology, among others. The researchers found many more T cells in the brains of tau mice than the brains of amyloid or comparison mice. Notably, T cells were most plentiful in the parts of the brain with the most degeneration and the highest concentration of microglia. T cells were similarly abundant at sites of tau aggregation and neurodegeneration in the brains of people who had died with Alzheimer's disease. Additional mouse studies indicated that the two kinds of immune cells work together to create an inflammatory environment primed for neuronal damage. Microglia release molecular compounds that draw T cells into the brain from the blood and activate them; T cells release compounds that push microglia toward a more pro-inflammatory mode. Eliminating either microglia or T cells broke the toxic connection between the two and dramatically reduced damage to the brain. For example, when tau mice were given an antibody to deplete their T cells, they had fewer inflammatory microglia in their brains, less neurodegeneration and atrophy, and an improved ability to perform tasks such as building a nest and remembering recent things. "What got me very excited was the fact that if you prevent T cells from getting into the brain, it blocks the majority of the neurodegeneration," Holtzman said. "Scientists have put a lot of effort into finding therapies that prevent neurodegeneration by affecting tau or microglia. As a community, we haven't looked at what we can do to T cells to prevent neurodegeneration. This highlights a new area to better understand and therapeutically explore."
10.1038/s41586-023-05788-0
Earth
Extensive deep coral reefs in Hawaii harbor unique species and high coral cover
Richard L. Pyle et al, A comprehensive investigation of mesophotic coral ecosystems in the Hawaiian Archipelago, PeerJ (2016). DOI: 10.7717/peerj.2475 Journal information: PeerJ
http://dx.doi.org/10.7717/peerj.2475
https://phys.org/news/2016-10-extensive-deep-coral-reefs-hawaii.html
Abstract Although the existence of coral-reef habitats at depths to 165 m in tropical regions has been known for decades, the richness, diversity, and ecological importance of mesophotic coral ecosystems (MCEs) have only recently become widely acknowledged. During an interdisciplinary effort spanning more than two decades, we characterized the most expansive MCEs ever recorded, with vast macroalgal communities and areas of 100% coral cover between depths of 50–90 m extending for tens of km² in the Hawaiian Archipelago. We used a variety of sensors and techniques to establish geophysical characteristics. Biodiversity patterns were established from visual and video observations, and from specimens collected on submersible, remotely operated vehicle, and mixed-gas SCUBA and rebreather dives. Population dynamics, based on age, growth and fecundity estimates of selected fish species, were obtained from laser-videogrammetry, specimens, and otolith preparations. Trophic dynamics were determined using carbon and nitrogen stable isotopic analyses on more than 750 reef fishes. MCEs are associated with clear water and suitable substrate. In comparison to shallow reefs in the Hawaiian Archipelago, inhabitants of MCEs have lower total diversity, harbor new and unique species, and have higher rates of endemism in fishes. Fish species present at shallow and mesophotic depths have similar population and trophic (except benthic invertivores) structures and high genetic connectivity, with lower fecundity at mesophotic depths. MCEs in Hawai‘i are widespread but associated with specific geophysical characteristics. High genetic, ecological and trophic connectivity establish the potential for MCEs to serve as refugia for some species, but our results question the premise that MCEs are more resilient than shallow reefs. We found that endemism within MCEs increases with depth, and our results do not support suggestions of a global faunal break at 60 m. Our findings enhance the scientific foundations for conservation and management of MCEs, and provide a template for future interdisciplinary research on MCEs worldwide. Cite this as: Pyle RL, Boland R, Bolick H, Bowen BW, Bradley CJ, Kane C, Kosaki RK, Langston R, Longenecker K, Montgomery A, Parrish FA, Popp BN, Rooney J, Smith CM, Wagner D, Spalding HL. 2016. A comprehensive investigation of mesophotic coral ecosystems in the Hawaiian Archipelago. PeerJ 4: e2475. Main article text Introduction Tropical coral reefs are compelling subjects for a wide range of scientific investigations because they provide an optimal combination of high diversity, extensive existing data, robust information infrastructure, large potential for the discovery of new taxa, and opportunities to gain new insights into fundamental ecological dynamics ( Reaka-Kudla, 1997 ). They are also among the most severely threatened ecosystems on Earth ( Pandolfi et al., 2003 ; Knowlton et al., 2010 ). It has become increasingly evident in recent years that anthropogenic impacts, such as overharvesting, pollution, coastal development, invasive species, ocean acidification, and global climate change, imperil the health of coral-reef ecosystems worldwide ( Bruno & Selig, 2007 ; Burke et al., 2011 ). Although the vast majority of known hermatypic coral reefs occur at depths of less than 40 m, there is longstanding evidence for photosynthetic corals and associated reef communities at greater depths.
Zooxanthellate hermatypic corals have been found at 98 m in the tropical Atlantic ( Hartman, 1973 ; Fricke & Meischner, 1985 ; Reed, 1985 ), below 100 m in the Caribbean ( Locker et al., 2010 ; Bongaerts et al., 2015 ; Garcia-Sais et al., 2014 ; Smith et al., 2014 ), 112 m at Enewetak ( Colin et al., 1986 ), 125 m on the Great Barrier Reef ( Englebert et al., 2014 ), 145 m in the Red Sea ( Fricke & Schuhmacher, 1983 ), 153 m in Hawai‘i, and 165 m at Johnston Atoll ( Strasburg, Jones & Iversen, 1968 ; Maragos & Jokiel, 1985 ; Kahng & Maragos, 2006 ). Hopley (1991) reported 100% coral cover at 70 m on the Great Barrier Reef, and Jarrett et al. (2005) reported up to 60% coral cover at 60–75 m at Pulley Ridge in the Gulf of Mexico. Photosynthetic algae have been observed at similar or greater depths ( Porter, 1973 ; Littler et al., 1985 ; Colin et al., 1986 ; Hillis-Colinvaux, 1986 ), and fish species at such depths belong almost exclusively to families typical of shallower coral-reef environments ( Pyle, 1996b ; Pyle, 1999a ). Despite these scattered reports, coral-reef environments at depths greater than 30 m remain poorly characterized, largely because of the logistical difficulties of accessing such depths ( Pyle, 1996c ; Pyle, 1998 ; Pyle, 1999b ; Pyle, 2000 ; Parrish & Pyle, 2001 ). Potentially thousands of species have yet to be discovered and scientifically described from deeper coral-reef habitats ( Pyle, 1996d ; Pyle, 2000 ; Rowley, 2014 ), and the basic ecology and population dynamics of these communities, as well as their connectivity with shallow reefs, are just beginning to be explored. Most coral-reef monitoring programs are designed to target shallow reefs ( Jokiel et al., 2001 ; Brown et al., 2004 ; Preskitt, Vroom & Smith, 2004 ; Kenyon et al., 2006 ). In recent years, there has been a greater effort to document coral-reef ecosystems at depths of 30 to over 150 m, now referred to as “Mesophotic Coral Ecosystems” (MCEs) ( Hinderstein et al., 2010 ; Baker, Puglise & Harris, 2016 ). These research efforts have primarily focused on aspects of MCEs that are relevant to management policies, such as their distribution, ecology and biodiversity, as MCEs have been identified as a conservation priority ( Blyth-Skyrme et al., 2013 ; Sadovy de Mitcheson et al., 2013 ). However, despite the growing body of research targeting MCEs, they are often not included in reef assessment and monitoring programs, management-related reports on the status and health of coral reefs ( Brainard et al., 2003 ), or general overviews of coral-reef science ( Trenhaile, 1997 ). Most studies of coral-reef development (and the models derived from them) ( Dollar, 1982 ; Grigg, 1998 ; Braithwaite et al., 2000 ; Rooney et al., 2004 ) and coral-reef ecology ( Luckhurst & Luckhurst, 1978 ; Friedlander & Parrish, 1998 ; Friedlander & DeMartini, 2002 ; Friedlander et al., 2003 ) do not include MCEs. Indeed, most of our understanding of coral-reef ecosystems is biased by the preponderance of data from depths of less than 30 m, which represents less than one-fifth of the total depth range of the tropical coral-reef environment ( Pyle, 1996b ; Pyle, 1999a ). An understanding of MCEs is essential to successfully characterize the health of coral reefs in general, and to formulate effective management plans in the face of increasing anthropogenic stress.
Coral-reef environments within the Hawaiian Archipelago have been extensively studied and documented for decades ( Maragos, 1977 ; Chave & Malahoff, 1998 ; Hoover, 1998 ; Mundy, 2005 ; Randall, 2007 ; Fletcher et al., 2008 ; Grigg et al., 2008 ; Jokiel, 2008 ; Rooney et al., 2008 ; Toonen et al., 2011 ; Selkoe et al., 2016 ). These islands and reefs stretch over 2,500 km across the north-central tropical Pacific Ocean, and consist of the eight Main Hawaiian Islands (MHI) in the southeast, and a linear array of uninhabited rocky islets, atolls, reefs, and seamounts comprising the Northwestern Hawaiian Islands (NWHI) ( Fig. 1 ). Many Hawaiian reefs are protected by local, state and federal laws, with a wide range of management and conservation efforts already in place. In particular, the NWHI fall within the Papahānaumokuākea Marine National Monument, a federally protected area larger than all U.S. National Parks combined (>360,000 km²), which is listed as a World Heritage site and includes about 10% of coral-reef habitats within U.S. territorial waters ( Rohmann et al., 2005 ). Figure 1: Map of the Hawaiian Archipelago. Source Imagery: Landsat. DOI: 10.7717/peerj.2475/fig-1 The first investigations of MCEs within the Hawaiian Archipelago were conducted in the 1960s with SCUBA ( Grigg, 1965 ) and submersibles ( Brock & Chamberlain, 1968 ; Strasburg, Jones & Iversen, 1968 ). These early investigations found an unexpected abundance of reef-associated species (including hermatypic corals) at depths from 25 to 180 m. These studies also revealed that many species of fishes previously believed to be restricted to shallow water inhabit much greater depths than expected. In the decades that followed, a smattering of publications reported on MCEs within the Hawaiian Archipelago ( Grigg, 1976 ; Agegian & Abbott, 1985 ; Maragos & Jokiel, 1985 ; Moffitt, Parrish & Polovina, 1989 ; Chave & Mundy, 1994 ; Parrish & Polovina, 1994 ; Pyle & Chave, 1994 ), but most of these involved either a few individual species or habitats, or focused on a broader depth range (with MCEs representing only a small portion of the study). Beginning in the late 1980s, the advent of “technical” mixed-gas diving opened up new opportunities for exploration of MCEs in Hawai‘i and elsewhere ( Pyle, 1996a ; Pyle, 1999b ; Pyle, 2000 ; Grigg et al., 2002 ; Parrish & Pyle, 2002 ; Pence & Pyle, 2002 ; Parrish & Boland, 2004 ; Boland & Parrish, 2005 ; Grigg, 2006 ). In 2006, the discovery of extensive MCEs with near-100% coral cover off Maui, coupled with interest in documenting MCEs in the NWHI and a growing infrastructure supporting mixed-gas diving operations among Hawaiian research institutions, led to a surge of research in these deep-reef environments and a series of collaborative, multi-disciplinary projects dedicated to improving the understanding of MCEs. These projects include (1) the Deep Coral Reef Ecosystem Studies (Deep-CRES) program focused on the MCEs of the ‘Au‘au Channel off Maui and their relationship to shallower reefs, funded by the National Oceanic and Atmospheric Administration’s (NOAA) Center for Sponsored Coastal Ocean Research, (2) two separate studies funded by NOAA’s Coral Reef Conservation Program to study MCEs off Kaua‘i and O‘ahu, and (3) ongoing annual research cruises sponsored by NOAA’s Office of National Marine Sanctuaries to study MCEs within the Papahānaumokuākea Marine National Monument.
These projects, as well as many other smaller surveys over the past two decades, have provided an opportunity for a coordinated effort to explore and document MCEs across the Hawaiian Archipelago. The overarching goal of these activities was to establish a baseline understanding of MCEs in Hawai‘i at depths ranging from 30 to over 150 m, and to provide insights into the structure, composition, ecological dynamics, and management needs of MCEs in general. The primary research activities were driven by a series of hypotheses designed to reveal fundamental characteristics of Hawaiian MCEs and how they compare with both shallow reef habitats and non-MCE habitats at comparable depths. The questions behind these hypotheses involved characterizations in four general categories: (1) basic geophysical habitat (water clarity, temperature, photosynthetically active radiation [PAR], water movement, nutrient levels, and substrate type), (2) patterns of biodiversity (species composition, relative abundance, and overlap, as well as patterns of endemism), (3) population structure and dynamics (growth rates, impact from anthropogenic and natural disturbance, connectivity, disease levels, age structure, fecundity and production), and (4) broad ecological patterns (trophic dynamics and genetic connectivity). In addition, data from these studies were used extensively to develop a spatial model based on physical parameters and other factors to predict the occurrence of MCEs in Hawai‘i and globally ( Costa et al., 2015 ). Ultimately, our hope is that the insights gained from this research, such as the predicted distribution and abundance of MCEs, the richness and uniqueness of the biodiversity they harbor, and the potential for MCEs to serve as refugia for overexploited biological resources on shallow reefs, will help guide future policy decisions in the conservation and management of marine resources in Hawai‘i and elsewhere. Materials and Methods As this synthesis represents a broad summary of MCEs in Hawai‘i, based on the results of a multi-year interdisciplinary collaborative effort by many individuals, the methods involved are extensive and diverse. The following represents a summary of methods used throughout this study, particularly as they pertain to data not previously published elsewhere. More detailed descriptions of methods used during this study, including aspects that have been previously published, are included in Supplemental Information 1 , and within cited publications. The State of Hawai‘i Department of Land and Natural Resources developed Special Activity Permits for the University of Hawai‘i and National Marine Fisheries Service for work related to this project that occurred within State of Hawai‘i waters. All sampling procedures and experimental manipulations were reviewed as part of obtaining the field permit. All vertebrates (fishes) were collected in accordance with University of Hawai‘i IACUC protocol 09-753-5, “Phylogeography and Evolution of Reef Fishes” (PI: Dr. Brian Bowen), including collection and euthanization by spear. Study sites We examined MCEs at multiple sites throughout the Hawaiian Archipelago. The primary MHI study sites were in the ‘Au‘au Channel off Maui, southeast Kaua‘i, and the southern shore of O‘ahu ( Fig. 2 ). Additional qualitative observations of MCEs around the islands of O‘ahu, Kaua‘i, Maui, and Hawai‘i provide complementary insights into general characteristics of MCEs in the MHI. 
Surveys of MCEs in the NWHI included visits to ten islands and reefs labelled in Fig. 1 . Figure 2: Location of study areas. Inset shows remote camera survey (TOAD) track locations, and sites for “John,” “Frank,” “Tele 1” and “Tele 2” data moorings. MHI imagery from Landsat, USGS. DOI: 10.7717/peerj.2475/fig-2 Survey effort Data from MCEs were gathered from mixed-gas rebreather dives, submersible dives, Remotely Operated Vehicle (ROV) dives, and Towed Optical Assessment Device (TOAD camera sled) transects. The submersible and ROV dives were conducted using the Hawai‘i Undersea Research Laboratory’s (HURL) Pisces IV and Pisces V submersibles, and the RCV-150 ROV. On several occasions, both submersibles and rebreather divers conducted simultaneous, coordinated field operations ( Fig. 3 ). Mixed-gas dives in the NWHI were conducted from the NOAA Ship Hi‘ialakai . Dive sites in the MHI were determined by a variety of factors, including previously known MCE habitat, bathymetry data, and direct site identification by submersible, ROV and TOAD, whereas dive sites in the NWHI targeted steep vertical drop-offs at depths of 50–85 m located using historical charts and new multibeam sonar data collected with the Hi‘ialakai . Figure 3: Research divers place a dome over a set of corals 89 m deep. Research divers Ken Longenecker (left), Dave Pence (center) and Christina Bradley (right) place a dome over a set of corals 89 m deep as part of an experiment to determine coral feeding patterns, while pilot Terry Kerby and science observers Brian Popp and Andrea Grottoli watch on from the HURL submersible Pisces V . Photo: RL Pyle. DOI: 10.7717/peerj.2475/fig-3

Table 1: List of all temperature sensors deployed across Maui and Kaua‘i.
Location                      Latitude       Longitude      Depth (m)   Deployment
Makaheuna Point, Kaua‘i       21°51.388N     159°26.003W    46          13 June 2009 to 12 July 2010
Makaheuna Point, Kaua‘i       21°51.388N     159°26.003W    63          13 June 2009 to 12 July 2010
Kipu Kai, Kaua‘i              21°52.460N     159°23.028W    57          17 June 2009 to 13 July 2010
Keyhole Pinnacle, Maui        20°56.437N     156°45.619W    70          6 April 2009 to 17 January 2010
Keyhole Pinnacle, Maui        20°56.452N     156°45.652W    88          6 April 2009 to 17 January 2010
Keyhole Pinnacle, Maui        20°56.452N     156°45.652W    102         6 April 2009 to 17 January 2010
Keyhole Pinnacle, Maui        20°56.454N     156°45.661W    116         6 April 2009 to 17 January 2010
Keyhole Pinnacle, Maui        20°56.478N     156°45.666W    134         6 April 2009 to 17 January 2010
Keyhole Pinnacle, Maui        20°56.478N     156°45.666W    160         6 April 2009 to 17 January 2010
Branching Coral Reef, Maui    20°49.300N     156°40.377W    58          14 December 2009 to 14 December 2010
Stone Walls, Maui             20°52.890N     156°43.794W    34          17 December 2009 to 13 December 2010
Stone Walls, Maui             20°52.890N     156°43.794W    34          17 December 2009 to 13 December 2010
Stone Walls, Maui             20°52.890N     156°43.794W    42          17 December 2009 to 13 December 2010
Stone Walls, Maui             20°52.890N     156°43.794W    46          17 December 2009 to 13 December 2010
Stone Walls, Maui             20°52.890N     156°43.794W    49          17 December 2009 to 13 December 2010
Stone Walls, Maui             20°52.890N     156°43.794W    58          17 December 2009 to 13 December 2010
DOI: 10.7717/peerj.2475/table-1

Geophysical habitat characterization The majority of geophysical habitat characterization focused on the ‘Au‘au Channel site ( Fig. 2 , inset). Existing moderate-resolution (20-m grid) bathymetry was supplemented with multibeam surveys at a resolution of 5 m in a few areas.
Video transects from submersible, ROV and TOAD dives were used to document the spatial distribution of corals and macroalgal communities across >140 linear km of habitat ( Fig. 2 ). Two specific areas were selected for detailed physical oceanographic characterization using four oceanographic moorings with temperature and pressure loggers, a current meter, and a current profiler. Additional temperature data were recorded at four sites between Maui and Kaua‘i ( Table 1 ). Underwater irradiance was measured by lowering a calibrated spherical (4π) quantum sensor (Underwater LI-193SA, LI-COR, Lincoln, NE, USA) through the water via a profiling rig ( n = 6 profiles taken within one hour of noon on 4, 5, and 6 August 2008 and 12, 13, and 14 July 2010 over dense Leptoseris spp. reefs at a maximum depth of 91 m off west Maui); data were stored with a LI-COR LI-1400 datalogger. Biodiversity patterns The biodiversity surveys focused primarily on macroalgae, fishes, marine invertebrates, and corals. Surveys were conducted using a variety of visual, video, and collecting techniques. Direct visual observations were made by trained individuals during mixed-gas SCUBA dives and from submersibles. Videotapes were generated by divers, submersibles, the ROV, and the TOAD camera system. Specimens were collected by divers and with the manipulator arm of the Pisces submersibles ( Fig. 4 ). Qualitative collections and observations were made to determine species presence, and quantitative transects were made to measure the distribution and abundance of species. Depth ranges for algae and fishes known to occur at depths of less than 200 m were determined from quantitative and qualitative surveys, and available published information. All specimens were photo-documented using high-resolution digital cameras, and voucher specimens were deposited in the Bishop Museum Natural Sciences collections. Figure 4: Collecting samples using the Pisces submersible manipulator arm. Photo: HURL. DOI: 10.7717/peerj.2475/fig-4 Figure 5: Colony of Leptoseris sp. being stained with Alizarin Red for growth rate studies. Photo: HURL. DOI: 10.7717/peerj.2475/fig-5 Population dynamics Colonies of Leptoseris were stained in situ using Alizarin Red for subsequent harvesting to determine growth rates ( Fig. 5 ). Additional test colonies of Leptoseris were sent to collaborators at the Woods Hole Oceanographic Institution for computerized axial tomography (CT) scanning, 14C and U/Th (uranium–thorium) dating, and elemental ratio analyses to determine growth rates. Three fish species exploited on shallow reefs and reported from MCEs, Centropyge potteri (Jordan & Metz 1912), Ctenochaetus strigosus (Bennett 1828), and Parupeneus multifasciatus (Quoy & Gaimard 1825), were selected to compare with existing estimates of production and reproductive output in shallow habitats. We collected specimens to describe length–weight and length–fecundity relationships, growth, and size-specific sex ratios, and to estimate size-at-maturity for mesophotic populations. Laser-videogrammetry surveys ( Fig. 6 ) were used to estimate densities and size structure of target species encountered during the belt transects, based on a high-definition video camera fitted with parallel laser pointers. We then reviewed the video and captured still frames when an individual was oriented perpendicular to the laser beam axes and both lasers appeared on the fish.
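From such a frame, the length estimate reduces to a single proportion, because the parallel beams project two dots of known, fixed separation onto the fish. A minimal sketch in R; the laser separation and pixel measurements below are hypothetical illustrations, not the study's calibration values.

# Equivalent-ratio length estimate from one captured video frame
laser_sep_cm <- 10        # physical distance between the parallel lasers (assumed)
laser_sep_px <- 142       # measured pixel distance between the two laser dots
fish_len_px  <- 655       # measured pixel length of the fish

# Parallel beams give a constant cm-per-pixel scale at the fish's distance
fish_len_cm <- fish_len_px * laser_sep_cm / laser_sep_px   # ~46.1 cm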
Because the beams are parallel, the lasers superimpose a reference scale on the side of the fish, allowing length estimates by solving for equivalent ratios, as sketched above. Results of the life-history analysis and laser-videogrammetry surveys were incorporated into a modified Ricker production model to estimate annual biomass production and reproductive output for mesophotic populations. Figure 6: Ross Langston demonstrating the videogrammetry technique for estimating fish sizes. A video camera fitted with parallel lasers (A) superimposes a measurement scale on target fish (B–D). Photos: H Bolick, K Longenecker and R Langston. DOI: 10.7717/peerj.2475/fig-6 Broad trophic characterizations To determine the trophic level of key food web components and functional groups, we conducted stomach content and carbon and nitrogen stable isotopic analyses of reef fishes from 45 species, 30 genera and 18 families ( Bradley et al., in press ). Further isotope analysis was performed on 24 selected species from seven families without stomach content analysis ( Papastamatiou et al., 2015 ). We also used compound-specific isotope analysis of amino acids to estimate trophic positions of fishes ( McClelland & Montoya, 2002 ; McClelland, Holl & Montoya, 2003 ; Pakhomov et al., 2004 ; McCarthy et al., 2007 ; Popp et al., 2007 ; Hannides et al., 2009 ; Hannides et al., 2013 ), using the difference in δ15N values of trophic and source amino acids for the trophic position calculation ( Chikaraishi et al., 2009 ; Bradley et al., 2015 ). Additional isotopic analyses were performed on Galapagos sharks [ Carcharhinus galapagensis (Snodgrass & Heller 1905)] and giant trevally [ Caranx ignobilis (Forsskål 1775)] in the NWHI ( Papastamatiou et al., 2015 ). Figure 7: Generalized diagram of major components of MCEs in the ‘Au‘au Channel, Hawaiian Islands. Illustration by RL Pyle. DOI: 10.7717/peerj.2475/fig-7 Results and Discussion Our intention in this work is to provide a broad characterization of MCEs across the Hawaiian Archipelago based on more than two decades of interdisciplinary and collaborative research, with emphasis on a seven-year effort to document MCEs in Hawai‘i largely funded by NOAA. Some portions of this overall research have already been published elsewhere, and others are presented for the first time herein. For purposes of clarity and cohesiveness, we present both novel and previously published information together: when information has previously been published, we provide appropriate literature citations, and when information is presented for the first time, we indicate it as such. Our investigations revealed that MCEs throughout the Hawaiian Archipelago can be broadly categorized into several distinct habitat types ( Fig. 7 ). The shallowest portions of MCEs (30–50 m) are characterized by a few of the coral species found on shallow Hawaiian reefs, in particular Montipora capitata Dana 1846, Pocillopora meandrina Dana 1846, Pocillopora damicornis (Linnaeus 1758) and Porites lobata Dana 1846. At depths of ∼40–75 m, expanses of low-relief “carpets” of branching M. capitata are found overlying sediment fields, switching to a plate-like or laminar morphology on ledges and rocky slopes ( Rooney et al., 2010 ). All MCE depths had large Halimeda spp. meadows and other dominant macroalgal communities over both hard and soft substrates.
Although these macroalgal communities generally did not comprise major habitats for large-bodied fishes in the MHI (either at MCE depths or in shallower areas), endemic reef-associated fishes were found in macroalgal ( Microdictyon spp.) beds at MCE depths in the NWHI ( Kane, Kosaki & Wagner, 2014 ). Throughout the archipelago, undercut limestone ledges with small caves and other features (the remnants of ancient shorelines) ( Fletcher & Sherman, 1995 ) represent the dominant MCE habitat type at depths of 50–60 m, 80–90 m, and 110–120 m. In certain areas, particularly the ‘Au‘au Channel off Maui and off southeastern Kaua‘i, near-100% Leptoseris coral cover extends for tens of km² at 70–90 m. This habitat type, one of the primary subjects of our investigations, represents the most spatially extensive MCE environment documented to date ( Costa et al., 2015 ). These MCE habitats are often not in close proximity to each other, but are separated by vast areas of sand lacking any rocky reef structure. Some of these sandy areas are characterized by meadows of Halimeda kanaloana (Maui), Avrainvillea sp. and/or Udotea sp. (west and south O‘ahu), while others are devoid of organisms associated with coral-reef ecosystems (i.e., non-MCE habitat within MCE depth ranges). Another MCE habitat within the Hawaiian Archipelago that was not a primary subject of investigation for this work, but for which we have extensive qualitative observations, is the steep slopes and drop-offs characteristic of the island of Hawai‘i (i.e., the “Big Island”), especially at the southeastern end of the archipelago. This habitat is dominated by basaltic rock (rather than the coral and limestone that dominate MCE habitat throughout the rest of the archipelago). Finally, one MCE habitat notably absent from the Hawaiian Archipelago, but prevalent throughout most of the tropical Indo-Pacific, is steep limestone drop-offs, which often extend more or less continuously from shallow-reef depths down to MCE depths and beyond. In the sections that follow, we highlight and summarize the most salient aspects of MCEs throughout the Hawaiian Archipelago. In particular, we compare and contrast patterns across different MCE habitats, different parts of the archipelago, and different taxa, and emphasize both commonalities and differences among these patterns. Geophysical habitat characterization General habitat characterization MCE habitats in different parts of the archipelago were characterized by contrasting geophysical structures and bathymetric profiles. The general bathymetry of MCEs throughout most of the archipelago (except the island of Hawai‘i, which was not a primary study site) is characterized primarily by gradually sloping flat substrate with occasional rocky outcrops of both volcanic and carbonate material. In most areas, the gradually sloping bottom was interrupted by bathymetric discontinuities at approximately 50–60 m, 80–90 m, and 110–120 m depths (the 80–90 m discontinuity is buried in sand throughout most of the NWHI, except for a small exposed area near Pearl and Hermes Atoll). These discontinuities were typically continuous, rocky undercut limestone ledges or steep sandy or limestone slopes parallel to shore. In some locations, such as within the ‘Au‘au Channel, these discontinuities were the result of karstification ( Fig. 2 ).
MCE habitats identified within flat-bottom areas (i.e., between discontinuities) included macroalgal meadows, macroalgal beds, and (especially in the 40–75 m range) expansive low-relief beds of interlocking branching colonies, or laminar tiers, of Montipora spp. Gradually sloping, flat-bottom areas were also commonly surfaced by sand, gravel, rhodoliths and pavement, with very little coral cover. Corals were more common on exposed rock surfaces along rock ledges and outcrops. In contrast to most sites in the Hawaiian Archipelago, the 80–90 m discontinuity within the ‘Au‘au Channel included very few exposed rocky areas at MCE depths, except along very steep walls and in a few areas otherwise dominated by Leptoseris spp. Figure 8: Temperature data from “John” and “Frank” moorings, comparing seasonal and daily fluctuations in water temperature at each of eight different depths off the ‘Au‘au Channel from August 2008 to July 2009. Graphs represent the average daily temperature (A) and the daily standard deviation (SD) (B) at each depth. The thin black line below each depth trace in (B) represents SD = 0, and the thin black line above represents SD = 1; the greater the distance of the color line from the black line below (SD = 0), the more dynamic the daily temperature. SD is based on n = 72 temperature values/day for data recorded at 84 and 123 m, and n = 36 for other depths. DOI: 10.7717/peerj.2475/fig-8 Geophysical factors One of the most important geophysical characteristics of MCEs in Hawai‘i is water clarity. Within the ‘Au‘au Channel, MCEs were found to occur in offshore areas with very clear water, with a diffuse attenuation coefficient (Ko) of 0.041 ± 0.001 m−1. In comparison, nearby areas inshore of west Maui had higher attenuation coefficients (and thus more turbid water), ranging from 0.107 m−1 at 10 m depth to 0.073 m−1 at 30 m depth ( Spalding, 2012 ). The average percent surface irradiance (SI) and irradiance (PAR) values at depth were 10% SI at 34 m (245 ± 15 SE μE m−2 s−1), 1% SI at 90 m (25 ± 3 SE μE m−2 s−1), and 0.1% SI at 147 m (2.5 ± 0.4 SE μE m−2 s−1). The 1% SI level is often referred to as the compensation point, where photosynthesis equals respiration; above this point, there is net photosynthesis and production of organic matter, and below it, respiration exceeds photosynthesis and there is net consumption of organic material ( Kirk, 2011 ). In general, areas with the clearest water also supported the richest and most expansive MCEs. Water temperatures in the ‘Au‘au Channel from August 2008 through July 2009 ranged from just below 21°C to just over 26.5°C throughout the water column over the period sampled. A seasonal temperature cycle was apparent throughout the water column, with the warmest temperatures from September to November and the coolest from February to May. The temperature was consistently 2–3°C cooler at the deep end of the sampled depth range ( Fig. 8A ), with less short-term variability and less seasonal fluctuation. Water temperatures were logged within three depth ranges: shallow (53 and 64 m), middle (73, 84, and 93 m), and deep (102, 112, and 123 m). Water temperatures at the deepest depths were the most stable on a daily basis, whereas temperatures at the middle depths were the most dynamic; the shallowest depths were intermediate in terms of daily thermal stability ( Fig. 8B ).
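The daily summaries plotted in Fig. 8 amount to grouping each mooring's record by depth and calendar day. A minimal R sketch, assuming a simple long-format data frame (the column names are hypothetical):

# temps: one logger reading per row, with columns
#   depth_m (numeric), time (POSIXct) and temp_c (numeric)
temps$day <- as.Date(temps$time)

# Daily mean temperature (Fig. 8A) and daily s.d. (Fig. 8B) at each depth;
# n = 72 readings per day at 84 and 123 m and n = 36 at the other depths
daily_mean <- aggregate(temp_c ~ depth_m + day, data = temps, FUN = mean)
daily_sd   <- aggregate(temp_c ~ depth_m + day, data = temps, FUN = sd)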
Relatively large (1–2°C) short-term (1 day) temperature excursions occurred at 50–75 m; however, temperatures were very consistent on this time scale at the deep site ( Fig. 8B ). The most dynamic temperatures for all but the deepest three depths corresponded with the warmest months (September to November), and the most thermally stable months were the coolest (February to May). We were unable to determine whether MCEs are influenced by tidally correlated vertical thermocline shifts; the establishment of moorings at the deep extreme of the MCE range would provide useful comparative data to test for a tidal effect versus bathymetric forcing. In March of 2009, the “Tele 1” mooring slid down slope, causing the temperature sensor to change depth, so data from this mooring were not included in the analyses. A comparison of temperatures between Maui and Kaua‘i at 34–62 m showed that both islands had a seasonal trend, but Kaua‘i had higher daily and seasonal fluctuations than Maui ( Fig. 9 ). Figure 9: One-year temperature profile in two MCE habitat types at Kaua‘i and Maui. Branching coral ( Montipora ; A) habitat was at approximately 57 m and black coral ( Antipathes ; B) habitat was at 34 to 62 m. DOI: 10.7717/peerj.2475/fig-9 Figure 10: Current magnitude profiles. Sontek 250 kHz Acoustic Doppler Profiler profiles of current magnitude in cm s−1, with overlap shown in the black line between the deeper Frank mooring and shallower John mooring. Broken down by season to show detail (A, Autumn; B, Winter; C, Spring; D, Summer). DOI: 10.7717/peerj.2475/fig-10 Acoustic profiler analysis indicated that the current magnitude at 70–90 m, where corals were most abundant, fluctuated between 10 and 15 cm s−1 with sporadic, brief pulses >25 cm s−1, and with a clear pulsing (strengthening and weakening) that corresponded with direction changes on a tidal frequency ( Fig. 10 ). At greater depths, the flow was almost stagnant, with little tidal signal and variable direction. These results are in stark contrast to the higher-magnitude currents (up to more than 40 cm s−1) that occur at shallow depths subject to daily tidally forced flows ( Fig. 10 ). Although there were clear differences in the flow rates observed at MCE depths, the observed direction of flow was highly variable and difficult to attribute to tidal or wind-driven processes. Greater resolution in sampling would be needed to determine the relationship between flow direction and reef orientation for MCE habitats. Biodiversity patterns Species diversity Seventy-two species of frondose macroalgae were identified based on morphological characteristics from MCEs in the MHI, including 29 Chlorophyta, 31 Rhodophyta, and 12 Phaeophyceae. Estimates of macroalgal diversity are likely conservative because of taxonomic limitations regarding morphological identifications. For instance, large green algal “sea lettuce” blades from MCEs were all identified morphologically as “ Ulva lactuca .” However, recent molecular analyses revealed that these specimens represent four new species belonging to the genera Ulva and Umbraulva , which cannot be identified using morphological characters alone ( Spalding et al., 2016 ). Nevertheless, the methods used were similar to current taxonomic treatments in Hawai‘i ( Abbott, 1999 ; Abbott & Huisman, 2003 ; Huisman, Abbott & Smith, 2007 ), allowing for comparisons with the better-known shallow-water flora.
Macroalgal communities were found in discrete patches (separated by sand or other benthic habitats) at all MCE depths in the MHI. Examples include expansive meadows of Halimeda kanaloana Vroom in sand, beds of Halimeda distorta (Yamada) Hillis-Colinvaux over hard substrates, as well as monospecific beds of Distromium spp., Dictyopteris spp., Microdictyon spp., Caulerpa spp., and mixed assemblages of other macroalgal species ( Spalding, 2012 ). These MCE macroalgal assemblages are abundant, diverse, and spatially heterogeneous with complex distributional patterns, contributing to heterogeneous structural complexity. In contrast, MCE macroalgal communities in the NWHI tended to be dominated by beds of Microdictyon spp., although ROV video around the banks of the NWHI also shows regions of Sargassum or Dictyopteris species ( Parrish & Boland, 2004 ) (collections not yet available for verification). Of the approximately fifty species of scleractinian corals known from the Hawaiian Islands ( Hoover, 1998 ), ten were recorded by this study from MCEs in the MHI. Four of these— Pocillopora damicornis , P. meandrina , Porites lobata , and Montipora capitata —are common species on adjacent shallow reefs. A previously published phylogeny resolved six Leptoseris species in Hawai‘i: Leptoseris hawaiiensis Vaughan 1907, L. papyracea (Dana 1846), L. scabra Vaughan 1907, L. tubilifera Vaughan 1907, L. yabei (Pillai & Scheer, 1976), and a putative undescribed species, “ Leptoseris sp. 1” ( Luck et al., 2013 ; Pochon et al., 2015 ). However, the reliability of coral phylogenies has been challenged due to limited genetic variation ( Sinniger, Reimer & Pawlowski, 2010 ). In addition to scleractinian corals, eight antipatharian coral species were documented from MCE depths through work associated with this study and published previously ( Wagner et al., 2010 ; Wagner et al., 2011 ; Wagner, 2015a ; Wagner, 2015b ). One of the goals of this research was to identify key parameters that might determine the presence and distribution of MCE habitats elsewhere in Hawaiian waters through the development of a spatial model. This portion of our study has been published previously ( Costa et al., 2015 ), but in summary, depth, distance from shore, euphotic depth and sea surface temperature were identified as the four most influential predictor variables for partitioning habitats among the three genera of corals included in the modeling exercise ( Leptoseris , Montipora , and Porites ). Costa et al. (2015) found that for corals occurring at the shallower (50 m) end of MCEs, hard substrate is necessary, but not sufficient, for colonization. It is less certain whether hard substrate is necessary at greater depths, where some of the Leptoseris beds were found in a density of three or more layers of coral plates deep; whether this is due to accretion on a hard substrate or on a stable soft bottom needs further examination. Additional details of the methods and results from this portion of the study are available in Costa et al. (2015) . Extensive Leptoseris -dominated MCEs with very similar structure, depth and species composition have been identified in two MHI regions: the ‘Au‘au Channel, and off southeastern Kaua‘i ( Fig. 11 ). Although no other Leptoseris -dominated MCEs have yet been located within the Hawaiian Archipelago, similar MCE habitats may exist elsewhere in the MHI. Figure 11: Comparison of Leptoseris -dominated MCE habitats. (A) Kaua‘i and (B) Maui, showing the close similarity in general structure.
DOI: 10.7717/peerj.2475/fig-11

Table 2: Comparison of fish assemblages associated with black coral beds of two Main Hawaiian Islands, Kaua‘i and Maui. Feeding-guild percentages are the number of fish in each guild relative to the total number of fish observed in that island's black coral bed.
                              Maui     Kaua‘i
Total fish observed           2,080    1,322
Number of species observed    60       52
Herbivore                     7%       5%
Planktivore                   68%      67%
Omnivore                      1%       1%
Benthic carnivore             23%      26%
Piscivore                     1%       1%
DOI: 10.7717/peerj.2475/table-2

In addition to corals, nearly 200 specimens from among eight phyla of marine invertebrates (Foraminifera, Porifera, Bryozoa, Annelida, Arthropoda, Cnidaria, Ophiuroidea, and the subphylum Urochordata) were collected from MCEs in the MHI. Unfortunately, many of these groups are poorly known taxonomically, and nearly three-quarters of these specimens remain unidentified. Qualitative comparisons of fish assemblages associated with black coral beds are available for two main islands: Kaua‘i (this study) and Maui ( Boland & Parrish, 2005 ). Both were similar in species number; the Maui survey recorded 60 species and Kaua‘i had 52 ( Table 2 ). A Wilcoxon signed-rank test showed no significant difference ( P = 0.100, W = −1.647) in species abundance. When all fish were categorized by feeding guild, both islands had similar distributions: both were dominated by planktivores, with Maui at 68% and Kaua‘i at 67%, and the next largest group was benthic carnivores, at 23% and 26%, respectively ( Table 2 ). By comparison, black coral MCEs of the Mid-Atlantic Ridge host 33% planktivores and 9% benthic carnivores ( Rosa et al., 2016 ). Figure 12: Heterogeneous reef fish distribution on Leptoseris reefs in the ‘Au‘au Channel. Reef fish distribution on Leptoseris reefs in the ‘Au‘au Channel was heterogeneous, with large areas nearly devoid of fishes (A) punctuated by areas of high fish diversity and abundance (B). The fishes seen in the distance in (A) represent a separate localized area of high abundance. All but two of the fishes visible in (B) belong to endemic species (endemics: Chaetodon miliaris, Pseudanthias thompsoni, Sargocentron diadema, Dascyllus albisella, Holacanthus arcuatus, Centropyge potteri; non-endemic: Forcipiger flavissimus, Parupeneus multifasciatus ). Photos: HURL. DOI: 10.7717/peerj.2475/fig-12 Within the ‘Au‘au Channel site, both divers and submersible observers made repeated anecdotal observations of highly heterogeneous fish diversity and abundance over Leptoseris beds of similar morphology and abundance. Some areas were almost devoid of fishes ( Fig. 12A ), whereas others harbored high levels of both diversity and abundance ( Fig. 12B ). No transects were performed to quantify this preliminary observation, and we can think of no obvious reason why the pattern might exist. However, we feel this observation is interesting enough, and was made consistently enough, to justify noting here in the hope of prompting future research. Elsewhere in the Pacific, MCEs harbor high numbers of species new to science ( Pyle, 2000 ; Rowley, 2014 ). The fish fauna of Hawai‘i has been better documented than that of any other location in the tropical insular Pacific, so new species were not expected. However, at least four undescribed species of fishes have been collected on MCEs in Hawai‘i, including a highly conspicuous butterflyfish ( Prognathodes ) and three other less conspicuous species ( Scorpaenopsis , Suezichthys and Tosanoides ).
These have been determined by experts in the respective taxonomic groups to be undescribed, and are in various stages of formal description. At least one putative new species of scleractinian coral ( Leptoseris ) has been identified (as noted above), and one putative new species and one new record of antipatharian corals for the Hawaiian Archipelago have been recorded ( Wagner et al., 2011 ; Opresko et al., 2012 ; Wagner, 2015a ). Undescribed macroalgae will require molecular characterizations that will likely increase the number and diversity of recognized species. Likewise, the taxonomy of many groups of marine invertebrates is poorly known (see Hurley et al., 2016 ), and many of the unidentified specimens of invertebrates may prove to be new species, once subjected to the same taxonomic scrutiny by appropriate experts that the aforementioned fishes, corals and algae have undergone. Sponge taxonomists have indicated numerous undescribed species and genera from among MCE collections. This is likely the case for the other phyla as well, especially within polychaetes, small crustaceans, and tunicates. Depth zonation The preponderance of new species within MCE habitats reinforces the observation that the species inhabiting deeper MCEs are generally different from those inhabiting shallow reefs. Overall, diversity was lower on MCEs than on nearby shallow reef habitats. This pattern was consistent for macroalgae, corals, macroinvertebrates and fishes. However, within MCE habitats, different taxonomic groups showed different patterns of diversity. Survey results for macroalgae show more species at 70–100 m compared to 40–60 m, with the most distinctive changes in diversity (i.e., the most substantial changes in total number of species at each depth interval) occurring at 80–90 m and 110–120 m depths ( Fig. 13 ). These depths corresponded to ∼3% and 0.5% of SI, respectively, and included depths where large changes in seasonal thermoclines were observed. The water column at most sites was characterized by high clarity and deep penetration of irradiance (10 µmol m−2 s−1 at 110 m depths), although sedimentation from terrigenous sources appeared to reduce macroalgal abundance at a few sites. Figure 13: Total number of macroalgal species (over all sites combined) found at each depth surveyed. Depth of occurrence is based upon collections and visual observations when species-level identifications were verified. Shallower depths (40–60 m) were collected by mixed-gas divers, while depths ≥70 m were collected by submersibles. See Spalding (2012) for collection locations. Data are included in the “AlgaeData” ( Supplemental Information 2 ; Tab 1) worksheet of the Raw Data file. DOI: 10.7717/peerj.2475/fig-13 A similar pattern may exist for scleractinian corals. Three species of corals were found at 30–50 m ( M. capitata , P. damicornis and P. lobata ). P. damicornis and P. lobata were only seen at the shallowest MCEs (<50 m), while M. capitata occurred at greater depths (50–80 m) ( Rooney et al., 2010 ). At these greater depths, M. capitata was most commonly observed in a branching morphology that formed low-relief reefs carpeting tens of km² of sea-floor off the west coast of Maui. Similar reefs have been observed off Kaua‘i and Ni‘ihau, although a plate-like morphology is dominant around O‘ahu. At greater depths, the dominant corals are within the genus Leptoseris .
Starting at a depth of ∼65 m, Leptoseris corals were most commonly encountered, becoming the dominant corals at depths below ∼75 m and continuing in high abundance down to 130 m, with solitary colonies at depths in excess of 150 m (Rooney et al., 2010). Of these, recent evidence indicates that L. hawaiiensis was found exclusively at depths below 115–125 m (Pochon et al., 2015). The latter study also investigated the endosymbiotic dinoflagellate Symbiodinium and resolved three unambiguous haplotypes in clade C, with one haplotype exclusively found at the lower MCE depth extremes (95–125 m) (Pochon et al., 2015). These patterns of host–symbiont depth specialization indicate limited connectivity between shallower and deeper portions of MCEs, and suggest that niche specialization plays a critical role in the host–symbiont evolution of corals at MCE depth extremes. Invertebrate identifications completed thus far indicate that many species from various phyla are deep-water specialists, such as the polychaete Eunice nicidioformis (Treadwell 1906), which was originally described from a specimen collected at 200–300 m in the Hawaiian Islands. Hurley et al. (2016) observed strong zonation of brachyuran crab species by depth off O‘ahu, and this trend is likely to extend to all invertebrate phyla. Remarkably few amphipods were found, and most were parasitic or inquiline species (Longenecker & Bolick, 2007), possibly an artifact of sampling with submersibles (i.e., free-living species may have been swept off during ascent). The pattern of depth stratification was much less apparent among fishes. Depth ranges of all reef-associated fish species known to occur at depths of less than 200 m (n = 445) were obtained through this study and from historical literature, and are included in the "FishData" (Supplemental Information 2; Tab 2) worksheet of the Raw Data file. Among species recorded at depths greater than 30 m (n = 346), 87% (n = 302) also occur at shallower depths (i.e., only 13% (n = 44) of fishes recorded from MCEs are restricted to MCEs). In the Northwestern Hawaiian Islands, Fukunaga et al. (2016) found that fish assemblages at mesophotic depths (27–67 m) had higher densities of planktivores and lower densities of herbivores than comparable shallow reef-fish assemblages between 1 and 27 m. It has been suggested that there may be a consistent and relatively sharp faunal break at around 60 m (Slattery & Lesser, 2012). An analysis of beta-diversity among fishes in the Red Sea found that the rate of species turnover increased with depth (Brokovich et al., 2008); however, that study only extended to 65 m, the shallowest portion of MCEs. Moreover, traditional approaches to quantifying beta-diversity changeover are designed to measure presence/absence data for multiple discrete zones, and would require multiple replicate transects at multiple depth zones across many different habitats and geographic locations (i.e., potentially thousands of transects) to adequately characterize species transition patterns. Instead, the question of where the largest and most substantial species assemblage transitions occur can be addressed by a more holistic approach to known species depth ranges (see Supplemental Information 1 and the sketch below).
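Before the Figure 14 results, a minimal sketch of the species-changeover metric (defined in the Figure 14 caption below) may help. The depth ranges here are illustrative placeholders, and the `changeover` function reflects one straightforward reading of the caption rather than the authors' exact code.

```python
# A minimal sketch of the species-changeover metric described in the
# Figure 14 caption. Each species is a (min_depth, max_depth) range in
# metres; the values below are illustrative, not the study's data.
species_ranges = {
    "sp_A": (0, 35), "sp_B": (10, 80), "sp_C": (40, 120),
    "sp_D": (60, 150), "sp_E": (5, 25), "sp_F": (90, 200),
}

def changeover(depth, ranges, window=10):
    """Percent changeover at `depth`: species whose maximum known depth
    lies within `window` m above the interval, plus species whose minimum
    known depth lies within `window` m below it, as a percentage of all
    species whose depth range spans the interval."""
    drop_out = sum(1 for lo, hi in ranges.values()
                   if depth - window <= hi < depth)
    drop_in = sum(1 for lo, hi in ranges.values()
                  if depth < lo <= depth + window)
    present = sum(1 for lo, hi in ranges.values() if lo <= depth <= hi)
    return 100.0 * (drop_out + drop_in) / max(present, 1)

for d in range(10, 150, 10):
    print(f"{d:4d} m: {changeover(d, species_ranges):5.1f}% changeover")
```

A high value flags a substantial faunal or floral break at that depth, which is how the peaks in Figure 14 should be read.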
Figure 14A, which summarizes data included in the "FishData" (Supplemental Information 2; Tab 2) worksheet of the Raw Data file, reveals that the most substantial faunal transitions in fishes occur in the range of 10–30 m and 110–140 m, and the least substantial transitions occur in the range of 40–60 m, with moderate transitions in the range of 70–100 m. The most substantial floral transitions (from data included in the "AlgaeData" (Supplemental Information 2; Tab 1) worksheet of the Raw Data file) occur between 90–110 m and at 130 m depths (Fig. 14B), indicating that 60 m is not a significant transition for species changeover in macroalgae. Brachyuran crabs show the strongest transitions between 60 and 90 m (Hurley et al., 2016). The diversity of coral species is insufficient to allow the application of this type of analysis; however, applying this method to other taxa and in other regions should provide insight into whether floral and faunal breaks are consistent on a broader taxonomic and geographic scale.

Figure 14: Fish and macroalgal species changeover at 10-m depth intervals. The degree of fish (n = 445) species changeover (A) and macroalgal (n = 72) species changeover (B) at 10-m depth intervals. Values of each bar represent the number of species with a maximum known depth limit within 10 m above each depth interval plus the number of species with a minimum known depth limit within 10 m below each interval, expressed as a percentage of the total species present at the interval. A high value indicates a more substantial break, and a low value represents a less substantial break. Data are included in the "AlgaeData" (Supplemental Information 2; Tab 1) and "FishData" (Supplemental Information 2; Tab 2) worksheets of the Raw Data file. DOI: 10.7717/peerj.2475/fig-14

Endemism

Our findings support previous reports of higher rates of endemism (species that occur only within the Hawaiian Islands) among fishes on MCEs (Pyle, 1996b; Pyle, 2000; Kane, Kosaki & Wagner, 2014). Among 259 species of fishes recorded on MCEs across the Hawaiian Archipelago, 70 (27%) are endemic (inclusive of Johnston Atoll), considerably higher than the 20.5% of endemic fishes across all reef and shore fishes reported for the Hawaiian Archipelago (Randall, 1998). However, with more careful analysis, the trend of increasing endemism with increasing depth within MCEs is even stronger. Based on our surveys, the rate of endemism among reef fishes found exclusively shallower than 30 m (n = 126) was 17%, and the rate of endemism among reef fishes found exclusively deeper than 30 m (n = 42) was 43%. The rate of endemism remained roughly the same (16–17%) for fishes found only shallower than 40 to 80 m depths (in 10-m depth increments), but changed to 44% for fishes found only deeper than 40 m, 41% below 50 m, 50% below 60 m, and 51% for fishes found only deeper than 70 m. This trend appears to be restricted to fishes inhabiting MCEs (rather than reflecting a general trend of increasing endemism with increasing depth), because among fishes restricted to depths greater than 150 m, the rate of endemism is 14% (Mundy, 2005). The proportion of endemic fish species increases even further with increasing latitude across the Archipelago. At the northwestern-most atolls, endemism among MCE fishes reaches 76% (Fig. 15). A minimal sketch of this depth-threshold calculation follows below.
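The sketch below illustrates the endemism-by-depth-threshold calculation just described: for a given threshold, take the species found exclusively deeper (or exclusively shallower) than it and compute the fraction flagged as Hawaiian endemics. The fish records are hypothetical placeholders; the real calculation would run over the 445-species "FishData" worksheet.

```python
# A minimal sketch of the endemism-by-depth calculation described above.
# Records are illustrative placeholders: (name, min_depth_m, max_depth_m,
# is_endemic).
fishes = [
    ("fish_1", 0, 25, True), ("fish_2", 5, 60, False),
    ("fish_3", 45, 130, True), ("fish_4", 70, 160, True),
    ("fish_5", 0, 18, False), ("fish_6", 80, 140, False),
]

def endemism_rate(records, threshold, deeper=True):
    """Percent endemics among species restricted to one side of `threshold`."""
    if deeper:
        group = [e for (_, lo, hi, e) in records if lo > threshold]
    else:
        group = [e for (_, lo, hi, e) in records if hi < threshold]
    return 100.0 * sum(group) / len(group) if group else float("nan")

for t in (30, 40, 50, 60, 70):
    print(f"> {t} m only: {endemism_rate(fishes, t):.0f}% endemic")
```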
This 76% figure represents one of the highest rates of endemism reported for any marine ecosystem, which could be due to cooler water temperatures limiting the northward distribution of tropical species (Kane, Kosaki & Wagner, 2014). Figure 16 shows a comparison of overall rates of endemism among fishes in both MCEs and shallow reefs of the NWHI and ‘Au‘au Channel, against general rates of endemism (mostly biased toward shallow reefs) for the tropical Indo-Pacific and Eastern Pacific. The trend towards elevated endemism is even stronger when relative abundance is taken into consideration. Not only are more endemic species found on MCEs, but the endemics also tend to be the most abundant species. At the northernmost end of the NWHI, the relative abundance of endemic reef fishes exceeds 92% (Kane, Kosaki & Wagner, 2014), and even reaches 100% in some places (Kosaki et al., 2016). This pattern is also evident in the MHI, as illustrated by Fig. 12B (taken at 90 m in the ‘Au‘au Channel), in which all but two of the hundreds of fishes are endemics. This pattern may represent a combination of both depth and latitudinal gradients, and ongoing quantitative surveys throughout the Hawaiian Archipelago should reveal more detailed interpretations of these patterns.

Figure 15: Proportion of endemic reef fish species in mesophotic fish communities of the NWHI. DOI: 10.7717/peerj.2475/fig-15

Figure 16: Proportion of endemic coral reef fish species across the tropical Indo-Pacific by island/region. Sources: Randall, 1998; Moura & Sazima, 2000; Allen, 2008; Floeter et al., 2008; Kane, Kosaki & Wagner, 2014; this study. DOI: 10.7717/peerj.2475/fig-16

Patterns of endemism among other groups (particularly macroalgae and marine invertebrates) are more difficult to quantify. The full extent of endemism and the broader diversity within the MCE flora can only be determined after molecular studies are conducted and data are gathered from similar MCE habitats elsewhere in the Pacific. Until the unidentified specimens of invertebrates from MCEs are examined by appropriate experts, and comparable sampling is conducted elsewhere in the Pacific, it will not be possible to determine proportions of endemism among marine invertebrates on Hawaiian MCEs.

Figure 17: Temperature log from stained coral. DOI: 10.7717/peerj.2475/fig-17

Population dynamics

We hypothesized that growth rates (the radial extension of the carbonate plate) of Leptoseris sp. corals, the dominant benthic organisms at depths of 70–150 m in the Main Hawaiian Islands, were similar to the only published growth rate for L. fragilis from the Red Sea (0.2–0.8 mm yr−1) (Fricke, Vareschi & Schlichter, 1987). Because direct measurement via submersible was unlikely to have the precision to measure such rates over the period of our study, colonies were stained for later recovery and analysis. The stained colony recovered from a depth of 83 m had stain appearing in only two marked patches of the skeleton, making it difficult to conclusively describe the complete radial growth of the coral plate. X-radiographic imaging of entire Leptoseris test colonies did not provide the resolution required to discern banding patterns. Additional colonies examined using CT scanning of the entire plate yielded excellent images, clearly revealing fine banding parallel to the outer edge of the colony.
However, Δ14C analysis showed that all samples were younger than the 14C peak of the early 1960s that resulted from nuclear weapons testing. This result indicated that the bands are approximately an order of magnitude too closely spaced to be annual, but the cause of the banding remains unclear. The edge adjacent to the stain on the marked colony showed an addition of 12 to 13 new bands, but these did not correspond to the two-year seasonal temperature cycle from the data logger deployed when the coral was marked (Fig. 17). This first marked colony suggests some type of monthly banding cycle, so the additional colonies stained in situ will be an important part of future studies. A series of U/Th dates from eight representative portions of the stained colony indicates a colony age of ∼15 years (Table 3). Although the 2-σ uncertainty for U/Th dating is generally ±0.1 years, each sub-sample integrates material from several years, and different sides of the colony extended different distances from the center. Also, no sample was collected from the exact center of the colony; based on the age difference between the two samples nearest the center (LH1, LH2), approximately another 2.2 years was added to the age of sample LH1 to determine a final colony age of ∼14.8 years at the time of recovery. The mean of several radius measurements is 14.9 cm, yielding a mean growth rate of ∼1 cm yr−1, more than an order of magnitude faster than that reported for L. fragilis.

Table 3: Ages of Leptoseris sp. colony samples based on uranium/thorium (U/Th) dating techniques. The colony was marked in December 2007 and sampled in January 2010.

  Sample | 234U/238U activity | 230Th/238U activity     | Initial δ234U | Age (yr) before 2012.75 | Estimated sample age (yr)
  LH1    | 1.1467 ± 0.0002    | 0.000160 ± 7.1 × 10−7   | 146.7 ± 0.23  | 15.3 ± 0.07             | 14.8
  LH2    | 1.1470 ± 0.0002    | 0.000137 ± 7.0 × 10−7   | 147.0 ± 0.19  | 13.1 ± 0.07             | 10.4
  LH3    | 1.1470 ± 0.0002    | 0.000132 ± 9.5 × 10−7   | 147.0 ± 0.17  | 12.6 ± 0.09             | 9.9
  LH4    | 1.1469 ± 0.0002    | 0.000138 ± 1.1 × 10−6   | 146.9 ± 0.23  | 13.1 ± 0.11             | 10.4
  LH5    | 1.1469 ± 0.0003    | 0.000061 ± 5.6 × 10−7   | 146.9 ± 0.27  | 5.8 ± 0.05              | 3.1
  LH6    | 1.1469 ± 0.0002    | 0.000053 ± 3.3 × 10−7   | 146.9 ± 0.18  | 5.0 ± 0.03              | 2.3
  LH7    | 1.1471 ± 0.0002    | 0.000138 ± 8.6 × 10−7   | 147.1 ± 0.24  | 13.2 ± 0.08             | 10.5
  LH8    | 1.1468 ± 0.0002    | 0.000112 ± 7.3 × 10−7   | 146.8 ± 0.18  | 10.7 ± 0.07             | 8.0

DOI: 10.7717/peerj.2475/table-3

Logistical constraints associated with working at mesophotic depths severely limited the number of specimens available for life-history analysis (37 Centropyge potteri, 33 Ctenochaetus strigosus, and 33 Parupeneus multifasciatus), and the number of length estimates obtained from laser-videogrammetry surveys (21 C. potteri, 28 C. strigosus, and 90 P. multifasciatus). Thus, the results presented here should be considered preliminary and did not warrant rigorous statistical analysis. Table 4 presents densities, average lengths, and life-history parameters for mesophotic populations from the ‘Au‘au Channel and for previously studied shallow-water populations from across the main Hawaiian Islands. The parameter values were not consistently higher or lower in MCEs compared to shallow reefs. The net effect of these differences, when combined with size-structure data, was predicted by the Ricker model (Everhart & Youngs, 1992), which we modified as described in the detailed ‘Materials and Methods’.
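As a hedged illustration of the life-history inputs that feed the modified Ricker model just mentioned, the sketch below evaluates the MCE length-at-age, length-weight, and length-fecundity relations for C. potteri using the parameters published in Table 4 (below). It shows only how the fitted curves convert age into per-fish weight and batch fecundity; it is not the authors' modified Ricker implementation.

```python
import math

# Illustrative evaluation of the Table 4 (below) MCE relations for
# Centropyge potteri. These curves supply the per-fish quantities that a
# Ricker-type production model combines with size-structure data.

def length_at_age(t_days):
    # von Bertalanffy growth, MCE population (L in mm TL, t in days)
    return 103.18 * (1 - math.exp(-0.01196 * (t_days + 146.14)))

def weight(L):
    # length-weight relation (W in g, L in mm)
    return 4.99e-5 * L ** 2.877

def batch_fecundity(L):
    # length-batch fecundity relation (eggs per spawning event)
    return 5.494e-12 * L ** 7.6343

for age in (100, 365, 730):
    L = length_at_age(age)
    print(f"age {age:4d} d: L = {L:5.1f} mm, W = {weight(L):5.1f} g, "
          f"BF = {batch_fecundity(L):8.0f} eggs")
```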
The results presented in Table 5 indicate that biomass and egg production estimates were lower at MCE depths; estimates for shallow depths are at least an order of magnitude higher for all except the egg production of P. multifasciatus. The parameters used in the Ricker model interact in complicated ways, making it difficult to determine the reasons for the differences between MCEs and shallow depths. Given the admittedly preliminary nature of our results, we are unwilling to speculate on the cause(s) of these differences. Nevertheless, the possibility that biomass production and reproductive output of exploited fish populations are lower in MCEs deserves full consideration in future fishery management and habitat conservation efforts.

Broad trophic characterizations

The relative representation of different trophic groups of fishes on shallow reefs and MCEs in the NWHI is illustrated in Fig. 18. Shallow reefs were numerically dominated by herbivores and mobile invertivores, whereas MCE fish communities were numerically dominated by planktivores (see the "NWHIFishTrophic" (Supplemental Information 2; Tab 3) worksheet of the Raw Data file).

Table 4: Life history and population characteristics of exploited fishes at MCE and euphotic depths. Lengths (L) as fork length (FL) or total length (TL) are in mm, weights (W) are in g, time (t) is in days, and batch fecundity (BF) is the number of eggs. MCE values represent the ‘Au‘au Channel; shallow values are from previous studies throughout the main Hawaiian Islands (Longenecker & Langston, 2008; Langston, Longenecker & Claisse, 2009).

C. potteri (TL):
  Density (#/m2): MCE 0.0024; shallow 0.0120
  Mean L: MCE 77.7; shallow 70.0
  L–W: MCE W = 4.99 × 10−5 (L)^2.877; shallow W = 2.28 × 10−5 (L)^3.053
  Growth: MCE Lt = 103.18(1 − e^(−0.01196(t + 146.14))); shallow Lt = 127(1 − e^(−0.00228(t + 63.9)))
  ♀L50: MCE 73; shallow 54
  L–BF: MCE BF = 5.494 × 10−12 (L)^7.6343; shallow BF = 0.0118(L)^2.596
  Size-specific sex ratio: MCE %♀ = 232.96 − 1.88(L); shallow %♀ = 405.5 − 4.44(L)

C. strigosus (FL):
  Density (#/m2): MCE 0.0025; shallow 0.0524
  Mean L: MCE 96.7; shallow 99.2
  L–W: MCE W = 2.13 × 10−5 (L)^3.037; shallow W = 6.51 × 10−5 (L)^2.8499
  Growth: MCE Lt = 129.86(1 − e^(−0.00439(t + 96.52))); shallow Lt = 142.62(1 − e^(−0.00717(t − 60.31)))
  ♀L50: MCE 79; shallow 84
  L–BF: MCE BF = 1.5889 × 10−29 (L)^16.1377; shallow BF = 1.2766 × 10−5 (L)^4.1663
  Size-specific sex ratio: MCE %♀ = 1239.1 − 10.9(L); shallow %♀ = 5.99 + 85.49e^(−0.5((L − 95.58)/26.92)^2)

P. multifasciatus (FL):
  Density (#/m2): MCE 0.0287; shallow 0.0442
  Mean L: MCE 122.5; shallow 133.8
  L–W: MCE W = 2.02 × 10−5 (L)^2.970; shallow W = 3.45 × 10−5 (L)^2.868
  Growth: MCE Lt = 167.646e^(−e^(−0.0322504(t − 87.2819))); shallow Lt = 303(1 − e^(−0.00207(t + 49.4)))
  ♀L50: MCE 136; shallow 145
  L–BF: MCE BF = 1.8865(L)^1.7271; shallow BF = 0.0018(L)^3.092
  Size-specific sex ratio: MCE %♀ = 346.76 − 1.78(L); shallow %♀ = 141.3 − 0.617(L)

DOI: 10.7717/peerj.2475/table-4

Isotopic analysis of benthic reef fishes from different feeding guilds in both shallow and MCE habitats of the MHI revealed that carbon isotopic (δ13C) values for all feeding guilds overlapped across shallow and MCE depth ranges, but some significant differences were observed (Bradley et al., in press). Ranges of δ13C values in planktivorous fish were smaller than those of benthic invertivores in both shallow and MCE communities. Omnivores showed a greater range in carbon isotopic composition at shallow depths than at MCE depths, and pooled data across O‘ahu and Maui revealed significantly lower δ13C values for individuals from MCEs compared to shallow individuals (Bradley et al., in press). In addition, significant differences in δ13C values were found between depths for benthic invertivores, but not for planktivores.
Shallow omnivore δ15N values were slightly (but significantly) higher than those of MCE omnivores, but no significant differences for the overall populations of invertivores and planktivores were found between depths (Bradley et al., in press). Nitrogen isotopic (δ15N) values for the majority of source and trophic amino acids (Popp et al., 2007) were not significantly different between depths for any taxon, with the exception of Centropyge (Pomacanthidae) and Sargocentron (Holocentridae), where δ15N values were lower in MCE fishes compared to shallow fish (Bradley et al., in press). No significant differences in trophic position calculated from amino acid isotopic compositions were found with increasing fish standard length between islands in any feeding guild. Between depths, amino acid-based trophic positions of MCE benthic invertivores were slightly but significantly higher than those from shallow depths (Bradley et al., in press). For omnivores and planktivores, these results indicate that changes in nutrient sources over the depth range studied did not affect their position within the food web. The small but significant elevation in the trophic position of benthic invertivore feeding guilds from MCEs most likely resulted from consumption of fewer macroalgal grazers on MCEs compared to shallow reefs (Bradley, 2013; Bradley et al., in press).

Table 5: Estimates of biomass and egg production for exploited fishes at mesophotic and euphotic depths. MCE values represent the ‘Au‘au Channel; shallow values are from previous studies throughout the main Hawaiian Islands (Longenecker & Langston, 2008; Langston, Longenecker & Claisse, 2009).

                                             MCE      Shallow
  C. potteri         g/m2/yr                 0.0773   1.9861
                     eggs/spawning event/m2  7        26
  C. strigosus       g/m2/yr                 0.0382   6.9699
                     eggs/spawning event/m2  2        340
  P. multifasciatus  g/m2/yr                 1.9702   19.9779
                     eggs/spawning event/m2  133      155

DOI: 10.7717/peerj.2475/table-5

Figure 18: Comparison of fish assemblage trophic structure between shallow and mesophotic reefs in the NWHI. NIH, Nihoa; FFS, French Frigate Shoals; MID, Midway Atoll; PHA, Pearl and Hermes Atoll; KUR, Kure Atoll. DOI: 10.7717/peerj.2475/fig-18

Isotopic results show that individual fish species generally do not differ greatly in trophic position between the two reef ecosystems (Bradley, 2013; Bradley et al., in press), indicating that managing reef fish species as one group across depths may be appropriate. An exception to this general observation was found in a study of the diet and movements of Galapagos sharks, Carcharhinus galapagensis (Snodgrass & Heller, 1905), and giant trevally, Caranx ignobilis (Forsskål, 1775), from an MCE at Pearl and Hermes Atoll in the NWHI. Using stable isotopic analysis and acoustic telemetry to study diet and movements, Papastamatiou et al. (2015) found that giant trevally occupied a wide range of trophic positions, potentially due to intraspecific competition. However, carbon isotopic compositions of several species of benthic-feeding fish indicate that carbon flow in the two ecosystems may be distinct (Papastamatiou et al., 2015; Bradley et al., in press). While this does not alter the relative trophic position of the fish, it implies that caution should be taken when considering shallow reefs and MCEs as a single ecosystem, as the flow of biomass may be different in the two ecosystems.
MCEs as refugia

Much has been written about the potential for MCEs to serve as refugia for shallow-reef species (Hughes & Tanner, 2000; Riegl & Piller, 2003; Bongaerts et al., 2010; Hinderstein et al., 2010; Bridge & Guinotte, 2013; Kahng, Copus & Wagner, 2014; Holstein et al., 2015). Most of the discussion has focused on MCEs having reduced susceptibility to coral bleaching events due to reduced irradiance and increased thermal stability, as well as protection from storms and mechanical disturbances (such as anchor damage). To some extent, especially in the MHI where the horizontal distance of many MCEs from shore is large, distance may confer some protection from coastal impacts such as sedimentation and pollution. In some cases, MCEs may offer protection from fishing pressure, particularly for fisheries that rely on divers or are otherwise impractical at greater depths (Lindfield et al., 2016). As summarized by Bridge & Guinotte (2013), in order for MCEs to be considered refugia for inhabitants of shallow reefs, MCEs must harbor populations of species that are impacted on shallow reefs, in ways that would allow propagules from MCE populations to colonize shallow reef habitat (i.e., adequate genetic connectivity, although the potential for propagule dispersal and settlement is not the only determinant of genetic connectivity). MCEs must also be more resilient than shallow reefs to the stresses that affect them. Bongaerts et al. (2010) reviewed the literature regarding the 'deep reef refugia' hypothesis for Caribbean reefs and concluded that it is more likely to apply to "depth generalist" species and may be of greater importance in the upper range of MCEs (30–60 m). This was exemplified by the coral Seriatopora hystrix in Okinawa, which was extirpated from shallow water and later discovered at 35–47 m (Sinniger, Morita & Harii, 2013). A primary goal of our research was to understand the extent of connectivity of species across the archipelago, and both the genetic and trophic relationships between MCEs and nearby shallow reef habitats. In the first genetic comparison of shallow and MCE reef fishes, the damselfish Chromis verater showed no population structure across depths (Tenggardjaja, Bowen & Bernardi, 2014). Thus, the initial genetic data and the high degree of shared fish species between shallow reefs and MCEs (84%, when considering the full depth range of MCEs) indicate that MCEs may function as refugia for some impacted populations on shallow reefs, especially for fishes (Lindfield et al., 2016). However, biomass and egg production estimates for three exploited species (C. potteri, C. strigosus, and P. multifasciatus) from this study (Table 5) are consistently lower for MCE populations, even though the estimates for shallow populations incorporate the effects of fishing mortality. Moreover, patterns of larval dispersal between and among shallow and MCE populations are not well known. Rather than being viewed as a source for shallow-water reef fish, the MCE populations may require more protection than their shallow-water counterparts. The vertical distribution of scleractinian coral species in the Hawaiian Islands is well documented for common, conspicuous species, whereas rare, cryptic, or hard-to-identify species are less well understood. Rooney et al. (2010) and Luck et al. (2013) both reported depth ranges for scleractinian species.
Anecdotal observations from this study suggest a similar pattern of species distribution, with greater species overlap between shallow reefs and upper MCEs (30–60 m) than between shallow reefs and lower MCEs (>60 m). As such, the lower MCE populations do not serve as effective refugia for shallow-reef scleractinian species. This pattern of coral segregation by depth has also been reported by Kahng, Copus & Wagner (2014). Corals in the upper MCE may in some cases serve as refugia for shallower populations, as modeling studies indicate high larval-mediated connectivity (Thomas et al., 2015; Holstein, Smith & Paris, 2016). Few studies have tested genetic connectivity across depths for corals. Based on a microsatellite survey of Porites astreoides in the West Atlantic, Serrano et al. (2016) showed high connectivity between shallow and deep reefs in Bermuda and the U.S. Virgin Islands, but some evidence of population structure between shallow and deep reefs in Florida. Van Oppen et al. (2011) showed a restriction of gene flow between shallow and deep colonies of the brooding coral Seriatopora hystrix, but also some evidence that larvae from deep reefs may seed shallow reefs. Hence the evidence for coral refugia is equivocal at this time, and studies in Hawai‘i would be valuable contributions to this debate. Recently, the Caribbean coral Porites astreoides was shown to have similar reproductive characteristics across depths. Holstein, Smith & Paris (2016) modeled the vertical connectivity of two Caribbean species, P. astreoides (a brooder) and Orbicella faveolata (a broadcaster), and predicted significant contributions from both species, with a high local contribution from the brooder (Holstein et al., 2015). However, both of these studies considered shallow reefs and the upper MCE, not the lower MCE, leaving potential connectivity with the lower MCE untested. Prasetia, Sinniger & Harii (2016) examined the reproductive characteristics of Acropora tenella from the upper MCE and found them to be similar to those of shallow-reef acroporids, but they did not examine lower MCE colonies. While MCEs in Hawai‘i may fulfill the first requirement for refugia (at least for fishes and some corals), the resilience of Hawaiian MCEs as compared to their shallow-reef counterparts remains unknown. Several abiotic factors (e.g., exposure to high light or temperature fluctuations) would intuitively cause more stress for shallow reef systems than for MCEs, but the impact of these stressors on coral resilience is not always straightforward. Corals with regular exposure to high temperatures or elevated PAR may be more tolerant of extreme conditions than corals without such exposure (West & Salm, 2003; Grimsditch & Salm, 2006). Conversely, corals inhabiting more stable and cooler conditions on MCEs may be less resilient to temperature changes than their shallow-reef counterparts. The impacts of climate change on MCEs are not yet understood, and additional research and predictive modeling are needed before assumptions can be made about the resilience of MCEs as compared to shallow reefs. While vertical thermal stratification maintains some MCEs at lower temperatures than shallow reefs, MCEs may still be vulnerable to the thermal anomalies that drive bleaching on shallow reefs.
During a September 2014 mass-bleaching event in the NWHI, water temperatures of 24 °C were recorded by divers at 60 m at Lisianski, approximately 4 °C higher than is typical for that depth. Mesophotic coral communities may therefore be as vulnerable to bleaching events as adjacent shallow reefs. However, our ability to predict the oceanographic conditions that drive thermal stress at MCE depths is more limited than it is for shallow reefs. This threat has potentially severe implications for the numerous undescribed species of algae collected from the same MCE site at Lisianski; with an increased frequency and severity of warm-water thermal anomalies, we may be at risk of losing some of these species to climate change before we even document their existence. Corals in MCEs grow at considerably lower SI, and are potentially near the lower limit of light intensity required for photosynthesis. Water clarity was identified as a key factor in predicting the presence of MCEs, with less well-developed MCEs in areas with less light penetration (Costa et al., 2015). It is conceivable that a small increase in turbidity near the surface (e.g., from coastal activities that either produce excess sedimentation directly or increase nutrient levels, causing increased plankton densities) could have greater impacts on MCEs than on their shallow-reef counterparts. As has been suggested previously (Stokes, Leichter & Genovese, 2010; Lesser & Slattery, 2011), the potential for MCEs to serve as refugia for shallow reefs likely depends on multiple factors and should be evaluated on a case-by-case basis for different taxa and different sources of disturbance. One important question that has not been addressed in the previous literature is the extent to which shallow reefs might serve as refugia for MCEs. Given the uncertainties about the relative resilience of, and insulation from, the wide range of environmental stress factors that impact coral-reef environments in MCEs and shallow coral reefs, it is premature to make any assumptions about which habitat is more vulnerable to disturbance or which might serve as a refuge. Thus, when future studies assess the potential for MCEs to serve as refugia to replenish disturbed shallow reefs, they should also consider the implications of the reverse relationship.

Management implications

Our multi-year collaborative research on MCEs across the Hawaiian Archipelago has provided new insights on basic geophysical characteristics and patterns of biodiversity, and information on genetic and trophic connectivity, that can strengthen the foundations for management and conservation of MCEs and coral-reef environments in general. The ‘Au‘au Channel hosts the most extensive complex of MCE coral and macroalgal communities in Hawai‘i. Growing on the island's deep slope, its fragile structure and biodiversity are currently isolated from some anthropogenic impacts, but MCEs should be considered in all future coastal zone management plans for the region. Currently, part of the ‘Au‘au Channel is listed as a Habitat Area of Particular Concern by the National Marine Fisheries Service. Given its unique geomorphology and biotic characteristics, state or federal managers should consider designating a fully protected area. The discovery of similar MCEs off Kaua‘i and Ni‘ihau, and in the NWHI, confirms that such environments occur elsewhere within the Hawaiian Archipelago and likely in the broader Pacific.
Although these sites have not been studied to the same extent as the ‘Au‘au Channel, it is clear from diver and remote-camera surveys that the corals form fragile complexes and would be easily damaged by bottom contact. There also appears to be a rich diversity of mesophotic macroalgae in the NWHI. The need to fully protect the ‘Au‘au Channel and other MCE hotspots within the Hawaiian Archipelago is underscored by the fact that upper MCEs (30–60 m) are likely to serve a more prominent role as a refuge for corals, while the lower MCEs (>60 m) harbor unique assemblages of species with higher rates of endemism. As noted previously, MCEs may also serve as spatial refugia for some overexploited shallow-reef fishes, but this potential must be evaluated on a species-by-species basis. Our research reinforces the theme that MCEs require clear water, likely because higher levels of PAR can then reach greater depths. These findings indicate that MCEs are less resilient than their shallow-reef counterparts to certain stresses, particularly reductions in surface-water clarity. Assessments of the impact of increased turbidity (e.g., from shore-based run-off or from nutrient enrichment and its effect on plankton densities) should not be limited to shallow coral-reef ecosystems directly exposed to impaired water quality; vast expanses of MCE habitat at the limits of photosynthetic production could also be heavily impacted. Examples of dead, coralline algal-covered plate coral formations were observed during this project and highlight the reality of these threats to MCEs. Other potential impacts on MCE habitats (e.g., fishing pressure, cable laying, placement of permanent moorings, and dredging) should be considered in coastal zone management activities, especially while the full extent of MCEs has yet to be documented. For example, with the increasing push for renewable energy and integrated electric grids between islands (undersea cables, offshore windmills, wave-energy structures, or other renewable technologies), areas with shallow to intermediate bathymetry, protection from large swells, and greater distance from shore, such as the ‘Au‘au Channel, are seen as ideal locations for renewable energy structures. Planning for such activities needs to consider the vulnerabilities of MCEs. In addition to thermal stress (discussed above), climate-driven threats to MCEs may come in the form of increased frequency and severity of large storms (Emanuel, 2005). Currently, major storms that form in the Eastern Tropical Pacific usually pass south of Hawai‘i, but increasing sea surface temperatures may allow these storms to move north (Pachauri & Meyer, 2015). The high benthic cover by corals (Leptoseris spp.) on MCEs in the ‘Au‘au Channel is in part facilitated by shelter from wave energy provided by the islands of Maui, Lana‘i, Kaho‘olawe, and Moloka‘i (Costa et al., 2015). A direct hit by a major storm may cause enough mixing of warm, stratified surface waters to cause thermal stress at depth, and even a small increase in benthic shear stress at depth may damage or destroy fragile MCE corals. Finally, ocean acidification may threaten not only corals but also crustose coralline algae (Jokiel et al., 2008). Rhodolith beds are common features of NWHI and MHI MCEs (Spalding, 2012), and both crustose coralline algae and rhodoliths serve as attachment substrata for MCE algae and antipatharian corals.
While there is little that local resource managers can do to alleviate large-scale climate events, it is nevertheless important to understand the potential impacts of climate change when establishing conservation priorities. We still know little about MCEs, even in well-studied areas such as Hawai‘i, and we do not yet understand the threats to, or the importance of, MCEs in a changing climate. Research needs to be conducted to better characterize whether MCEs will be more or less vulnerable to warming and acidification, and whether they will increase the resilience of coral reefs or individual taxa. A better understanding of basic biodiversity characteristics (e.g., discovering, documenting and describing new species; improving our understanding of depth ranges and endemism; and determining levels of genetic and trophic connectivity between populations of conspecifics in shallow-reef and MCE habitats) is critical for making informed management decisions. The functional role of MCEs, beyond direct connectivity, must be integrated into future management plans or actions addressing coral reefs. While this project established core baselines for community dynamics within several regions of the Hawaiian Archipelago, it is vitally important to investigate MCE communities elsewhere throughout the Pacific to provide essential context for comparisons of patterns and processes. Most areas of the Pacific remain completely unexplored. This lack of understanding has profound implications for US waters within the Pacific, given the recent listing of 15 coral species under the Endangered Species Act and the proposal of an additional three species for listing. Several of these species are known to exist in MCEs (Bare et al., 2010), and others may be present but not yet documented. Clearly, the documentation of MCEs is essential for prudent management of Hawaiian resources, and comparison with other Pacific habitats will lead to a better understanding of the unique biodiversity within Hawaiian waters.

Supplemental Information

Detailed Materials and Methods. DOI: 10.7717/peerj.2475/supp-1
Detailed methods with Track Changes showing alterations from the original submission. DOI: 10.7717/peerj.2475/supp-2
Raw data from this study (Tab 1: Algae Depth Data; Tab 2: Fish Depth Data; Tab 3: NWHI Fish Trophic Data; Tab 4: Temperature Depth Datasets; Tab 5: Temperature Depth Data). DOI: 10.7717/peerj.2475/supp-3

Additional Information and Declarations

Competing Interests
The authors declare there are no competing interests.

Author Contributions
Richard L. Pyle, Ken Longenecker, Frank A. Parrish and John Rooney conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, and reviewed drafts of the paper. Raymond Boland, Holly Bolick, Corinne Kane, Randall K. Kosaki, Ross Langston, Anthony Montgomery and Daniel Wagner conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, and reviewed drafts of the paper. Brian W. Bowen and Heather L. Spalding conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, and reviewed drafts of the paper. Christina J. Bradley and Brian N. Popp conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, and reviewed drafts of the paper.
Celia M. Smith conceived and designed the experiments, analyzed the data, wrote the paper, and reviewed drafts of the paper.

Animal Ethics
The following information was supplied relating to ethical approvals (i.e., approving body and any reference numbers): All vertebrates (fishes) were collected in accordance with University of Hawaii IACUC protocol 09-753-5, "Phylogeography and Evolution of Reef Fishes" (PI: Dr. Brian Bowen), including collection and euthanization by spear.

Field Study Permissions
The following information was supplied relating to field study approvals (i.e., approving body and any reference numbers): The State of Hawai‘i Department of Land and Natural Resources developed Special Activity Permits for the University of Hawai‘i and the National Marine Fisheries Service for work related to this project that occurred within State of Hawai‘i waters. All sampling procedures and experimental manipulations were reviewed as part of obtaining the field permit. Special Activity Permits do not have reference numbers. Papahānaumokuākea Marine National Monument provided research permits for field work in the Northwestern Hawaiian Islands.

Data Availability
The following information was supplied regarding data availability: Global Biodiversity Information Facility:

Funding
This paper includes results of research funded by the National Oceanic and Atmospheric Administration (NOAA) Center for Sponsored Coastal Ocean Research (Coastal Ocean Program) under award NA07NOS4780188 to the Bishop Museum, NA07NOS4780187 and NA07NOS478190 to the University of Hawai‘i, and NA07NOS4780189 to the State of Hawai‘i; submersible support provided by the NOAA Undersea Research Program's Hawai‘i Undersea Research Laboratory (HURL); funding from the NOAA Papahānaumokuākea Marine National Monument to the Bishop Museum and the University of Hawai‘i Department of Botany; and funding from the NOAA Coral Reef Conservation Program research grants program administered by HURL under awards NA05OAR4301108 and NA09OAR4300219, project numbers HC07-11 and HC08-06. Staff and NOAA ship vessel time for three research cruises were provided by the National Marine Fisheries Service, Pacific Islands Fisheries Science Center. Additional funding for this project was provided by the State of Hawaii, Department of Land and Natural Resources, Division of Aquatic Resources. Support for additional rebreather-based surveys off Hawai‘i and elsewhere in the Pacific was provided by the Association for Marine Exploration. Life-history analysis of shallow-water fishes was funded by the Hawaii Coral Reef Initiative and the Dingell-Johnson Sportfish Restoration program. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Acknowledgements
We dedicate this work to our friend, colleague, and co-author John Rooney, whose commitment to documenting MCEs in Hawai‘i was inspirational to us all. Our sincere gratitude goes to Kimberly Puglise for her constant support and encouragement. We thank Matt Ross, Joshua Copus, Robert Whitton, Sonia Rowley, Dave Pence, Jason Leonard, Brian Hauk, Keolohilani Lopes, Christian Clark, Greg McFall, Elizabeth Kehn, Sarah Harris, Paul Murakawa, Brett Schumacher, Russell Sparks, Skippy Hau, Brad Varney, Linda Marsh, Bill Gordon, Kim Tenggardjaja, and the staff, officers and crew of the NOAA Ships Hi‘ialakai and Ka‘imikai-o-Kanaloa. Many thanks to the pilots and support staff of HURL for their amazing assistance with the submersibles and ROV.
Jacqueline Padilla-Gamino, Xavier Pochon, Zachary Forsman, Melissa Roth, Robert Toonen, and Ruth Gates were instrumental in the coral molecular analyses. Alison Sherwood, Roy Tsuda, Isabella Abbott, and Gerald Kraft assisted with macroalgal identifications. The dating analysis of the marked Leptoseris hawaiiensis coral was conducted by the Cohen Laboratory at the Woods Hole Oceanographic Institution. Sonia Rowley provided very helpful editorial comments and insights. The findings and conclusions in this article are those of the author(s) and do not necessarily represent the views of the US Fish and Wildlife Service or NOAA. This is School of Ocean and Earth Science and Technology contribution number 9814, and Contribution No. 2016-016 to the Hawaii Biological Survey.
A team of sixteen researchers has completed a comprehensive investigation of deep coral-reef environments, known as mesophotic coral ecosystems, throughout the Hawaiian Archipelago. The study, published in the open-access journal PeerJ, spanned more than two decades and involved a combination of submersibles, remotely operated vehicles, drop-cameras, data recorders, and advanced mixed-gas diving to study these difficult-to-reach environments. The researchers documented vast areas of 100% coral cover and extensive algal communities at depths of 50-90 meters (165-300 feet) extending for tens of square kilometers, and found that the deep-reef habitats are home to many unique and distinct species not found on shallow reefs. The findings of the study have important implications for the protection and management of coral reefs in Hawaii and elsewhere. "This is one of the largest and most comprehensive studies of its kind," said Richard Pyle, Bishop Museum researcher and lead author of the publication. "It involved scientists in many different disciplines and from multiple federal, state, and private organizations working together with a range of different technologies across the entire Hawaiian Archipelago." The primary objective of the study was to characterize deep coral reef habitat, known as "mesophotic coral ecosystems" or the coral-reef "Twilight Zone". Coral reefs at depths of 30 to 150 meters (100 to 500 feet) are among the most poorly explored of all marine ecosystems on Earth. Deeper than conventional scuba divers can safely venture, and shallower than most submersible-based exploration, these reefs represent a new frontier for coral-reef research. To document these elusive deep coral reefs, the team used a wide range of advanced technology, including multibeam bathymetry mapping, mixed-gas closed-circuit rebreather diving, towed and remotely operated camera systems, a variety of environmental sensors for recording light, temperature, water movement and other parameters, and two research submersibles operated by the Hawai'i Undersea Research Laboratory. One of the novel approaches taken during the project was to combine rebreather divers and submersibles on coordinated dives. "Free-swimming divers and submersibles don't often work side-by-side on scientific research projects," said Pyle. "Submersibles can go much deeper and stay much longer, but divers can perform more complex tasks to conduct experiments and collect specimens. Combining both together on the same dives allowed us to achieve tasks that could not have been performed by either technology alone." A major focus of the study was to document extensive areas of 100% coral cover at depths of 90 meters (300 feet) or more off the islands of Maui and Kaua'i. In particular, vast expanses of continuous coral cover, extending for tens of square kilometers, exist at many sites in the 'Au'au Channel off the southwest side of Maui. The reefs are dominated by stony, reef-building corals in the genus Leptoseris, a plate-like coral specialized for deep-reef environments. "These are some of the most extensive and densely populated coral reefs in Hawai'i," said Anthony Montgomery, a U.S. Fish and Wildlife biologist and co-author of the study who previously represented the Hawai'i State Department of Land and Natural Resources during most of the project. "It's amazing to find such rich coral communities down so deep."
In addition to the corals, the area is also home to extensive algae meadows that support unique communities of fishes and invertebrates. More than seventy species of macroalgae inhabiting the deep reefs were identified during the study, and several new species have yet to be assigned formal scientific names. Both corals and algae depend on sunlight to drive photosynthesis, and the study attributed the existence of many of the deep-reef habitats to exceptionally clear water. Macroalgae beds such as this Microdictyon setchellianum at a depth of 64 meters (210 feet) off Pearl and Hermes Atoll play a critical role in the ecology of deep coral-reef ecosystems. Nearly every fish in this image is a species endemic to the Hawaiian Islands. Credit: Greg McFall "We found that the diversity of macroalgal species actually peaked at around 90 meters [300 feet] deep," said Heather Spalding of the Department of Botany at the University of Hawai'i at Mānoa and a co-author of the study. "These extensive algae meadows represent a major component of the deep-reef communities, and play a fundamentally important role in the overall ecology." Another interesting finding of the study is that the rate of endemism - species found nowhere else on Earth - increases substantially on the deep reefs. Whereas only 17% of the fishes surveyed at depths less than 30 meters (100 feet) are species endemic to the Hawaiian Islands, more than half of the species below 70 meters (230 feet) are Hawaiian endemics. The rate of endemism increases even more in the Northwestern Hawaiian Islands, where 100% of the fishes inhabiting some of the deep reefs are found only in Hawai'i. "The extent of fish endemism on these deep coral reefs, particularly in the Northwestern Hawaiian Islands, is astonishing," said Randall Kosaki, NOAA's Deputy Superintendent of the Papahānaumokuākea Marine National Monument and a co-author of the study. "We were able to document the highest rates of endemism of any marine environment on Earth." The food web supporting the fishes on deep reefs was studied using advanced stable isotope methods, which revealed small but important differences in the ecology of fish living on deep and shallow reefs. "We used these methods because more traditional approaches require large numbers of specimens," said Brian N. Popp, University of Hawai'i at Mānoa Professor of Geology and Geophysics in the School of Ocean and Earth Science and Technology. "Our results are helping us better understand the relationship between the ecology of deep and shallow coral reef fish communities." The results of the study have important implications for conservation management. In addition to the rich and unique biodiversity inhabiting these environments, deep coral reefs may serve as a refuge for certain species that are more heavily impacted on shallow coral reefs. "With coral reefs facing a myriad of threats," said Kimberly Puglise, an oceanographer with NOAA's National Centers for Coastal Ocean Science, "the finding of extensive reefs off Maui provides managers with a unique opportunity to ensure that future activities in the region, such as cable laying, dredging dump sites, and deep sewer outfalls, do not irreparably damage these reefs."
The research, which spanned more than two decades and encompassed the entire 2,590-kilometer (1,600-mile) extent of the Hawaiian Archipelago, was primarily supported by NOAA's National Centers for Coastal Ocean Science, Papahānaumokuākea Marine National Monument, Coral Reef Conservation Program, Office of Ocean Exploration and Research, and the Pacific Islands Fisheries Science Center, as well as the Hawai'i Undersea Research Laboratory and the State of Hawai'i.
10.7717/peerj.2475
Medicine
Mapping functional connectivity in 3-D artificial brain model by analyzing neural signals
Hyogeun Shin et al, 3D high-density microelectrode array with optical stimulation and drug delivery for investigating neural circuit dynamics, Nature Communications (2021). DOI: 10.1038/s41467-020-20763-3 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-020-20763-3
https://medicalxpress.com/news/2021-03-functional-d-artificial-brain-neural.html
Abstract Investigation of neural circuit dynamics is crucial for deciphering the functional connections among regions of the brain and understanding the mechanism of brain dysfunction. Despite the advancements of neural circuit models in vitro, technologies for both precisely monitoring and modulating neural activities within three-dimensional (3D) neural circuit models have yet to be developed. Specifically, no existing 3D microelectrode arrays (MEAs) have integrated capabilities to stimulate surrounding neurons and to monitor the temporal evolution of the formation of a neural network in real time. Herein, we present a 3D high-density multifunctional MEA with optical stimulation and drug delivery for investigating neural circuit dynamics within engineered 3D neural tissues. We demonstrate precise measurements of synaptic latencies in 3D neural networks. We expect our 3D multifunctional MEA to open up opportunities for studies of neural circuits through precise, in vitro investigations of neural circuit dynamics with 3D brain models. Introduction Neural circuit dynamics refers to the spatiotemporally varying activity patterns of synaptically wired neurons as they become active or fall silent. The investigation of neural circuit dynamics is essential for deciphering the functional connectivities among the regions of the brain and for identifying the mechanisms of circuit dysfunction related to brain diseases. While microphysiological systems (MPS; tissues/organs-on-chips) have emerged as increasingly promising in vitro tools for augmenting drug development and for elucidating physiological and pathological states of the body 1 , such efforts for the brain have focused on reconstructions of neural networks or circuits on chips. The need for these in vitro models continues to grow because they complement animal experiments and can accomplish what in vivo tests cannot. Recent developments in in vitro platforms have provided controllable environments for measuring inter-neuronal dynamics 2 . For example, comparisons of neural dynamics between healthy and diseased model cells in two-dimensional (2D) cultures have revealed potential mechanisms associated with brain disorder-induced circuit dysfunction 3 , 4 , 5 . However, 2D cell cultures, which are still used extensively, inherently cannot recapitulate the structure and functions of three-dimensional (3D) living tissues 6 . Especially for brain or neural tissues, a surge of interest in 3D cultures has occurred with the hope of developing both physiological and pathological models in vitro using brain organoids, microphysiological systems (i.e., brain-on-chips), or 3D-printed, engineered tissues 7 , 8 , 9 , 10 , 11 . Specifically, the assembly of silk-based modular scaffolds seeded with cortical neurons allowed the building of multi-layered, 3D cortical tissue 12 . In addition, the directional alignment of collagen microfibrils enabled the reconstruction of a functional hippocampal neural circuit in vitro at a 3D tissue scale 13 . Despite the emerging advancements of engineered 3D neural circuit models, technologies for both precisely monitoring and modulating neural activities within these in vitro neural circuit models have not yet been developed.
Calcium imaging and planar extracellular electrophysiology with a 2D microelectrode array (MEA), the tools commonly used with 2D cultures of neurons in vitro, remain the mainstream methodologies for monitoring neural activities in 3D in vitro models 14 , 15 , 16 , 17 , 18 , 19 , 20 . A significant disadvantage of these measurement techniques is the difficulty of analyzing the neuronal connections and the dynamics of the neural network in a 3D microenvironment. Similarly, the investigation of neural circuits in vivo remains limited due to the nature of 3D connectivity 21 . As an excellent alternative, 3D MEAs have provided an opportunity to study neural networks in 3D brain models in vitro 22 , 23 . However, the 3D MEAs reported to date have limitations in monitoring neural circuit dynamics due to both the low density 23 and the random arrangement 22 of recording sites. Previous 3D MEAs were also only capable of stimulating the surrounding neurons electrically; thus, stimulating specific cell types has been challenging 24 , 25 . An MEA with localized optical stimulation and drug delivery capabilities would help map functional connectivity in neural circuits in vitro through cell-type-specific stimulation and neurochemical modulation 26 . In addition, a compact 3D MEA system is required to monitor the growth phases of developing neural networks in a temporally resolved manner, for instance by daily recordings 27 , 28 , 29 , 30 . This feature has been an advantage of analyses of developing neuronal connections on a 2D MEA 27 , 28 , 29 , 30 , 31 , 32 that can be accommodated in an incubator. Therefore, an ideal 3D MEA for investigating neural circuit dynamics in vitro must satisfy the following requirements: (1) spatial coverage across the total volume of an engineered 3D in vitro model; (2) design flexibility according to the types and sizes of 3D in vitro models (e.g., engineered neural tissues, organoids); (3) high spatial resolution to analyze the functional connectivity among neurons in 3D in vitro models; (4) localized optical and chemical stimulation capabilities for accurate modulation; and (5) compact integration for temporally resolved measurements in an incubator. To address the challenges listed above, we present a 3D multifunctional MEA system integrated with a 3D high-density microelectrode array; a thin optical fiber coupled with a small light-emitting diode (LED) and microfluidic channels, both of which are embedded in a shank for precise modulation of neural networks; and a miniaturized incubating and recording system for daily recordings of the developing neural networks (Fig. 1 ). The high-density array of electrodes integrated on the multi-shank structure of the 3D MEA allows the dynamics of the neural network to be measured from a compartmentalized neural tissue. The thin optical fiber and microfluidic channels integrated on our 3D MEA enable precise investigation of the functional connectivity between different neuronal groups through localized optical stimulation and drug delivery. Due to its miniaturized packaging, the incubating and recording system provides a suitable environment for the investigation of temporal evolutions in the dynamics of developing neural networks. Consequently, our 3D multifunctional MEA offers pivotal functions for the precise analysis of 3D brain models in vitro.
In order to demonstrate the capabilities of our 3D multifunctional MEA, we analyze the temporal evolutions in neural circuit dynamics over two weeks within a compartmentalized 3D neural tissue in which functional connectivity formed between two different populations of cortical neurons. Also, we measure the synaptic latencies and transmission velocities of neural networks within a compartmentalized 3D neural tissue, which is enabled both by the high density of electrodes and by the precise modulation afforded by the localized optical stimulation and drug delivery capabilities of our multifunctional 3D MEA. Furthermore, we measure the synaptic latency and transmission velocity from an in vitro 3D culture of neurons. We expect that this 3D multifunctional MEA can open up various opportunities for studies of both neural circuits and neurological disorders through precise investigations of neural circuit dynamics with 3D brain models in vitro.

Fig. 1: 3D high-density multifunctional microelectrode array (MEA) system. a Schematic illustrations showing three 2D multifunctional MEAs before stacking and bonding (left), the assembled 3D high-density multifunctional MEA with a PDMS fluidic interface and the multifunctional shank for optical and chemical stimulation (middle), and the application to a 3D neural network model in vitro compartmentalized into two somatic regions and a neurite region (right). b Photograph of the 3D multifunctional MEA (left; scale bar, 5 mm) and scanning electron microscopy (SEM) image of the 3D electrode array (right; scale bar, 1 mm). c SEM image of the multifunctional shank with thinned optical fiber (blue), embedded glass (green) and outlet of microchannels underneath the embedded glass layer (left; scale bar, 100 μm), and SEM and optical images of the recording shanks with platinum (Pt) electrodes (right; scale bar, 50 μm). d Photograph of the packaged 3D multifunctional MEA integrated with a small light-emitting diode (LED) and flexible printed circuit (FPC) connector on a printed circuit board (PCB). (Scale bar, 10 mm). e Photograph of the 3D multifunctional MEA system with a custom microdrive and a PDMS 3D culture chamber in an acrylic enclosure (left; scale bar, 10 mm), and 3D rendered confocal fluorescence image of the compartmentalized two-group neural network at day 14 in vitro (DIV) showing neurites (green; Tuj-1), astrocytes (red; GFAP), and nuclei (blue; DAPI). Scale bar, 100 μm. Fabrication and packaging of the MEA system were independently repeated at least ten times with similar results to ensure reproducibility, and representative images are shown in the figure.

Results

Design and fabrication of the 3D multifunctional MEA

Our 3D multifunctional MEA consists of a three-by-six array of shanks, i.e., one multifunctional shank and 17 recording shanks. We integrated 63 recording microelectrodes evenly throughout the 18 shanks and embedded both a thinned optical fiber (~60 μm in diameter) and five parallel microfluidic channels in the multifunctional shank (Fig. 1a, b). The integration of all the electrodes, the optical fiber, and the microfluidic channels allowed measurements of neural activities across the whole region within an engineered 3D neural tissue, as well as local modulation of the neural networks at a specific site via optical and chemical stimulation (Fig. 1a). Each shank was 6 mm long, which was enough to provide space between the 3D neural tissue and the printed circuit board (PCB) of a packaged device.
Our intent was to prevent accidental short circuits of the PCB immersed in the culture medium, as well as potential contamination of the medium. To maintain the structural fidelity of the 3D neural tissue with the 3D MEA implanted, we aimed to minimize the probe's dimensions. The multifunctional shank was 145 μm wide, each recording shank was 63 μm wide, and all shanks were 40 μm thick. The volume occupied by our 3D MEA was only 2.63% of the total volume of the 3D neural tissue, which suggests that the pre-inserted 3D MEA would affect the formation of the 3D neural networks in a nearly negligible manner. When viewed from the bottom, the shanks of our 3D MEA formed a matrix with three rows and six columns (Fig. 1a, b). The row and column spacings were 500 and 360 μm, respectively. We integrated either three or four recording microelectrodes (20 × 20 μm²) on the tip of each shank, and the distance between two adjacent electrodes was 85 μm. We designed the 3D MEA to cover the dynamics of the neural network within a monolithic, yet compartmentalized, 3D neural tissue construct (1.85 × 1 × 0.3 mm³) comprising a neurite region between two somatic regions (Fig. 1a, b). Notably, the volumetric density of the recording sites was approximately 114 sites·mm⁻³, significantly higher than the ~33 sites·mm⁻³ of a recently reported 3D MEA 23. We devised the 3D multifunctional MEA from three layers of 2D MEAs that were fabricated separately using our previous microengineering processes 26,33,34 (Supplementary Fig. 1). Specifically, we assembled the 3D configuration by consecutively bonding the three 2D MEAs, which had different body sizes (Fig. 1a and Supplementary Fig. 2). A key feature of the multifunctional shank was that the microfluidic channels for chemical delivery were embedded directly in the thin shank 34. We placed an outlet, 30 μm wide and 12 μm long, at the tip of the multifunctional shank where each of the 20 μm wide, 12 μm high microchannels ended (Fig. 1c). Note that our implementation readily allowed a highly flexible configuration of the microelectrodes by precise adjustment of the shank positions on the 2D MEA arrays, as well as in a 3D setting, e.g., with inter-body spacers in between 33,35 (Supplementary Fig. 2). Overall, our design and fabrication of the 3D multifunctional MEA provide the capability of local, multimodal manipulation of neural networks during millimeter-scale monitoring of neural activities across the entire domain of 3D neural tissues in vitro. Packaging and characterization of the 3D multifunctional MEA Figure 1d shows the 3D multifunctional MEA packaged for the in vitro experiments, which was done by bonding it to a custom-designed PCB for electrical connections to an external recording system and assembling it with a custom microdrive. We then bonded a polydimethylsiloxane (PDMS) microfluidic chip onto the 3D MEA as a fluidic interface for drug delivery, and bonded a small LED to the end of the fiber. The LED provided a simple operating environment in which only two thin electrical wires were required instead of an external light source, such as a laser.
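The occupancy and density figures quoted above follow directly from the stated geometry. As an illustrative cross-check (a minimal sketch in Python; the inserted shank depth is assumed equal to the 300 μm tissue thickness, which is not stated explicitly in the text):

# Sanity check of the array geometry stated above (all values from the text).
tissue_vol = 1.85 * 1.0 * 0.3            # mm^3, compartmentalized construct
n_electrodes = 63

# Volumetric density of recording sites
density = n_electrodes / tissue_vol       # ~114 sites/mm^3
print(f"recording-site density: {density:.0f} sites/mm^3")

# Volume fraction occupied by the inserted shanks
thick = 0.040                              # mm, shank thickness
depth = 0.300                              # mm, inserted depth (assumption)
v_multi = 0.145 * thick * depth            # one multifunctional shank, 145 um wide
v_rec = 0.063 * thick * depth              # one recording shank, 63 um wide
occupied = v_multi + 17 * v_rec
print(f"occupied volume fraction: {100 * occupied / tissue_vol:.2f} %")  # ~2.63 %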
After the packaging, we enhanced the neural recording capability by increasing the effective surface areas of the platinum (Pt) electrodes through electrodeposition of Pt-black 36 (Supplementary Fig. 3a). We evaluated three crucial functions of our 3D multifunctional MEA before analyzing the dynamics of the neural network. First, we measured the electrical impedance of all 63 microelectrodes. We confirmed that the average electrical impedance at 1 kHz after the electrodeposition of the Pt-black (0.015 ± 0.004 MΩ) was two orders of magnitude lower than that of bare Pt (1.761 ± 0.346 MΩ) (Supplementary Fig. 3b). This significantly reduced impedance, ~44 times lower than that of identically sized iridium (Ir) electrodes 26, was advantageous for extracellular measurements of neural activities, yielding higher signal-to-noise ratios. To evaluate the long-term stability of the Pt-black electrodes, we measured the electrical impedance of Pt-black electrodes submerged in 1× phosphate-buffered saline (PBS) in an incubator at 37 °C. We confirmed that the impedance remained nearly constant from day 1 (0.017 ± 0.004 MΩ) to day 14 (0.018 ± 0.004 MΩ) without degradation (Supplementary Fig. 3c). Next, we measured the flow rates through the microfluidic channels under three different conditions, i.e., with the 3D MEA exposed to air, inserted in cell-free collagen, and inserted in cell-seeded collagen. We applied pressures between 50 and 200 kPa at an inlet of the PDMS chip. We chose 0.25% [w/v] collagen type I because this scaffold type and concentration have been used extensively for engineered tissue constructs 37, including neural tissues 13. For the cell-laden collagen, we seeded primary rat cortical neurons (E18) at a density of 4 × 10⁷ cells·mL⁻¹, identical to the in vitro experimental conditions used throughout this study. At a pressure of 100 kPa, the flow rates under these conditions were 0.557 ± 0.007 μL·min⁻¹ (in air), 0.542 ± 0.011 μL·min⁻¹ (in cell-free collagen), and 0.539 ± 0.015 μL·min⁻¹ (in neuron-seeded collagen); the differences were statistically insignificant, and all values were similar to the calculation based on hydraulic resistance 26, i.e., 0.568 μL·min⁻¹ (Supplementary Fig. 3d). These data verify that we were able to control the infused volume precisely based on the predictive calculation. Last, to predict the volumetric coverage of optical stimulation, we measured the output optical power at the tip of the fiber and then simulated the distribution of transmitted light. First, referring to the LED datasheet, we calculated the luminous efficiency of the LED as 5.15% (Supplementary Fig. 3e and the detailed calculation in Supplementary Note 1). Accordingly, the LED's output optical power was 51.5 mW when 1 W of input electrical power was supplied. At this LED output, the optical power from the fiber tip was 0.15 mW, corresponding to a light coupling efficiency of 0.29% (Supplementary Fig. 3e and Supplementary Note 1). This coupling efficiency was lower than the 1.42% we reported previously 26 because the proposed probe was integrated with a non-coherent LED instead of a coherent laser as the light source; the LED provided a compact and straightforward configuration but exhibited inherently lower coupled power due to its poor directionality.
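As an illustrative check of these optical figures (the fiber-tip power density reported in the next paragraph is assumed here to be referenced to the area of the 50 μm fiber core):

import math

p_led = 51.5        # mW, LED output at 1 W electrical input (5.15 % efficiency)
p_tip = 0.15        # mW, measured at the fiber tip

coupling = p_tip / p_led
print(f"coupling efficiency: {100 * coupling:.2f} %")      # ~0.29 %

core_d_mm = 0.050                                           # 50 um fiber core
area = math.pi * (core_d_mm / 2) ** 2                       # mm^2
irradiance = p_tip / area
print(f"power density at tip: {irradiance:.0f} mW/mm^2")    # ~76 mW/mm^2
print(f"margin over 1 mW/mm^2 ChR2 threshold: {irradiance:.0f}x")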
The optical power measured at the fiber tip corresponded to an optical power density of 76 mW·mm⁻², which surpassed the minimum optical intensity (1 mW·mm⁻²) required to activate channelrhodopsin-2 (ChR2) 38. Based on the measured output optical power of 0.15 mW transmitted from the LED at 1 W of input electrical power, we used a Monte Carlo simulation 39,40 to profile the volume of transmitted light in collagen (a more detailed simulation method is provided in the "Methods" section). We confirmed that the irradiance remained greater than 1 mW·mm⁻², the ChR2 activation threshold, out to a distance of 220 μm from the end of the fiber (Supplementary Fig. 3f). In addition, by inspecting the distribution of light in the x–y plane, we confirmed that the stimulated volume did not extend beyond the tip of the shank. These data indicate that our 3D MEA allowed optical stimulation locally at the desired site (Supplementary Fig. 3f). Assembly of the monitoring system and the 3D neural culture To enable real-time recording of neural activities within engineered neural tissues, we devised a miniaturized cubicle in which we assembled the 3D multifunctional MEA (Fig. 1e). The miniaturized cubicle consisted of the following: a custom-designed stainless-steel microdrive to adjust and hold the vertical position of our 3D MEA, a PDMS culture chamber with a well (2.5 × 1.5 × 0.5 mm³) to confine the neuron-seeded collagen scaffold and to supply culture medium, and an acrylic enclosure (10 × 8 × 8 cm³) to minimize undesired evaporation of the medium over the culture periods and during measurements of the dynamics of the neural network. The custom-designed microdrive consisted of a moving part and a bottom plate assembled by inserting two screws into the connection holes and tightening them (Supplementary Fig. 4a). By turning a long screw in the center of the moving part of the microdrive, the vertical position of the 3D MEA could be adjusted readily and precisely (Supplementary Fig. 4b). The overall system was small, and the configuration, including the 3D multifunctional MEA, was straightforward (Supplementary Figs. 5 and 6). In addition, because the MEMS-based batch process allows 20–30 electrode arrays to be fabricated from a single wafer, scaling up our system to perform multiple simultaneous experiments would be reasonably feasible. To fabricate the 3D neural network model in our culture chamber, as a first step we immobilized the PDMS culture chamber on the bottom plate of the custom microdrive with a thin layer of uncured PDMS as an interfacial adhesive (Supplementary Fig. 6a). After autoclaving the culture chamber attached to the microdrive, we coated polydopamine on the inner surfaces of the PDMS well for adhesion of the collagen scaffold. Then, we used two small screws to assemble the 3D MEA on the moving part of the microdrive (Supplementary Fig. 6b). We found that the timing of loading the 3D MEA into the collagen scaffold was the most critical variable. More specifically, the 3D MEA should be loaded before initiating gelation of the collagen in order to allow uniform distribution of the collagen microfibrils near the shanks (Supplementary Fig. 7a and Supplementary Movie 1).
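As a rough analytic cross-check of the simulated stimulation volume, one can combine exponential attenuation, using the effective attenuation coefficient of diffusion theory with the optical parameters given in the Methods, with geometric spreading over the fiber's emission cone. This is only an order-of-magnitude sketch, and the Monte Carlo result remains the authoritative estimate:

import math

mu_a, mu_s, g = 0.3, 29.0, 0.89            # mm^-1, mm^-1, anisotropy (Methods)
mu_s_prime = mu_s * (1 - g)                # reduced scattering coefficient
mu_eff = math.sqrt(3 * mu_a * (mu_a + mu_s_prime))

i0 = 76.0                                   # mW/mm^2 at the fiber tip
r0 = 0.025                                  # mm, fiber core radius
tan_half = math.tan(math.radians(21.9))     # emission half-angle (assumed)

for z in (0.1, 0.22, 0.3, 0.4):             # mm from the fiber tip
    spread = (r0 / (r0 + z * tan_half)) ** 2        # cone spreading loss
    i_z = i0 * math.exp(-mu_eff * z) * spread       # attenuated irradiance
    print(f"z = {z*1000:3.0f} um : ~{i_z:5.2f} mW/mm^2")

With these assumptions the irradiance falls to ~1 mW·mm⁻² within a few hundred micrometers of the tip, consistent with the simulated 220 μm stimulation range.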
In contrast, inserting the 3D MEA into the collagen after complete gelation was difficult because the fibrous, viscoelastic collagen resisted penetration, and the insertion exacerbated structural deformation (Supplementary Fig. 7b and Supplementary Movie 1). We developed two types of 3D neural network models, i.e., a single-group neural network and a compartmentalized two-group neural network (Supplementary Fig. 8). We used the single-group model to analyze connectivity among small units (i.e., individual neurons), and the two-group model to analyze connectivity among larger units at the network level. The compartmentalized neural tissue was formed by first filling cell-free collagen into a central compartment and then adding neuron-seeded collagen into the two side compartments at an interval of 20 min. In other words, we loaded the neuron-seeded collagen into the side compartments after partial gelation of the cell-free collagen in the central compartment for 20 min. We utilized ~125 μm-thick polyester (PET) films to separate the compartments in the well (see Supplementary Fig. 8 and the more detailed protocol in the "Methods" section). We noted that removal of the PET films in less than 40 min (e.g., at 30 min, i.e., 10 min after loading the side compartments) caused bleed-through between the side and central zones. In the miniaturized cubicle, we confirmed that the primary rat cortical neurons in collagen maintained high viability (Supplementary Fig. 9 and Supplementary Movie 2) and formed structural connections through progressively maturing neurite outgrowth over two weeks in both the single-group and two-group neural network models (Supplementary Figs. 10, 11 and Supplementary Movies 3, 4). Interestingly, we found small populations of GFAP-positive astrocytes in both types of neural tissues, although we isolated neuron-rich populations from embryonic brains (E18) (Supplementary Figs. 10, 11, and 12). We also note that our compartmentalization approach was successful: neuronal nuclei remained in the two somatic regions, while neurites spread through the central neurite region (Supplementary Figs. 11, 12 and Supplementary Movie 4). We note that 0.25% [w/v] collagen allows for uniform neuronal density throughout the full 300 μm thickness, because z-stack imaging through 100 μm showed uniform seeding density in both the current and our previous studies 13. In contrast, lower concentrations of collagen or Matrigel lead to sedimentation of the seeded neurons by gravity before gelation of the scaffolds. Unfortunately, due to the high cell seeding density (i.e., 4 × 10⁷ cells·mL⁻¹) used in this study, which was similar to the cell density in the rat brain 41, z-stack imaging was limited to 100 µm with conventional imaging methods such as confocal microscopy. Advanced 3D volumetric imaging techniques, such as tissue clearing approaches, could serve as an excellent solution to overcome this limitation. Dynamics of neural network in 3D neural culture To analyze the dynamics of neural networks according to the formation of functional connections between neurons in 3D, we first measured the neural activities in the single-group model for up to 14 days in vitro (DIV). Spontaneous activity first appeared on some electrodes at DIV 6, and both the neurons' firing rates (Fig. 2a, color-mapped raster plots in Fig. 2b, and mean spike rates in
Fig. 2c) and the number of active electrodes (Fig. 2d) increased as we continued recording daily for 14 days (see Supplementary Fig. 13). These data are consistent with those from 2D and 3D in vitro neural models 27,28,29,30,31, reaffirming that sufficient maturation, primarily substantial neurite outgrowth and inter-neuronal synapse formation, is essential for the functional activity of neurons cultured in vitro 27,42,43,44. Also, a thorough inspection of each electrode's signals indicated that the neurons' firing rates in 3D increased globally within the entire neural tissue (Supplementary Fig. 14). Fig. 2: Temporal evolution of spontaneous activities in the single-group 3D neural network model according to the formation of functional connections between neurons. a Representative spontaneous activities at days in vitro (DIV) 6 and 14: traces of field potential (left) and overlay spike plots (right). b Color-mapped raster plots showing spontaneous activities recorded from 63 electrodes of the 3D multifunctional MEA from DIV 6 to 14. c–j Bar graphs displaying temporal evolution of spontaneous activities from DIV 6 to 14 (n = 5 independent samples for all data): mean spike rate (c; *P = 0.0102 between DIV 10 and 11), number of active electrodes (d), mean burst rate (e; *P = 0.032 between DIV 10 and 11), burst duration (f), number of spikes in burst (g), inter-spike interval (ISI) in burst activity (h), inter-burst interval (IBI) (i), percentage of burst spikes in total spikes (j). Data are presented as mean values +/− s.e.m. Statistical significance was tested with a two-tailed unpaired t-test. Source Data is available as a Source Data file for Fig. 2b–j. Full size image In addition to the individual activities, our 3D high-density MEA made it possible to measure the frequency of in vitro burst activity, i.e., the repeated high-frequency firing of neurons. Burst activity is an essential factor in neuronal communication both in vitro and in vivo 45,46. While the neurons' burst activity increased gradually until DIV 10, the overall frequency of burst activity and the proportion of bursts within each neuron's firing pattern increased markedly after DIV 10 (Fig. 2e–j). These results support the formation of functional connections among neurons in 3D. Although burst activity in vitro can generally be promoted by increasing the cell seeding density, our 3D MEA, with its densely integrated electrodes, dramatically increased the probability of capturing burst activity and enhanced the spatial resolution for precisely analyzing neural networks in vitro. We also analyzed the synchronized activity that is known to occur in mature neural networks in vitro. We used a synchrony metric that assigns each pair of electrodes a score between zero (lowest) and one (highest) indicating their level of synchronization 32,47. A high synchronization score suggests the formation of a functional neural network among the neurons around the two electrodes. We confirmed that, as the culture period increased, the synchronization among active electrodes became more pronounced, and the number of synchronized electrodes also increased (Fig. 3a–e and Supplementary Fig. 15). These data indicated that the number of synapses among the neurons in the single-group neural network model in vitro increased markedly after DIV 10.
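The firing-rate, burst, and synchrony measures used above reduce to simple operations on per-electrode spike-time lists. A minimal sketch follows; the ISI-threshold burst criterion and the binned-correlation synchrony score are common illustrative choices, not necessarily the exact definitions of refs 32,47:

import numpy as np

def spike_metrics(spikes, t_end, isi_max=0.1, min_spikes=3):
    """Mean rate and simple ISI-threshold burst detection for one electrode.
    spikes: sorted spike times (s); a burst is >= min_spikes spikes whose
    consecutive ISIs are all below isi_max (an illustrative criterion)."""
    rate = len(spikes) / t_end
    bursts, run = [], [spikes[0]] if len(spikes) else []
    for t0, t1 in zip(spikes[:-1], spikes[1:]):
        if t1 - t0 < isi_max:
            run.append(t1)
        else:
            if len(run) >= min_spikes:
                bursts.append(run)
            run = [t1]
    if len(run) >= min_spikes:
        bursts.append(run)
    return rate, bursts

def synchrony(spikes_a, spikes_b, t_end, bin_s=0.05):
    """Score in [0, 1]: Pearson correlation of binned spike counts,
    clipped at zero (one common way to build a synchrony matrix)."""
    bins = np.arange(0, t_end + bin_s, bin_s)
    a, _ = np.histogram(spikes_a, bins)
    b, _ = np.histogram(spikes_b, bins)
    if a.std() == 0 or b.std() == 0:
        return 0.0
    return float(max(0.0, np.corrcoef(a, b)[0, 1]))

# Toy example: two electrodes sharing half of their spike times.
rng = np.random.default_rng(1)
shared = np.sort(rng.uniform(0, 60, 150))
e1 = np.sort(np.concatenate([shared, rng.uniform(0, 60, 150)]))
e2 = np.sort(np.concatenate([shared + 0.002, rng.uniform(0, 60, 150)]))
print("mean rate (Hz):", round(spike_metrics(e1, 60)[0], 2))
print("synchrony score:", round(synchrony(e1, e2, 60), 2))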
We also applied the Louvain algorithm 32,48, a graph-based network analysis, for 3D visualization of the functional neural network between neurons. We treated each electrode as a node, used the synchronization score between electrodes as the edge weight, and merged nodes into networks when the synchronization score between them was greater than 0.5 (a minimal code sketch of this construction is given below). We found that a small number of networks started forming from DIV 8 (Fig. 3b). Intriguingly, the number of networks peaked at DIV 9 and then decreased slightly later in vitro (Fig. 3f and Supplementary Fig. 15). These data indicated that short connections between adjacent neurons form increasingly during the early stage before DIV 10 and then merge into relatively large, maturing networks, so that the number of networks decreased while the number of connections per network increased continuously (Fig. 3, Supplementary Fig. 15, and Supplementary Table 1). Fig. 3: Analysis of neural network dynamics based on spontaneous activities in the single-group 3D neural network model. a–e Color-mapped cross-correlation matrices displaying synchronization scores between electrodes (i) and 3D network maps showing connectivities with node degrees, as well as correlations between nodes (ii), based on spontaneous activities at days in vitro (DIV) 6 (a), 8 (b), 10 (c), 12 (d), and 14 (e). The color of the node indicates the index of the network connected among electrodes; for example, the purple nodes at DIV 14 are in the same network. Node degree indicates the number of electrodes connected to each electrode; for example, the greatest value of 1 represents an electrode connected with all other electrodes. The colors of the lines indicate the correlation between electrodes. f Plots showing the number of networks (red triangle and dotted line; **P = 0.002 between DIV 9 and 10) and the number of connected electrodes per network (blue circle and dotted line) from DIV 6 to 14. Data are presented as mean values +/− s.d. with individual data points (blue outlined circle; n = 3 networks for DIV 11, 12, and 14; n = 4 networks for DIV 8, 9, 10, and 13). Statistical significance was tested with a two-tailed unpaired t-test. Source Data is available as a Source Data file for Fig. 3a–f. Full size image Dynamics of neural network with optical and chemical stimuli For precise investigation of synaptic connectivity before and after the formation of the functional neural network in 3D, we utilized optogenetics as a verification tool capable of cell-type-specific stimulation 38. After expressing a light-sensitive ion channel, channelrhodopsin-2 (ChR2), in the primary rat cortical neurons cultured within collagen by viral infection with AAV-EF1α-ChR2-eGFP (Supplementary Fig. 16 and Supplementary Movie 5), we successfully measured activated neural signals from ChR2-expressing neurons upon light stimulation (0.2 Hz with a 50% duty cycle, 76 mW·mm⁻² with a total power of 0.15 mW), despite the stimulation artifact caused by the photovoltaic effect during the transitions between light ON and OFF 49,50 (Supplementary Fig. 17). After observing the light-activated neural signals, we stimulated the ChR2-expressing neurons locally around the multifunctional shank while recording from all of the shanks of our 3D MEA. At DIV 6, only the neurons around the optical stimulation site fired more in response (Fig. 4a, c and Supplementary Fig. 18a).
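The graph construction referenced above can be sketched with networkx, whose recent versions (2.8 and later) ship a Louvain implementation; the synchrony matrix is assumed to be precomputed, for example as in the previous sketch:

import numpy as np
import networkx as nx
from networkx.algorithms.community import louvain_communities  # networkx >= 2.8

def build_networks(sync, threshold=0.5, seed=0):
    """Nodes are electrodes; edges carry synchrony scores above threshold;
    Louvain community detection then groups the nodes into networks."""
    n = sync.shape[0]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if sync[i, j] > threshold:
                g.add_edge(i, j, weight=float(sync[i, j]))
    communities = louvain_communities(g, weight="weight", seed=seed)
    # ignore isolated nodes so only genuine multi-electrode networks are counted
    return [c for c in communities if len(c) > 1]

# Toy 63-electrode synchrony matrix with two synchronized blocks.
rng = np.random.default_rng(2)
sync = rng.uniform(0.0, 0.3, (63, 63))
sync[:10, :10] = sync[20:30, 20:30] = 0.8
sync = np.triu(sync, 1) + np.triu(sync, 1).T   # symmetrize
nets = build_networks(sync)
print(f"{len(nets)} networks; sizes: {[len(c) for c in nets]}")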
In contrast, when the optical stimulation was applied at DIV 14, neurons fired vigorously throughout the entire neural tissue, as well as around the stimulation site (Fig. 4b, d and Supplementary Fig. 18b). These results indicated that most of the neurons that had matured into the functional network over two weeks could be activated through synaptic connections (Fig. 4e and Supplementary Table 2), despite a slight decrease in spontaneous activity (i.e., neural activity during LED-off periods). This reduced activity is conceivably attributable to an effect of the ChR2 viral infection on viability and neural activity 51,52, compared with the activity of the uninfected culture shown in Fig. 2. Fig. 4: Analysis of synaptic connectivity before and after forming functional neural networks in the single-group 3D neural network model. a–b Raster plots showing neural activities recorded from 63 electrodes during local optical stimulation (0.2 Hz, 50% duty cycle) at days in vitro (DIV) 6 (a) and 14 (b). The light blue rectangle indicates the onset of light. The dark blue rectangle represents the electrodes on the light-transmitting multifunctional shank. c–d Firing rate of 3D cultured neurons near the multifunctional and other recording shanks during the LED off (white) and on (blue) cycles of the optical stimulation at DIV 6 (c; **P = 0.0058 near the multifunctional shank; ns P = 0.058 near the other recording shanks) and 14 (d; **P = 0.0072 near the multifunctional shank; **P = 0.0080 near the other recording shanks). e 3D network maps between electrodes based on neural activities evoked by optical stimulation at DIV 6 (left) and 14 (right). Node color, degree, and line color indicate the network index connected among electrodes, the number of electrodes connected to each electrode, and the correlation between electrodes, respectively. f–h Raster plots showing neural activities recorded from 63 electrodes during local optical stimulation before (f) and after (g) CNQX/AP5 injection, and after washing out CNQX/AP5 (h) at DIV 14. i–k Firing rate of 3D cultured neurons near the multifunctional and other recording shanks during the LED off (white) and on (blue) cycles of the optical stimulation before (i; ****P = 0.000079 near the multifunctional shank; ***P = 0.0008 near the other recording shanks) and after (j; **P = 0.0041 near the multifunctional shank; ns P = 0.0707 near the other recording shanks) CNQX/AP5 injection, and after washing out CNQX/AP5 (k; **P = 0.0012 near the multifunctional shank; ***P = 0.0005 near the other recording shanks) at DIV 14. l–n 3D network maps between electrodes based on neural activities evoked by optical stimulation before (l) and after (m) CNQX/AP5 injection and after washing out CNQX/AP5 (n) at DIV 14. Data are presented as mean values +/− s.d. with individual data points (white circle; n = 6 stimulation trials for all data). Statistical significance was tested with a two-tailed unpaired t-test. Source Data is available as a Source Data file for Fig. 4a–n. Full size image To demonstrate the chemical modulation of synaptic transmission combined with optogenetic stimulation, at DIV 14 we infused blockers of excitatory synaptic transmission through the microfluidic channels in the multifunctional shank. Specifically, we injected 1 μL of CNQX (20 μM) and AP5 (50 μM) over 4 min at a flow rate of 0.25 μL·min⁻¹ and waited 30 min to ensure that sufficient diffusion had occurred.
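The 30-min waiting period is generous relative to a simple diffusion estimate. A sketch, assuming an order-of-magnitude diffusivity for small molecules such as CNQX and AP5 in dilute collagen (the value of D is an assumption, not taken from the text):

# Characteristic 3D diffusion time t ~ L^2 / (6 D) for the injected blockers.
D = 3e-10  # m^2/s, assumed small-molecule diffusivity in hydrogel

for L_um in (100, 250, 500, 1000):
    L = L_um * 1e-6
    t = L ** 2 / (6 * D)
    print(f"L = {L_um:4d} um : t ~ {t:6.0f} s ({t/60:.1f} min)")
# Even at 1 mm, t is ~9 min, so a 30 min wait comfortably allows equilibration.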
Spontaneous neural activity is known to recover quickly upon exposure to CNQX/AP5 53, whereas synaptic transmission remains blocked until the drugs are washed out 54. Consistent with this, the mean firing rate during LED-off periods remained similar before and after the CNQX/AP5 injection (Fig. 4i, j). Whereas light pulses evoked synchronized neural activities (i.e., active electrodes and firing rate) far from the stimulation site before the chemical silencing (Fig. 4f, i and Supplementary Fig. 18c), after the injection the light-evoked synchronized activities became confined predominantly to the region around the optical stimulation site (Fig. 4g, j and Supplementary Fig. 18d). After washing out the synaptic blockers with fresh medium three times over 30 min, followed by 1 h of stabilization, the neural activities re-synchronized throughout the 3D neural tissue (Fig. 4h, k and Supplementary Fig. 18e). In addition, our network analysis indicated that blockade of the excitatory synaptic receptors segregated the more extensive networks (Fig. 4l) into several smaller networks (Fig. 4m and Supplementary Table 3); correspondingly, the synaptic networks recovered after the wash-out (Fig. 4n). Network dynamics between two compartmentalized neural groups We examined the functional connectivity between two compartmentalized neural groups as an in vitro model for investigating neural circuit dynamics. The culture chamber for the two-group neural network model consisted of two side somatic regions separated by a central neurite region (Fig. 5a). Six shanks were located in each region to measure neural activities throughout the compartmentalized 3D neural tissue, while the shank with the stimulation capabilities was placed in the first somatic region (Fig. 5a). Immunostaining indicated that we had successfully reconstructed the soma–neurite–soma structure, in which the central region contained only the dense outgrowth of neurites bridging the two somatic regions, developing from DIV 6 to DIV 14 (Fig. 1e and Supplementary Fig. 11). Similar to the single-group model, the firing rate and the number of active electrodes increased substantially after DIV 10 (Supplementary Figs. 19 and 20). Also, our 3D MEA enabled the measurement of synchronized activity at DIV 9, which indirectly indicated that the functional connectivity between the two groups of cortical neurons had begun to form (Supplementary Fig. 19). Fig. 5: Analysis of functional connectivity between compartmentalized somatic regions in the two-group 3D neural network model. a Schematic illustrations of the 3D multifunctional MEA inserted in the compartmentalized 3D neural tissue consisting of two somatic regions (pink) and a neurite region (orange). Six shanks are positioned in each region, and the multifunctional shank is located near a corner of the first somatic region. The blue rectangle indicates the onset of light. b–d Changes in neural activities upon local optical stimulation at day in vitro (DIV) 6 in the two-group 3D neural network model. b Raster plots displaying neural activities recorded from 63 electrodes of the 3D MEA within the compartmentalized regions during local optical stimulation (0.2 Hz, 50% duty cycle). The light blue rectangle indicates the onset of light. The dark blue rectangle represents the electrodes on the light-transmitting multifunctional shank.
c 3D visualization of the shank map in the compartmentalized neural network model (top) and z-averaged, color-mapped increase in firing rate during LED on-cycles, compared with that during LED off-cycles (bottom), corresponding to sub-regions near each shank as depicted in the top panel. Black indicates no signals recorded from the electrodes. d Firing rate of 3D cultured neurons at the stimulation site and in the first somatic, neurite, and second somatic regions during the LED off (white) and on (blue) cycles of the optical stimulation (*P = 0.0443 at the stimulation site; ns P = 0.3409 in the first somatic region). e–g Changes in neural activities upon local optical stimulation at DIV 10 in the two-group 3D neural network model. e Raster plots; (f) 3D visualization of the shank map and z-averaged, color-mapped increase in firing rate during LED on-cycles; (g) Firing rate of 3D cultured neurons in each region (*P = 0.0220 at the stimulation site; *P = 0.0444 in the first somatic region; ns P = 0.4506 in the neurite region; *P = 0.0436 in the second somatic region). h–j Changes in neural activities upon local optical stimulation at DIV 14 in the two-group 3D neural network model. h Raster plots; (i) 3D visualization of the shank map and z-averaged, color-mapped increase in firing rate during LED on-cycles; (j) Firing rate of 3D cultured neurons in each region (*P = 0.0218 at the stimulation site; *P = 0.0229 in the first somatic region; *P = 0.0214 in the neurite region; *P = 0.0228 in the second somatic region). Data are presented as mean values +/− s.d. with individual data points (white circle; n = 6 stimulation trials for all data). Statistical significance was tested with a two-tailed unpaired t-test. Source Data is available as a Source Data file for Fig. 5b–j. Full size image To validate the formation of functional connectivity, we performed spatially resolved, local optical stimulation of the ChR2-expressing neurons, followed by temporally resolved daily measurements. At DIV 6, only neurons around the stimulation site were activated by the light pulses (Figs. 5b–d and 6a–c). The number of optically activated neurons then gradually increased in the first somatic region between DIV 7 and 9 (Supplementary Fig. 21). At DIV 9, some neurons in the second somatic region became active soon after the optical stimulation of the first somatic region, which confirmed the formation of functional connectivity between the two different neuronal populations (Supplementary Figs. 21 and 22). At DIV 10, we observed both significant optically evoked increases in firing rate in both somatic regions and synchronized activities (Figs. 5e–g, 6d–f and Supplementary Figs. 22 and 23). We also observed a higher firing rate in the first than in the second somatic region, due to active bursting recorded from electrodes (E5 and E6) around the stimulation site. This result could be attributed to enhanced local network formation in the first somatic region by repetitive neural activation through daily optical stimulation 55. With increasing connectivity between the two somatic regions via the neurite region from DIV 11 to DIV 13 (Supplementary Figs. 22, 23, and 24, and Supplementary Table 4), more vigorous synchronized activities, higher neuronal firing rates, and an increased number of networks appeared across all regions at DIV 14 (Figs. 5h–j and 6g–i).
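The region-wise on/off comparisons in Fig. 5 reduce to averaging spike counts over the stimulation half-cycles. A minimal sketch (the cycle timing follows the 0.2 Hz, 50% duty-cycle protocol; the spike times below are synthetic placeholders):

import numpy as np

def on_off_rates(spikes, cycle_starts, on_s=2.5, off_s=2.5):
    """Mean firing rate during LED-on vs LED-off half-cycles.
    0.2 Hz stimulation with a 50 % duty cycle gives 2.5 s on / 2.5 s off."""
    on = sum(np.sum((spikes >= t) & (spikes < t + on_s)) for t in cycle_starts)
    off = sum(np.sum((spikes >= t + on_s) & (spikes < t + on_s + off_s))
              for t in cycle_starts)
    dur = len(cycle_starts) * on_s
    return on / dur, off / dur

# Toy data: one electrode that fires more during the on-phase.
rng = np.random.default_rng(3)
starts = np.arange(0, 30, 5.0)                  # six 5 s stimulation cycles
spk = np.sort(np.concatenate(
    [t + rng.uniform(0, 2.5, 20) for t in starts]        # on-phase spikes
    + [t + 2.5 + rng.uniform(0, 2.5, 5) for t in starts]))  # off-phase spikes
r_on, r_off = on_off_rates(spk, starts)
print(f"on: {r_on:.1f} Hz, off: {r_off:.1f} Hz, increase: {r_on - r_off:.1f} Hz")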
Our 3D high-density MEA also made it possible to measure extracellular action potentials in the neurite region from DIV 11 to 14, indicating that action potentials propagated between the two different neuronal populations 56 (Supplementary Figs. 21, 23, and 24). Fig. 6: Temporal evolution of neural network dynamics in the compartmentalized two-group 3D neural network model. Color-mapped cross-correlation matrices displaying synchronization scores between electrodes (a, d, and g), top-down view (xy-plane; b, e, and h) and 3D view (c, f, and i) of network maps showing connectivities with node degrees, as well as correlations between nodes, based on neural activities evoked by optical stimulation at days in vitro (DIV) 6 (a–c), 10 (d–f), and 14 (g–i). Node color, degree, and line color indicate the network index connected among electrodes, the number of electrodes connected to each electrode, and the correlation between electrodes, respectively. Source Data is available as a Source Data file for Fig. 6a–i. Full size image Importantly, we were able to analyze the synaptic latency between the two neural groups in a 3D setting at DIV 14 (Fig. 7a–c). The synaptic latency serves as direct evidence of the functional connectivity between the two regions. We estimated the synaptic latency by comparing the relative time points of the signals recorded from each electrode with a reference timestamp at the optical stimulation (Fig. 7d). We found that the synaptic latency was shorter than 10 ms, ranging from 2 to 8 ms in all three regions with the exception of the stimulation site (Fig. 7d, e). Longer synaptic latencies were evident in the second somatic region, indicating that signal delay increased with distance from the stimulation site. We confirmed that the longest synaptic latencies in the first and second somatic regions were 6 ms (between electrode 1 on S1 and electrode 21 on S6) and 8 ms (between electrode 1 on S1 and electrode 57 on S16) (Fig. 7f, g). Considering the previously reported typical synaptic latency of approximately 1 to 4 ms 57, we inferred that multiple synapses could connect the neurons near electrodes 21 or 57. Also, signal delays varied along the z-axis on some shanks (e.g., shank 15; S15) (Fig. 7d, e). This observation suggested that neurons at different positions along the z-axis could receive different synaptic inputs. To summarize, the distribution of synaptic latencies in each region showed signal delays of approximately 3 to 8 ms between the two neural networks (Fig. 7h). Fig. 7: Synaptic latency and transmission velocity between two somatic regions in the compartmentalized two-group 3D neural network model. a–c Transiently spiking signals of neurons recorded from 63 electrodes of the 3D MEA upon optical stimulation in the first somatic (a; electrodes 1–21), neurite (b; electrodes 22–42), and second somatic (c; electrodes 43–63) regions at DIV 14. The blue triangle indicates the onset time of the optical stimulation. d 3D color-mapped synaptic latency at each electrode, relative to the optical stimulation on shank 1. Black-colored circles indicate no signals recorded from the electrodes. The light blue indicates transmitted light from shank 1. e Synaptic latency at each electrode (white circle) in the first somatic (shanks 1–6), neurite (shanks 7–12), and second somatic (shanks 13–18) regions.
The red line indicates the mean synaptic latency of the signals recorded from the electrodes on each shank. Data are presented as mean values +/− s.d. with individual data points (white circle; n = 3 signal-recording electrodes for shanks 4, 5, 9, 10, 11, 12, 14, 15, 16, and 18, and n = 4 signal-recording electrodes for shanks 1, 2, 3, 6, 7, 8, 13, and 17). f–g 3D visualization of the shank map from the stimulation site to electrode 21 on shank 6 in the first somatic region (f) and electrode 57 on shank 16 in the second somatic region (g) (red dotted line in left panel). Transiently spiking signals of neurons recorded from electrodes 1 and 21 (f), and from electrodes 1 and 57 (g) (right panel), displaying the synaptic latency from the stimulation site. The blue triangle indicates the onset time of the optical stimulation. The light blue indicates transmitted light from shank 1. h Color-mapped mean synaptic latency at each shank. i Scatter plot of the distance from the stimulation site to all electrodes as a function of synaptic latency. The dotted line and shaded area indicate the best linear regression fit and the 95% confidence level, respectively. The slope of the linear regression is the synaptic transmission velocity. j Scatter plot of the distance from the stimulation site to the electrodes along the longitudinal (orange; orange arrow in inset image) and transverse (green; green arrow in inset image) directions as a function of synaptic latency. k Bar graph showing the synaptic transmission velocity along the transverse and longitudinal directions. Data are presented as mean values +/− s.d. with individual data points (white circle; n = 7 signal-recording electrodes for the transverse direction and n = 19 signal-recording electrodes for the longitudinal direction). **P = 0.0010. Statistical significance was tested with a two-tailed unpaired t-test. Source Data is available as a Source Data file for Fig. 7e, h–k. Full size image Interestingly, synaptic transmission along the longitudinal direction (i.e., in the xz-plane at y = 0) took less time than that along the transverse direction (i.e., in the yz-plane at x = 0) (Fig. 7d, h). As a representative example, for two locations approximately 1 mm away from the optical stimulation site, the relative mean signal delays were 4 ms on shank 5 (S5) and 2.35 ms on shank 8 (S8) (Fig. 7d). Accordingly, the slope of a linear regression of the distance from the optical stimulation site against the synaptic latency can be taken as the synaptic transmission velocity. We found synaptic transmission velocities of 230 ± 42.6 mm·s⁻¹ irrespective of direction, 449 ± 106 mm·s⁻¹ along the longitudinal direction, and 202.6 ± 69.8 mm·s⁻¹ along the transverse direction (Fig. 7i, j). The synaptic transmission velocity along the longitudinal direction was thus about twice that along the transverse direction (Fig. 7k). These results indicated that the neurons of the two groups were connected in a complex manner via one or multiple synapses.
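The velocity estimate described above is simply the slope of a linear regression of distance on latency. A minimal sketch with scipy (the sample points are synthetic and chosen only to give a slope of the same order as reported):

import numpy as np
from scipy.stats import linregress

# Synaptic transmission velocity as the slope of a linear regression of
# electrode distance (mm) on first-spike latency (ms) after stimulation onset.
# These data points are synthetic and for illustration only.
latency_ms = np.array([2.0, 2.4, 3.1, 3.8, 4.4, 5.2, 6.1, 7.0, 8.0])
distance_mm = np.array([0.36, 0.5, 0.72, 0.9, 1.0, 1.2, 1.35, 1.6, 1.8])

fit = linregress(latency_ms, distance_mm)
v = fit.slope * 1000.0            # mm/ms -> mm/s
print(f"velocity ~ {v:.0f} mm/s  (r^2 = {fit.rvalue**2:.2f})")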
This directional difference likely results from the presence of the neurite region along the longitudinal signal propagation path in our 3D culture model: the density of synapses in the neurite region was much lower than that in the somatic regions. Synaptic latency represents the time required for signal transmission from the pre-synapse to the post-synapse, which is known to be longer than signal transmission along neurites (more specifically, axons) 58. Therefore, we hypothesized that the synaptic transmission velocity would be inversely proportional to the synaptic density. In other words, because the number of synapses along the longitudinal path (the neurite region) was much smaller, longitudinal transmission was significantly less impeded than transverse transmission. To test this hypothesis, we compared the synaptic latency along the longitudinal direction between the somatic and neurite regions, and confirmed that the longitudinal velocity in the neurite region was faster than that in the somatic regions (Supplementary Fig. 25). Furthermore, we compared the longitudinal and transverse velocities in the first somatic region, where the number of synapses was similar in both directions, and reaffirmed that the synaptic transmission velocities along the two directions were similar (Supplementary Fig. 26). Overall, our 3D multifunctional MEA enabled the analysis of neural network dynamics through direct confirmation of the functional connectivity between two different populations of neurons cultured in 3D with high spatiotemporal resolution. A further application to human-derived organoids Organoids are 3D cell aggregates that acquire architecture and functionality similar to those of a living organ through stem cell self-renewal and self-organization 59. Accordingly, human-derived organoids are in the spotlight as next-generation in vitro models 59,60. However, functional analysis of human-derived brain or spinal cord organoids has still been performed mostly on 2D MEAs 29,30, which are inherently limited for investigating the functional connectivity inside 3D organoid structures. To verify the applicability of our system to human-derived organoids, we used spinal cord organoids derived from human induced pluripotent stem cells (iPSCs) (Supplementary Fig. 27a–c). Considering the organoids' size (~700 μm in diameter), which is similar to the size of each somatic or neurite region, we measured neural activities in a mature (i.e., 3-month-old) organoid using a needle-type MEA integrated with 16 electrodes and microfluidic channels (Supplementary Fig. 27c). We successfully measured neural activities in the organoid from all 16 electrodes (Supplementary Fig. 27d, e). Notably, neural signals inside the organoid were synchronized across most electrodes (Supplementary Fig. 27f), indicating that the neurons in the organoid were robustly interconnected. To confirm the functional connectivity and the capability of monitoring temporal evolution, we injected tetrodotoxin (TTX), which blocks neuronal sodium channels, through the microfluidic channel of the MEA. Immediately after the TTX injection, the organoid's neural activities ceased dramatically (Supplementary Fig. 27d, g), and accordingly, the robust interconnections between neurons also disappeared (Supplementary Fig. 27h and Supplementary Table 5). This demonstration reveals that our multifunctional MEA system can also be applied readily to 3D organoids to monitor and modulate intra-organoid neural activities, which could be further extended to evaluate the safety and efficacy of pharmaceutical candidates with patient-derived organoids.
Discussion The lack of accurate modulation capabilities combined with high-density electrode arrays in existing neurotechnological tools, such as conventional 2D and 3D MEAs, has been a significant barrier to analyzing neural circuit dynamics in developing 3D neural tissues in vitro. To overcome this barrier, we developed a miniaturized system with simultaneous capabilities for 3D culture, daily recording, and local stimulation, all integrated within an incubator. Specifically, we presented a 3D high-density multifunctional MEA system with the functions essential for precise analysis and modulation of neural activities in engineered 3D neural tissues in vitro, including high-density, large-scale electrical recording throughout a 3D volume, local optical stimulation, and drug delivery. The high-density microelectrodes on the 3D shank array structure enabled the investigation of the dynamics of neural networks formed by neuronal connections in vitro, with real-time recording over a two-week culture period. The optical and chemical stimulation capabilities served as an integrated verification tool for functional connectivity between neurons, both in a single group and in two compartmentalized neural groups. Notably, we were able to measure the synaptic latency and the corresponding synaptic transmission speed within a 3D neural tissue in which two different populations of cortical neurons formed functional networks in vitro. Our 3D multifunctional MEA inserted into 3D neural tissues in vitro would also serve as a powerful toolset to study the effects of pharmacological interventions on neural networks, for example, by simultaneously delivering various biochemical factors, such as inter-neuronal receptor agonists and antagonists, through the microfluidic channels and by providing real-time analyses of synaptic activities. These enabling technologies would allow the reconstruction of more complex models of neural circuits and advance the development of human disease models in vitro. We developed the 3D multifunctional MEA primarily for investigating 3D neural circuit dynamics in engineered 3D neural tissues in vitro, which was impossible with our previous 2D multifunctional MEA 26. The 3D high-density electrode array, implemented by applying the stacking method 35, enabled precise mapping of the functional connectivity between neurons across the entire 3D neural tissue volume. We also improved the recording performance compared with the previous device 26 by electroplating Pt-black on the electrodes 36. Furthermore, we integrated a small LED for optical stimulation, which eliminated external light sources and enabled the system's use in an incubator. A two-photon microscopy system is also a useful tool for selectively modulating and measuring neural activities in three dimensions 61. However, such a system has a relatively complex configuration and requires a separate incubation system for application to 3D in vitro models. In addition, it not only has a low temporal resolution (30 fps) but also covers a limited field of view (FOV) of 240 × 240 × 300 μm³ due to image acquisition through the lens, only ~3% of the measurable volume of our system (1850 × 1000 × 300 μm³). Thus, our 3D multifunctional MEA system exhibits more advantageous features for investigating the functionality of in vitro 3D neural networks than previously reported systems.
Depending on the size of the 3D in vitro model, at least the recording shank array of our 3D MEA can be readily scaled up without further complicating the system. However, expanding the number of multifunctional shanks poses a technical hurdle by complicating the overall system. For example, integrating an LED-coupled fiber in each shank requires several optical interfaces, and integrating multiple pumps and tubes into the probe shanks for independent drug delivery requires cumbersome fluidic interfaces. These limitations can be overcome by attaching μLEDs 62,63 to each shank and applying active valves 64,65 for on-device fluidic control. In addition, as neurotechnologies continue to advance, our system may be improved further with a few additional features. For example, a higher number of microelectrodes within the same volume of an engineered neural tissue could enable more sophisticated observations, such as the propagation of axonal action potentials 66. The integration of a light-source array onto the tip of every shank would allow more complex circuit studies in vitro. Furthermore, the integration of a complementary metal-oxide-semiconductor-based (CMOS-based) microelectrode array 67 or a monolithic μLED array 68 into our 3D MEA configuration would extend its applicability to 3D brain models in vitro. Finally, our system can be readily applied to the functional mapping of various neural circuits in vivo, like the previous 2D multifunctional MEA 26. Applying our 3D MEA in vivo would enable the investigation of complex functional connectivity among three or more brain regions. Furthermore, we can derive extensive neuronal information from 3D neural models by analyzing the correlation between structural and functional networks using our 3D MEA system together with advanced 3D volumetric imaging techniques such as tissue clearing 69. In conclusion, we expect our 3D multifunctional MEA to open up opportunities for studies of neural circuits by enabling the investigation of functional neuronal networks in 3D neural models, including in vivo. Methods Fabrication and packaging of the 3D multifunctional MEA The fabrication of the 3D multifunctional MEA is divided mainly into three steps: (1) the fabrication of three 2D multifunctional MEAs, one of which is integrated with the microfluidic channels, through a microelectromechanical systems (MEMS) process; (2) the assembly of the 3D multifunctional MEA by stacking the 2D MEAs; and (3) the formation of electrical, fluidic, and optical interfaces. First, the 2D MEAs integrated with the microfluidic channels were fabricated by our previously developed fabrication process 34. We formed side (25 μm high and 25 μm wide) and center (25 μm high and 10 μm wide) cavities in a four-inch silicon-on-insulator (SOI) wafer with a 40 μm-thick top silicon layer using deep reactive ion etching (DRIE). In a vacuum, the SOI wafer with the cavities was bonded anodically to a 500 μm-thick borosilicate glass wafer (Borofloat® 33, Schott). The glass wafer was thinned by 100 μm by chemical mechanical polishing (CMP), followed by reflowing at 750 °C for 90 min using rapid thermal annealing (RTA; KVR-4000, Korea Vacuum Tech, Ltd.) to partially fill the cavities, which produced the microfluidic channels underneath the embedded glass layer. After the reflow of the glass, excess glass was removed by CMP. Next, a 400 nm-thick first passivation layer (SiO2) was deposited.
Then, a 20 nm-thick titanium (Ti) layer and a 300 nm-thick gold (Au) layer were deposited sequentially, patterned, and etched to form the signal lines. A second passivation layer (SiO2) was then deposited, and the microelectrode areas were opened by reactive ion etching (RIE). The microelectrode areas were patterned selectively with Ti and platinum (Pt) by depositing a 20 nm-thick Ti layer and a 150 nm-thick Pt layer, followed by a lift-off process. After patterning of the top Si layer in the shape of the 2D MEA, the structure was released from the backside by DRIE. Second, the 3D multifunctional MEA was formed by stacking and bonding the 2D MEAs. Each 2D MEA was designed with a different body size to provide sufficient bonding margins; for example, the body of the first-layer MEA at the bottom was 3 mm wider than that of the second-layer MEA. The second-layer MEA was bonded manually onto the first-layer MEA using a fast-curing epoxy under a microscope. Likewise, the third layer was glued to the second layer. Last, the fabricated 3D multifunctional MEA was packaged to provide fluidic, electrical, and optical interfaces. First, we fabricated a PDMS microfluidic chip as a fluidic interface bridging the inlet of the MEA and the drug delivery system. A degassed mixture of an elastomer base and a curing agent at a weight ratio of 10:1 was poured into a metal mould with a bottom surface patterned with a fluidic pathway and cured at 80 °C for 1 h. The peeled-off PDMS chip was bonded onto the body of the 3D MEA after oxygen plasma treatment using a plasma generator (Covance-MP, Femto Science). Then, the 3D MEA was bonded onto a custom PCB using the fast-curing epoxy for electrical and mechanical connections with the microdrive system. The electrical pads on the body of the 3D MEA were wire-bonded to pads on the custom PCB, and two flexible printed circuit (FPC) connectors were soldered on for electrical connections between the 3D MEA and the Intan recording system (RHD USB interface board with RHD 64-Channel Recording Headstages, Intan Technologies). We thinned a multimode optical fiber with a 50 μm-diameter core and a 125 μm-diameter cladding (GIF50, Thorlabs) in 49% [w/w] hydrofluoric acid (HF) solution to a diameter of about 60 μm. The thinned optical fiber was aligned on the shank embedded with the microfluidic channels under a microscope and then fixed using a UV-curable epoxy (NOA 148, Norland Products, Inc.). To provide optical stimulation in an incubator, we coupled a small blue LED (XQ-E High Intensity LED, Cree, Inc.) to the end of the fiber and filled the gap between the optical fiber and the LED with the UV-curable epoxy as a refractive-index-matching material. Then, black ink was applied over the UV-curable epoxy to block any light leaking from the LED. Finally, a biocompatible epoxy (EPO-TEK 320, Epoxy Technology, Inc.) was applied to the body of the 3D MEA to protect the bonding wires. Characterizations of the 3D multifunctional MEA To electroplate Pt-black on the Pt microelectrodes and thereby enhance their effective surface areas, we used a mixture of 3% [w/v] hexachloroplatinic acid hydrate (HCPA), 0.025 N HCl, and 0.025% [w/v] lead acetate in deionized (DI) water as the electroplating solution 36, in which the 3D MEA, a Pt wire, and an Ag/AgCl wire were immersed.
With a three-electrode configuration (working electrode (WE): the 63 Pt microelectrodes; counter electrode (CE): Pt wire; reference electrode (RE): Ag/AgCl wire), Pt-black particles were electroplated selectively on the Pt microelectrodes of the 3D MEA by applying an electrical potential (0.2 V from the CE to the WE, 35 s) using a potentiostat (PalmSens3, PalmSens). After electroplating, we characterized the functions of the 3D multifunctional MEA using the measurement setup reported previously 26. Briefly, to measure the impedance of the Pt and Pt-black microelectrodes, electrochemical impedance spectroscopy (EIS) was performed in 1× phosphate-buffered saline (PBS) with a saturated calomel electrode (CHI 151, CH Instruments, Inc.). The impedance of the 63 microelectrodes was measured in a frequency sweep mode (10 Hz–10 kHz) using an impedance analyzer (nanoZ, Neuralynx). We used a pressure-driven drug delivery system for fast response times. To measure flow rates through the embedded microfluidic channels of the 3D MEA, we connected the interfacial PDMS chip on the 3D MEA to mass flow controllers (MFC, National Instruments) through Tygon tubing (ID: 0.5 mm, OD: 1.5 mm; S-54-HL) and a 23-gauge needle. To adjust the input pressure precisely, we connected an electro-pneumatic regulator (ITV0051-2BL, SMC Pneumatics) to a nitrogen tank. By injecting 1× PBS through the microfluidic channels of the 3D MEA in three different environments (air, 0.25% [w/v] neuron-seeded collagen, and 0.25% [w/v] cell-free collagen), we measured the distances the liquid moved in the tubing. To measure the optical power output from the end of the fiber on the 3D MEA, we used a photodetector (918D, Newport, Inc.) coupled with an optical power meter (1936-R, Newport, Inc.). The end of the fiber was placed near the photodetector, and we measured the output power while the LED remained turned on. Fluctuations in the measured power were within ±0.002 mW. We used a Monte Carlo simulation to profile the distribution of the light transmitted from the fiber 39,40. We simulated a collagen scaffold with a domain of 383 × 250 × 120 voxels and a voxel size of 0.3 × 0.3 × 0.3 mm³. We applied an absorption coefficient of 0.3 mm⁻¹, a scattering coefficient of 29 mm⁻¹, an anisotropy of 0.89, and a refractive index of 1.34. A light source was located at voxel (83, 83, 0) with a light angle of 21.9°. The light source launched 3.7 × 10¹¹ photons·ms⁻¹, corresponding to 0.15 mW at 473 nm.
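For illustration, a stripped-down version of such a photon-transport simulation can be written in a few dozen lines. The sketch below uses the optical coefficients given above, samples scattering from the Henyey–Greenstein phase function, and bins absorbed photon weight by depth, but omits the voxel grid, boundaries, and refractive-index mismatch of the full simulation in refs 39,40 (the 21.9° value is assumed here to be the emission half-angle):

import numpy as np

rng = np.random.default_rng(0)
MU_A, MU_S, G = 0.3, 29.0, 0.89        # mm^-1, mm^-1, anisotropy (from above)
MU_T = MU_A + MU_S
HALF_ANGLE = np.deg2rad(21.9)           # assumed emission half-angle

z_edges = np.linspace(0.0, 0.6, 61)     # depth bins (mm)
absorbed = np.zeros(len(z_edges) - 1)

def launch():
    # Initial direction, uniform over the fiber emission cone.
    cos_t = 1 - rng.random() * (1 - np.cos(HALF_ANGLE))
    sin_t = np.sqrt(1 - cos_t ** 2)
    phi = 2 * np.pi * rng.random()
    return np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

def scatter(d):
    # New direction sampled from the Henyey-Greenstein phase function.
    tmp = (1 - G * G) / (1 - G + 2 * G * rng.random())
    cos_t = (1 + G * G - tmp * tmp) / (2 * G)
    sin_t = np.sqrt(max(0.0, 1 - cos_t ** 2))
    psi = 2 * np.pi * rng.random()
    ux, uy, uz = d
    if abs(uz) > 0.99999:
        return np.array([sin_t * np.cos(psi), sin_t * np.sin(psi),
                         np.sign(uz) * cos_t])
    den = np.sqrt(1 - uz * uz)
    return np.array([
        sin_t * (ux * uz * np.cos(psi) - uy * np.sin(psi)) / den + ux * cos_t,
        sin_t * (uy * uz * np.cos(psi) + ux * np.sin(psi)) / den + uy * cos_t,
        -sin_t * np.cos(psi) * den + uz * cos_t,
    ])

for _ in range(20000):
    pos, d, w = np.zeros(3), launch(), 1.0
    while w > 1e-4:
        pos = pos + d * (-np.log(1.0 - rng.random()) / MU_T)   # free path
        if not (0.0 <= pos[2] < 0.6):
            break                                    # photon leaves the slab
        i = np.searchsorted(z_edges, pos[2], side="right") - 1
        dw = w * MU_A / MU_T
        absorbed[i] += dw                            # deposit absorbed weight
        w -= dw
        d = scatter(d)

profile = absorbed / absorbed.max()
for k in range(0, 60, 10):
    print(f"z ~ {z_edges[k]*1000:3.0f} um : relative absorption {profile[k]:.2f}")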
Configuration of the 3D multifunctional MEA system We devised a miniaturized cubicle for measuring neural activities in the growing 3D neural network model in an incubator. The miniaturized incubating structure consisted of (1) a custom-designed microdrive, (2) a PDMS culture chamber with a well, and (3) an acrylic enclosure. The custom microdrive was fabricated by mechanical machining and was composed of moving and supporting parts; the entire structure was made of stainless steel to prevent corrosion in a CO2 incubator. The moving part had a mover (31 × 5 × 10 mm³) with two 1 mm holes to fix the 3D MEA, a 20 × 1.5 mm screw in the center for controlling the height of the mover, and two supporting cylinders on both sides. The maximum working distance of the mover was 20 mm, and one revolution of the screw produced a 0.3 mm vertical movement. The supporting part tightly fixed the moving part and integrated the culture chamber with the 3D MEA. The culture chamber was made of PDMS; the cell culture volume was 2.5 × 1.5 × 0.5 mm³, and the overall size was 20 × 20 × 10 mm³. The culture chamber was positioned below the 3D MEA during horizontal alignment on a microscope, followed by gluing with uncured PDMS. This step was essential for locating the 3D MEA within the compartmentalized culture area. In particular, we placed six shanks in each area (i.e., the two somatic regions and the neurite region), each separated by a PET film. We then controlled the position of the PDMS chamber with a resolution better than 50 μm by pushing the chamber with a linear actuator (M-561D, Newport) mounted on a custom structure along the x- and y-axes on the microscope. Also, to precisely control the height of the 3D MEA in the 3D neural network model, we lowered the MEA using the custom microdrive under the microscope until it touched the bottom surface of the culture chamber and bent slightly, and then slowly raised it until it was fully straightened. As a result, the tip of the 3D MEA was in contact with the bottom surface of the culture chamber. The acrylic enclosure (10 × 8 × 8 cm³) prevented undesirable evaporation of the culture medium and contamination, and it was large enough to accommodate the custom microdrive with the culture chamber and the 3D MEA. Top and bottom holes were used to insert the FPC cable and to fix the microdrive, respectively. Two holes (1 mm in diameter) were drilled through each side of the enclosure for supplying O2 and CO2. Staining of collagen microfibrils To observe the distribution of collagen microfibrils when the 3D MEA was inserted before or after collagen loading, we stained the collagen microfibrils with 5-(and-6)-carboxytetramethylrhodamine succinimidyl ester (TAMRA; Invitrogen) as reported previously 13. Briefly, 5 μM TAMRA in 1× PBS was applied to the culture chamber. After incubation at room temperature for 1 h, the culture chamber was rinsed three times with 1× PBS. On a confocal laser scanning microscope (LSM 800, Carl Zeiss), we observed the stained collagen microfibrils near a shank of the 3D MEA and acquired z-stacked images (stack size: 25 μm; step size: 1 μm). Preparation of two types of models of 3D neural networks Pregnant Sprague Dawley (SD) rats (embryonic day 18; E18) were purchased (DBL Co., Ltd.) and sacrificed for the 3D neural cultures. We followed a previously reported protocol 13 for harvesting the rats' primary cortical neurons. Briefly, embryos from the pregnant SD rats were decapitated, and the cerebral cortex was dissected and removed. The extracted cerebral tissue was treated with papain solution to dissociate the cells. The dissociated cells were then counted to calculate the volume of cell suspension required for the desired seeding density in collagen. Then, 0.5 mL of collagen solution (2.5 mg·mL⁻¹) was prepared in an ice bucket. Specifically, 133 μL of a collagen stock (354249, Collagen type I rat tail high concentration, 9.40 mg·mL⁻¹, CORNING) was transferred to a 1.5 mL microtube, 50 μL of 10× Dulbecco's Modified Eagle Medium (DMEM; Sigma-Aldrich) and 207 μL of 1× DMEM were added sequentially, and the components were mixed thoroughly. For the cell-free collagen, 307 μL of 1× DMEM was added instead. Then, 10 μL of 0.5 N NaOH was added to neutralize the solution, i.e., to bring the pH to ~7.
100 μL of the cell suspension were added to reach a seeding density of 4 × 10 7 cells ⋅ mL −1 , similar to the cell density in vivo 41 , and this was followed by gentle mixing. For the single-group 3D neural network model, the neuron-seeded collagen was loaded in the cell culture well and then gelated completely in the CO 2 incubator at 37 °C for 30 min. For the two-group 3D neural network model, two 125 μm-thick polyester (PET) films were inserted into grooves in the cell culture region to create three temporary compartments. The cell-free collagen was loaded in the central compartment, and it was gelated partially in the CO 2 incubator at 37 °C for 20 min. Then, the neuron-seeded collagen was loaded in the side compartments and was gelated in the CO 2 incubator at 37 °C for 20 min. Finally, the PET films were removed carefully, and 1.5 mL of the culture medium, consisting of Neurobasal Plus medium supplemented with 2% [v/v] B27 Plus supplement (Invitrogen), 2 mM Glutamax-I (GIBCO), and 1% [v/v] penicillin-streptomycin (P/S; GIBCO), was applied to the cell culture chamber. Half of the medium was replaced with fresh medium after two days for the uninfected cell cultures (Figs. 2 – 3 and Supplementary Figs. 9 – 15 ). Subsequently, the medium was replaced entirely with fresh medium daily to provide sufficient nutrients for the cells. For the ChR2-infected cultures (Figs. 4 – 7 and Supplementary Figs. 16 – 26 ), the culture medium was fully replaced after one day with fresh medium that contained 5 μL of AAV-EF1α-hChR2(H134R)-eGFP virus (1.25 × 10 12 GC ⋅ mL −1 , KIST Virus Facility). After two more days, half of the medium was replaced with fresh medium. Subsequently, the medium was replaced entirely with fresh medium daily to provide sufficient nutrients for the cells. We observed the ChR2-infected neurons, which emitted green fluorescence, on a confocal laser scanning microscope (LSM 800, Carl Zeiss) at DIV 6 and DIV 14, and z -stacked images were acquired (stack size: 25 μm; step size: 1 μm). In this work, we cultured the neurons for up to 14 days to observe maturation-dependent changes in connectivity among neurons, because we started observing cell death after 14 days due to the lack of nutrient supply. Because of both the higher cell seeding density compared with other typical 3D in vitro neural models 13 , 15 , 22 , 23 and the continuous proliferation of the small fraction of non-neuronal cells (e.g., glia) included during the isolation of primary neurons from embryonic brains, we needed to change the culture medium more frequently to supply sufficient nutrients after 14 days. We note that the 3D culture for up to 14 days was sufficient for synapse formation. However, the culture period could be extended to 28–60 days by lowering the cell seeding density in collagen, as previously reported 13 . Generation of human spinal cord organoids The generation of human spinal cord organoids (hSCOs) was performed according to a previously described protocol 70 . Briefly, hiPSC colonies were treated with SB431542 (10 μM, TOCRIS, 1614) and CHIR99021 (3 μM, SIGMA, SML1046) for 3 days to induce caudal neural stem cells 71 . On day 3 of the chemical treatments, the colonies were gently detached from the dish and allowed to form neural spheroids in medium supplemented with basic fibroblast growth factor (bFGF) for 4 days. Subsequently, the neural spheroids were cultured for an additional 8 days in medium containing retinoic acid (RA) without bFGF.
Thereafter, the neural spheroids were progressively matured into spinal cord-like organoids in the maturation medium (a 1:1 mixture of DMEM/F-12 and neurobasal medium (Life Technologies, 21103-049); the medium contained 0.5% N2, 2% B27, 0.5% NEAA, 1% P/S, 0.1% β-mercaptoethanol, 1% GlutaMAX (Life Technologies, 35050-061), and 0.1 μM RA). Live/dead cell viability assay Cell viability was assessed from samples of the single-group 3D neural model at DIV 14. The samples were submerged in 1× PBS containing 0.5 μg ⋅ mL −1 calcein-acetoxymethyl (calcein-AM; Sigma-Aldrich) and 2 μg ⋅ mL −1 propidium iodide (PI; Sigma-Aldrich) in the CO 2 incubator at 37 °C for 30 min. After washing with 1× PBS three times for 30 min each time, we acquired z -stacked images (stack size: 50 μm; step size: 1 μm) showing live (green-fluorescent) and dead (red-fluorescent) cells on the confocal laser scanning microscope (LSM 800, Carl Zeiss). Immunofluorescence staining and imaging To visualize the structural connectivity of the single-group and two-group 3D neural network models at DIV 3, 6, and 14, we stained neurites with a neuron-specific class III beta-tubulin antibody (mouse anti-Tuj-1, 1:200, T8678, Sigma-Aldrich) and astrocytes with a glial fibrillary acidic protein antibody (chicken anti-GFAP, 1:200, AB5541, Sigma-Aldrich). First, the samples were fixed in 4% [w/v] paraformaldehyde (PFA) in 1× PBS for 4 h at room temperature on a shaker. After washing with 1× PBS five times, each for 30 min, the samples were blocked in a blocking solution that contained 0.1% [v/v] Triton X-100 and 3% [w/v] bovine serum albumin (BSA) in 1× PBS for 24 h at 4 °C on the shaker. After washing with 1× PBS three times, each for 30 min, the samples were incubated with the primary antibodies (Tuj-1 and GFAP) in the blocking solution for 48 h at 4 °C on the shaker. After washing with 1× PBS five times, each for 30 min, the samples were incubated with secondary antibodies (goat anti-mouse conjugated Alexa Fluor 488, 1:200, A-11001, Invitrogen; goat anti-chicken conjugated Alexa Fluor 647, 1:200, ab150171, Abcam) in the blocking solution for 24 h at 4 °C on the shaker. After washing with 1× PBS five times, each for 30 min, the samples were also incubated with 4′,6-diamidino-2-phenylindole (DAPI; 1:1000, D1306, Invitrogen) in the blocking solution for 6 h at room temperature on the shaker. Finally, after washing with 1× PBS five times, each for 30 min, the samples were mounted on glass slides. Fluorescence images were acquired through a ×20 objective lens on the confocal laser scanning microscope (LSM 800, Carl Zeiss). 3D images of the two-group 3D neural network model were acquired by rendering z -stacked (stack size: 75 μm; step size: 1 μm) and tile-scanned images. To visualize the neuronal and non-neuronal populations in the human-derived spinal cord organoids, we fixed the organoids in 4% [w/v] PFA in 1× PBS at 4 °C overnight on a shaker. After washing with 1× PBS five times, each for 10 min, the fixed organoids were incubated in 30% [w/v] sucrose in 1× PBS at 4 °C on the shaker. After freezing on dry ice, each organoid was sectioned to 40-μm thickness. The sliced organoids were incubated in a blocking solution that contained 0.2% [v/v] Triton X-100 and 3% [w/v] BSA in 1× PBS at room temperature for 1 h on the shaker.
Then, the sliced organoids were incubated with the following primary antibodies in the blocking solution at 4 °C overnight on the shaker: NeuN (rabbit anti-NeuN, 1:1000, ABN78, Millipore), microtubule-associated protein 2 (MAP2; chicken anti-MAP2, 1:5000, AB5543, Millipore), neurofilament-M (NF-M; mouse anti-NF-M, 1:250, 2H3, DSHB), and, for astrocytes, glial fibrillary acidic protein (GFAP; rat anti-GFAP, 1:500, 13-0300, Invitrogen). After washing with 1× PBS five times, each for 10 min, the primary antibody-conjugated organoids were incubated with the following secondary antibodies in the blocking solution for 30 min at room temperature on the shaker: donkey anti-rabbit conjugated Cy3, 1:500, 711-165-152, Jackson; donkey anti-chicken conjugated Alexa Fluor 488, 1:500, 703-545-155, Jackson; donkey anti-mouse conjugated Alexa Fluor 488, 1:500, A21202, Invitrogen; donkey anti-rat conjugated Cy3, 1:500, 712-166-150, Jackson. After washing with 1× PBS five times, each for 10 min, the secondary antibody-conjugated organoids were also incubated with Hoechst 33342 (1:1000) in the blocking solution at room temperature for 30 min on the shaker. Finally, after washing with 1× PBS five times, each for 10 min, the immunostained organoids were mounted on glass slides. Fluorescence images were acquired through a ×25 objective lens on a confocal laser scanning microscope (TCS SP8, Leica). Electrophysiology We measured the neural activity daily from the single-group and two-group 3D neural network models shortly before replacing the culture medium with fresh medium, because neurons often stop firing for several hours after the medium is exchanged 72 . Each day, we recorded neural activity for 10 min: spontaneous activity during the first 5 min and signals evoked by local optical stimulation during the next 5 min. All electrophysiological recordings were performed in the CO 2 incubator at 37 °C through an RHD USB interface board with the RHD 64-channel recording headstage. The recorded signals were filtered and digitized through the Intan USB interface board software (20 kS ⋅ s −1 per channel, 300 Hz high-pass filter, 6 kHz low-pass filter). For optical stimulation of the 3D neural model, we used a custom-designed LED driver that adjusted the stimulation cycle and pulse width through slide switches. The custom LED driver was connected to the stimulation pin of the 3D MEA with thin electrical wire for light delivery through the LED-coupled fiber, and to the digital input port of the RHD USB interface board for reading the stimulation time. The applied stimulation frequency was 0.2 Hz, and the pulse width was 2.5 s with a duty cycle of 50%. For chemical stimulation of the 3D neural model, we used the pressure-driven drug delivery system, a USB-powered small power supply (Analog Discovery 2, Digilent, Inc.), and an 11.1 V Li–Po battery. The small power supply and the battery were connected to the electro-pneumatic regulator of the pressure-driven drug delivery system for pressure control and power supply. Also, the small power supply was connected to the digital input port of the RHD USB interface board for reading the injection time. Using the settings above, the drugs were injected into the 3D neural culture through the microfluidic channels of the 3D MEA. In Fig. 4 , after observing the response of neurons to optical stimulation, we injected 1 μL of culture medium mixed with 20 μM CNQX (0190, Tocris Bioscience) and 50 μM AP5 (0106, Tocris Bioscience) at a flow rate of 0.25 μL min −1 for 4 min.
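For concreteness, the stimulation timing above corresponds to the following pulse train (a minimal sketch; variable names are ours):

import numpy as np

FS = 20_000                        # recording rate, 20 kS/s per channel
PERIOD, WIDTH = 5.0, 2.5           # 0.2 Hz optical stimulation, 50% duty cycle
t = np.arange(0, 5 * 60, 1 / FS)   # the 5-min stimulation epoch
led_on = (t % PERIOD) < WIDTH      # True while the LED pulse is high

# drug delivery bookkeeping: 0.25 uL/min for 4 min delivers the stated 1 uL
assert abs(0.25 * 4 - 1.0) < 1e-9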
After an additional 30 min in the incubator to allow sufficient diffusion of CNQX/AP5, we resumed recording neural signals with optical stimulation to observe the changes in neural activity. Then, to remove the remaining CNQX/AP5 in the culture chamber, the culture medium was entirely replaced with fresh medium three times in a fume hood. Furthermore, we waited for 1 h to allow the neurons to stabilize. After removing CNQX/AP5, we again recorded neural signals with optical stimulation to observe the recovery of neural activity. To monitor neural activities in the human-derived spinal cord organoid, we fixed a customized PDMS chamber (diameter of 30 mm and height of 10 mm) with a groove (diameter of 1 mm and height of 0.3 mm) on the bottom plate of the microdrive. After placing the mature (3-month-old) organoid in the groove under a microscope, we slowly inserted the MEA into the organoid using the microdrive. Once it was inserted correctly, we fixed the microdrive on the acrylic enclosure and measured neural activities in the organoid inside an incubator. After measuring spontaneous activities for 3 min, we injected 6 μM TTX through the microfluidic channels at a flow rate of 0.25 μL min −1 for 3 min to suppress the neural activities. After the TTX injection, we continued measuring the neural activities for an additional 3 min to observe their temporal evolution. Signal analysis A previously reported MATLAB spike-sorting algorithm 26 was used to detect neural spikes. We calculated the signal-to-noise ratio (SNR) by dividing the mean of the peak amplitude by the standard deviation of the background noise. Then, we set an amplitude threshold at three times the noise level (approximately 50 μV) and extracted the neural signal data. Each signal was displayed on a raster plot, a color-mapped raster plot, and a 2D/3D electrode map. Also, each signal was counted and displayed on a bar plot. Burst activity was analyzed using the ISI N -threshold method 73 with an ISI threshold of 0.1 s and a minimum number of spikes per burst of 3. All statistical analyses were evaluated by Student’s t -test using GraphPad Prism. We analyzed the synchronization scores between the electrodes and networks based on a previously reported method 32 . Synchrony between electrodes was analyzed using PySpike. Briefly, the more closely the spikes from two electrodes matched, the closer the synchronization score was to 1, representing a high degree of synchrony. Conversely, the greater the mismatch between the spikes from two electrodes, the closer the synchronization score was to 0, representing a high degree of asynchrony. For visualization of the communities among neurons, we represented the network among the electrodes using the Louvain algorithm with custom code. We set each electrode as a node (e.g., the circles in the network map) and the degree of synchronization between electrodes as an edge (e.g., the lines in the network map). We matched the position of each node with the position of the actual electrode. Also, links with synchronization scores less than 0.5 were filtered out. Nodes with the same color (i.e., electrodes) represented the same community network. The color-mapped synchronization score ranged from 0 (blue) to 1 (red). In addition, the larger the number of connected electrodes, the larger the size of the node.
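The synchrony-and-community pipeline above can be reproduced with the packages named in the Code availability section (PySpike and python-louvain). The sketch below runs it on synthetic spike trains; the electrode count and spike times are stand-ins, but the 0.5 score cutoff matches the analysis described here.

import numpy as np
import pyspike as spk
import networkx as nx
import community as community_louvain     # the python-louvain package

rng = np.random.default_rng(1)
edges = (0.0, 60.0)                       # a 60 s recording window

# four electrodes sharing jittered copies of one train (a synchronous group)
base = np.sort(rng.uniform(0, 60, 120))
trains = [spk.SpikeTrain(np.clip(np.sort(base + rng.normal(0, 0.002, base.size)), 0, 60), edges)
          for _ in range(4)]
# four independent electrodes (asynchronous)
trains += [spk.SpikeTrain(np.sort(rng.uniform(0, 60, 120)), edges) for _ in range(4)]

sync = spk.spike_sync_matrix(trains)      # pairwise SPIKE-synchronization in [0, 1]

# electrodes as nodes; synchronization scores >= 0.5 as weighted links
g = nx.Graph()
g.add_nodes_from(range(len(trains)))
for i in range(len(trains)):
    for j in range(i + 1, len(trains)):
        if sync[i, j] >= 0.5:
            g.add_edge(i, j, weight=sync[i, j])

print(community_louvain.best_partition(g))   # Louvain communities (node -> id)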
Ethical statements All procedures except the human-derived pluripotent stem cell (PSC)-related experiments were conducted according to the animal welfare guidelines approved by the Institutional Animal Care and Use Committee of the Korea Institute of Science and Technology. The human PSC-related experiments were approved by the Korea University Institutional Review Board. Statistical analysis All statistical analyses were performed in MATLAB (MathWorks), Python (Python Software Foundation), or GraphPad Prism (GraphPad Software), using Student’s t -test. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The authors declare that all data supporting the findings of this study are available within the article and its supplementary information files or from the corresponding author upon reasonable request. Source data are provided with this paper. Code availability The data analysis for this study was performed mainly using Python 3.7 and the open source packages PySpike and python-louvain, which are openly available. Custom code used for visualization of the 3D network maps is freely available.
The human brain is less accessible than other organs because it is covered by a thick, hard skull. As a result, researchers have been limited to low-resolution imaging or analysis of brain signals measured outside the skull. This has proved to be a major hindrance in brain research, including research on developmental stages, causes of diseases, and their treatments. Recently, studies have been performed using primary neurons from rats or human-derived induced pluripotent stem cells (iPSCs) to create artificial brain models that have been applied to investigate brain developmental processes and the causes of brain diseases. These studies are expected to play a key role in unlocking the mysteries of the brain. In the past, artificial brain models were created and studied in 2D; however, in 2017, a research team from KIST developed a 3D artificial brain model that more closely resembled the real brain. Unfortunately, due to the absence of an analytical framework for studying signals in a 3D brain model, studies were limited to analyses of surface signals or had to reform the 3D structure into a flat shape. As such, tracking neural signals in a complex, interconnected artificial network remained a challenge. The Korea Institute of Science and Technology (KIST) announced that the research teams of Doctors Il-Joo Cho and Nakwon Choi have developed an analysis system that can apply precise non-destructive stimuli to a 3D artificial neural circuit and measure neural signals in real time from multiple locations inside the model at the cellular level. The 3D multifunctional system for measuring neural signals takes the form of an array of 50-μm-wide needle-shaped silicon probes (about half the width of a human hair) integrated with 63 microelectrodes. When this system is inserted into the artificial brain model, it is capable of simultaneously measuring signals from multiple locations inside the neural circuit. The probe contains an optical fiber and drug-delivery channels, enabling precise stimulation of neurons using light or drugs. By measuring functional changes in the brain model in response to these stimuli, the model can be used to study brain function and brain diseases. A researcher at KIST examines the three-dimensional multifunctional electrode chip developed by Dr. Il-Joo Cho and Dr. Nakwon Choi. Credit: Korea Institute of Science and Technology (KIST) Using this system to stimulate neural circuits in the artificial brain model optically and simultaneously measure the spread of the response signal at multiple locations, the research team demonstrated that the propagation speed of neural signals differed according to direction inside the 3D brain model. In addition to structural brain maps, which can be constructed using electron microscopy, this study demonstrated the possibility of constructing 3D functional brain maps that show how different circuits are functionally connected within complex artificial brain networks. Dr. Choi, from KIST, stated, "The newly developed system allows us to study various developmental brain disorders and the causes of and treatments for brain diseases." Co-PI Dr. Cho added, "This system enables functional measurements from 3D artificial brain models, which was previously impossible. We expect that the development of this system will help to radically reduce the time required to develop drugs or treatments for various brain diseases."
10.1038/s41467-020-20763-3
Space
Discovered: Fast-growing galaxies from early universe
Rapidly star-forming galaxies adjacent to quasars at redshifts exceeding 6, Nature (2017). nature.com/articles/doi:10.1038/nature22358 Journal information: Nature
http://nature.com/articles/doi:10.1038/nature22358
https://phys.org/news/2017-05-fast-growing-galaxies-early-universe.html
Abstract The existence of massive (10 11 solar masses) elliptical galaxies by redshift z ≈ 4 (refs 1 , 2 , 3 ; when the Universe was 1.5 billion years old) necessitates the presence of galaxies with star-formation rates exceeding 100 solar masses per year at z > 6 (corresponding to an age of the Universe of less than 1 billion years). Surveys have discovered hundreds of galaxies at these early cosmic epochs, but their star-formation rates are more than an order of magnitude lower 4 . The only known galaxies with very high star-formation rates at z > 6 are, with one exception 5 , the host galaxies of quasars 6 , 7 , 8 , 9 , but these galaxies also host accreting supermassive (more than 10 9 solar masses) black holes, which probably affect the properties of the galaxies. Here we report observations of an emission line of singly ionized carbon ([C ii ] at a wavelength of 158 micrometres) in four galaxies at z > 6 that are companions of quasars, with velocity offsets of less than 600 kilometres per second and linear offsets of less than 100 kiloparsecs. The discovery of these four galaxies was serendipitous; they are close to their companion quasars and appear bright in the far-infrared. On the basis of the [C ii ] measurements, we estimate star-formation rates in the companions of more than 100 solar masses per year. These sources are similar to the host galaxies of the quasars in [C ii ] brightness, linewidth and implied dynamical mass, but do not show evidence for accreting supermassive black holes. Similar systems have previously been found at lower redshift 10 , 11 , 12 . We find such close companions in four out of the twenty-five z > 6 quasars surveyed, a fraction that needs to be accounted for in simulations 13 , 14 . If they are representative of the bright end of the [C ii ] luminosity function, then they can account for the population of massive elliptical galaxies at z ≈ 4 in terms of cosmic space density. Main We used the Atacama Large Millimeter Array (ALMA) to survey the fine-structure line of singly ionized carbon ([C ii ] at 158 μm) and its underlying continuum emission in high-redshift quasars in the southern sky (declination of less than 15°). The [C ii ] line, a strong coolant of the interstellar medium, is the brightest far-infrared emission line at these frequencies 9 , 15 , 16 . It arises ubiquitously in galaxies and is therefore an ideal tracer of gas morphology and dynamics in quasar hosts. The far-infrared continuum emission is associated with the light from young stars that has been reprocessed by dust and is therefore a measure of the dust mass and puts constraints on the star-formation rate of the host galaxies. The parent sample includes 35 luminous (rest-frame 1,450-Å absolute magnitude of less than −25.25 mag) quasars at z > 5.95 (for which the redshifted [C ii ] line would fall in ALMA band 6), most of which were selected from the Pan-STARRS1 survey 17 ; of these, 25 targets were observed with ALMA, all in single pointings with similar depth (0.6–0.9 mJy per beam per 30 km s −1 channel). The survey resulted in a very high detection rate (>90%) in both the continuum and the line emission from the host galaxies of the quasars. We searched the data cubes (in projected sky position and frequency or redshift) for additional sources in the quasar fields.
The field of view of ALMA at these frequencies is about 25″, or 140 physical kiloparsecs at the mean redshift of the quasars (assuming a Lambda cold dark matter cosmology with Hubble constant H 0 = 70 km s −1 Mpc −1 , mass density Ω m = 0.3 and vacuum density Ω Λ = 0.7). The detection algorithm and strategy follows previous work with ALMA data 18 . We imposed a conservative significance threshold of 7 σ (corresponding to a [C ii ] luminosity of L [C ii ] ≈ 10 9 L ⊙ , where L ⊙ is the luminosity of the Sun), which excludes any contamination from noise peaks. This search resulted in the discovery of four bright line-emitting sources around four of the targeted quasars ( Fig. 1 ). The modest frequency differences with respect to the nearby quasars, the brightness of the lines compared to the underlying continua, and the lack of optical and near-infrared counterparts (which suggests that the companion sources reside at high redshift; see Fig. 1 ) imply that the detected lines are also [C ii ]. Furthermore, chance alignments of low-redshift CO emitters are expected to be more than 20 times rarer at these fluxes 18 . These newly detected galaxies are also seen (at various degrees of significance) in their dust continuum emission. The line and continuum fluxes are comparable to, and in some cases even brighter than, those of the quasars (see Table 1 ), although the companion sources are not detected in near-infrared images (which sample the rest-frame ultraviolet emission). Any potential accreting supermassive black holes in these companions would therefore be at least one order of magnitude fainter than the quasars, or strongly obscured (see Fig. 1 ). Figure 1: Images and spectra of the quasars and their companion galaxies discovered in this study. a , The dust continuum at 1.2 mm from ALMA is shown by red contours, which mark the ±2 σ , ±4 σ , ±6 σ , … isophotes, with σ = (81, 86, 65, 63) μJy per beam (left to right). The images were obtained with natural weighting, yielding beams of 1.20″ × 1.06″, 0.74″ × 0.63″, 1.24″ × 0.89″ and 0.85″ × 0.65″ (left to right), shown as black ellipses. The grey scale shows the near-infrared images of the Y- + J- (left) or J-band (otherwise) flux of the fields, obtained with (left to right) the WFC3 instrument on the Hubble Space Telescope, the LUCI camera on the Large Binocular Telescope (LBT), the SofI instrument on the European Southern Observatory (ESO) New Technology Telescope or the GROND instrument on the Max Planck Gesellschaft (MPG)/ESO 2.2-m telescope. The quasars are clearly detected in their rest-frame ultraviolet emission, which is probed by these images, but their companion galaxies are not, implying that any accreting black holes, if present, are either intrinsically faint or heavily obscured. b , The continuum-subtracted ALMA [C ii ] line maps are shown as black contours, which mark the ±2 σ , ±4 σ , ±6 σ , … isophotes, with σ = (0.13, 0.11, 0.15, 0.03) Jy km s −1 per beam (left to right). The colour scale shows the image of the 1.2-mm continuum flux density. Black ellipses are as in a . The width of each image in a and b corresponds to 15″ (about 80 kpc at the redshift of the quasars). c , Spectra of the [C ii ] emission and underlying continuum emission of the quasars and their companions. The channels used to create the [C ii ] line maps are highlighted in yellow. The spectra are modelled as a flat continuum plus a Gaussian line (red lines). 
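The angular-to-physical conversion quoted above (25″ ≈ 140 physical kpc) follows directly from the adopted cosmology; a minimal check with astropy is sketched below (the exact mean redshift of the sample is assumed here to be z ≈ 6.1):

from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)    # the adopted Lambda-CDM parameters
z = 6.1                                   # roughly the mean redshift of the sample
scale = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
print(25 * u.arcsec * scale)              # ~140 proper kpc, as quoted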
The velocity differences Δ v between the quasar and the companion galaxy, derived from the line fit, are listed at the top of each column. The ALMA observations were carried out in compact array configuration between 27 January and 27 March 2016, in conditions of modest precipitable water vapour columns (1–2 mm). In each observation, 38 to 48 of the 12-m antennas were used, with on-source integration times of about 10 min. Nearby radio quasars were used for calibration. Typical system temperatures ranged between 70 K and 130 K. Table 1 Measured and derived quantities for the quasars and their companions Two quasars (J0842+1218 and J2100−1715) have a companion source at about 50 kpc in projected separation, with line-of-sight velocity differences of 440 km s −1 and 40 km s −1 , respectively. This result suggests that the respective quasar–companion pairs lie within a common physical structure, and might even be at an early stage of interaction. The [C ii ] lines in these quasar companions have luminosities of about 2 × 10 9 L ⊙ . The marginally resolved, beam-deconvolved size of the [C ii ]-emitting region is about 7 kpc and 5 kpc in these two galaxies. A Gaussian fit of the line profile yields linewidths of 370 km s −1 and 690 km s −1 , comparable to those of submillimetre galaxies at lower redshift 9 , 19 . The implied dynamical masses of the companions within the [C ii ]-emitting regions are in the range (1–3) × 10 11 M ⊙ (where M ⊙ is the mass of the Sun; see Table 1 ). The dust continuum is only marginally detected in the companion source of J0842+1218, whereas it is clearly seen in the companion source of J2100−1715. The other two quasars, PSO J231.6576−20.8335 and PSO J308.0416−21.2339 (hereafter, PJ231−20 and PJ308−21), have [C ii ]-bright companions at much smaller projected separation, about 10 kpc. The companion source of PJ231−20 has very bright [C ii ] emission and far-infrared continuum emission, whereas that of PJ308−21 is fainter in the [C ii ] line and is only marginally detected in the continuum. Most remarkably, the [C ii ] emission in the companion of PJ308−21 stretches over about 25 kpc (4.5″) and about 1,000 km s −1 towards and beyond the quasar host, suggesting that the companion is undergoing a tidal disruption due to interaction or merger with the quasar host (see Fig. 2 ). This extent is twice as large as the interacting groups around the submillimetre galaxy AzTEC-3 and the nearby ultraviolet-selected galaxy LBG-1, at z = 5.3 (ref. 12 ). Figure 2 is therefore a map of the earliest known merger of massive galaxies, 820 Myr after the Big Bang. Figure 2: Velocity structure in the system PJ308−21. a , Continuum-subtracted [C ii ] channel maps of PJ308−21 and its companion (contours). The underlying continuum is shown in colour. The velocity zero point is set by the redshift of the quasar ( z = 6.2342). Each panel corresponds to 10″ × 10″, or about 50 kpc × 50 kpc. Contours mark the ±2 σ , ±4 σ , ±6 σ , … isophotes. The black ellipse shows the synthesized beam. b , Velocity field (colour scale) of PJ308−21. The iso-velocity lines are marked in white (in units of km s −1 ). c , Position–velocity diagram along the white line in b . A clear velocity gradient is observed in the [C ii ] emission that extends over 4.5″ (about 25 kpc) and more than 1,000 km s −1 , connecting the companion source in the east with the host galaxy of the quasar and extending even further towards the west.
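The dynamical masses quoted above can be approximated with a rotating-disk estimator common in the high-redshift [C ii ] literature, M dyn ≈ v circ 2 R / G with v circ ≈ 0.75 × FWHM / sin i . The sketch below assumes an inclination of 55°; the paper's exact geometric assumptions are not reproduced here, so this is an order-of-magnitude check only.

import numpy as np
import astropy.units as u
from astropy.constants import G

def m_dyn(fwhm_kms, diameter_kpc, incl_deg=55.0):
    """Rough rotating-disk dynamical mass (inclination assumed, not measured)."""
    v_circ = 0.75 * fwhm_kms / np.sin(np.radians(incl_deg)) * u.km / u.s
    radius = 0.5 * diameter_kpc * u.kpc
    return (v_circ ** 2 * radius / G).to(u.Msun)

# companion of J2100-1715: FWHM ~ 690 km/s over ~5 kpc
print(m_dyn(690, 5))   # ~2e11 Msun, inside the quoted (1-3) x 1e11 Msun range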
Modelling the dust emission as a modified black body with a dust opacity index of β = 1.6 and dust temperature of T dust = 47 K (ref. 20 ), we find that the far-infrared luminosities (corrected for the effects of the cosmic microwave background) of the quasars and their companions are in the range (4–100) × 10 11 L ⊙ , with corresponding far-infrared-derived star-formation rates between 80 M ⊙ yr −1 (for the companion of PJ308−21) and about 2,000 M ⊙ yr −1 (for the quasar PJ231−20; see Table 1 ). The dust mass 21 is M dust ≈ (10 8 –10 9 ) M ⊙ , or higher if the dust is not optically thin at 158 μm or if its temperature is lower than assumed. For typical gas-to-dust ratios of about 100 (ref. 22 ), this dust mass yields gas masses of (10 10 –10 11 ) M ⊙ . In Fig. 3a we show the [C ii ]-to-far-infrared luminosity ratio as a function of the far-infrared luminosity. This key diagnostic shows the contribution of the [C ii ] line to the cooling of the interstellar medium: in local spiral galaxies, [C ii ] is responsible for approximately 0.3% of the entire luminosity of the galaxy; in ultra-luminous infrared galaxies and high-redshift starburst galaxies, its contribution can be a factor of 10 lower 9 , 15 , 23 . The quasars and their continuum-bright companions in our sample have low [C ii ]-to-far-infrared luminosity ratios (about 0.1% or less), whereas the companions of J0842+1218 and PJ308−21 have higher ratios (at least 0.15%), closer to the parameter space occupied by normal star-forming galaxies in the local Universe 24 . Figure 3: Intensely star-forming galaxies in the earliest galactic overdensities. a , The [C ii ]-to-far-infrared luminosity ratio ( L [C ii ] / L FIR ), a key diagnostic of the contribution of the [C ii ] line to cooling in the star-forming interstellar medium, as a function of the far-infrared luminosity ( L FIR , in units of the luminosity of the Sun L ⊙ ). Sources from the literature (refs 5 , 9 , 12 , 23 , 24 , 25 and references therein) are shown with small symbols: blue triangles for local ( z < 1) galaxies; orange triangles for high-redshift ( z > 1) sources; and red diamonds for very high-redshift ( z > 6) quasars. The large yellow and red filled circles highlight sources at z > 6 from this work, with 1 σ error bars; arrows mark the 3 σ limits. The quasars examined here appear towards the far-infrared-bright end of the plot, consistent with other quasars observed at these redshifts. Two of the companion sources (of J2100−1715 and PJ231−20) fall in the same regime as the quasars; however, two companions (of J0842+1218 and PJ308−21) populate a different area of the plot, where less-extreme star-forming galaxies are found. b , The cumulative number of [C ii ]-bright companion sources identified in our survey (yellow filled circles, with Poissonian 1 σ uncertainties) compared with the constraints from the luminosity function set by blind-field searches of [C ii ] at high redshift (orange 25 and grey 26 dashed lines) as a function of the sky-projected distance from the quasars. We adopt a cylindrical volume centred on the quasar and with depth corresponding to a difference of ±1,000 km s −1 in redshift space. The ALMA field-of-view is also shown for reference (black dotted line). There is an excess of many orders of magnitude compared with the general field expectations; however, the observed counts can be explained if the limiting case of quasar–Lyman-break-galaxy clustering measured at z ≈ 4 is assumed.
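The far-infrared luminosities above come from scaling observed continuum points with the stated modified black body (β = 1.6, T dust = 47 K). A simplified version of that scaling is sketched below; the flux and redshift values are illustrative only, and the CMB correction applied in the paper is omitted.

import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
H_P, K_B = 6.626e-34, 1.381e-23                      # Planck and Boltzmann (SI)

def mbb(nu_hz, t_dust=47.0, beta=1.6):
    """Optically thin modified black body, nu^beta * B_nu(T), unnormalized."""
    return nu_hz ** (3.0 + beta) / np.expm1(H_P * nu_hz / (K_B * t_dust))

def l_fir(s_obs_mjy, nu_obs_ghz, z):
    """L_FIR (rest-frame 42.5-122.5 um) scaled from one observed continuum point."""
    nu0 = nu_obs_ghz * 1e9 * (1 + z)                 # rest-frame frequency, Hz
    nu = np.linspace(2.447e12, 7.054e12, 2000)       # 122.5 um .. 42.5 um, in Hz
    band = np.trapz(mbb(nu), nu) / mbb(nu0) * u.Hz   # effective FIR bandwidth
    dl = cosmo.luminosity_distance(z)
    l_nu0 = (4 * np.pi * dl ** 2 * s_obs_mjy * u.mJy / (1 + z)).to(u.W / u.Hz)
    return (l_nu0 * band).to(u.L_sun)

# e.g. a ~2 mJy source at 250 GHz (observed frame) and z = 6.2
print(l_fir(2.0, 250.0, 6.2))   # a few x 1e12 Lsun, within the quoted range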
In this case, the excess in the galaxy number density at radius r due to large-scale clustering, ξ ( r ), is modelled as ξ ( r ) = ( r 0 / r ) γ , with a scale length r 0 (in co-moving Mpc; h = 0.7 in the adopted cosmology) fitted for quasar–galaxy pairs at z ≈ 4 at a fixed slope γ = 2.0 (ref. 27 ; orange shaded area). In Fig. 3b we show the average number of [C ii ]-bright galaxies that were observed within a given distance from a quasar in our survey. The detection of four such galaxies in 25 targeted fields exceeds the expected count rates from the (coarse) constraints (approximately 2 × 10 −4 co-moving Mpc −3 at L [C ii ] > 10 9 L ⊙ ) that are currently available on the [C ii ] luminosity function at z > 6 (refs 25 , 26 ) by orders of magnitude (the survey volume within ±1,000 km s −1 of the quasars is only about 400 co-moving Mpc 3 ). However, the high number of companion sources might be reconciled with the [C ii ] luminosity function constraints if large-scale clustering of galaxies and quasars is accounted for (such as in the quasar–Lyman-break-galaxy correlation function at z ≈ 4 (ref. 27 ) shown in Fig. 3b ). Bright, high-redshift quasars therefore represent ideal beacons of the earliest dark matter overdensities (local peaks in the number of galaxies per unit volume compared to the average field). Together with the host galaxies of the quasars, the newly discovered objects (the four companion galaxies) are the observational manifestation of rapid, very early star formation in massive halos. If representative of the bright end of the [C ii ] luminosity function, then they are sufficiently common to explain the abundance of massive galaxies (approximately 1.8 × 10 −5 co-moving Mpc −3 ) that already existed by z ≈ 4 (ref. 1 ). These galaxies cannot be accounted for by the much more numerous, but an order of magnitude less star-forming, z > 6 galaxies that are typically found in deep Hubble Space Telescope images 4 , for which sensitive observations have ruled out strong dust-reprocessed emission 28 , 29 . If an accreting supermassive black hole is present in any of these sources, then it is either much fainter than the nearby quasars, or heavily reddened. This property makes these companion galaxies unique objects for studying the build-up of the most massive systems in the first billion years of the Universe: from an observational perspective, the absence of a blinding central light source enables in-depth characterization of these massive star-forming objects. Moreover, their interstellar medium, far-infrared luminosities and implied star-formation rates are less affected by any feedback processes from the central supermassive black hole. Future observations of these companion galaxies with the James Webb Space Telescope promise to accurately constrain their stellar masses, a key physical parameter given the young age of the Universe. Such a measurement is very difficult in the host galaxies of quasars, owing to their compact emission and the enormous brightness of their central accreting supermassive black holes. Data Availability The datasets generated and analysed during this study are available from the corresponding author on reasonable request. The ALMA observations presented here are part of the project 2015.1.01115.S.
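The excess over the field expectation discussed above reduces to a one-line estimate (numbers as quoted in the text):

n_field = 2e-4       # cMpc^-3: field density of [CII] emitters above ~1e9 Lsun
v_survey = 400.0     # cMpc^3: total volume within +/-1000 km/s of the 25 quasars
expected = n_field * v_survey
print(expected, 4 / expected)   # ~0.08 expected vs 4 detected: a ~50x overdensity
# a clustering boost xi(r) = (r0 / r)**2.0, integrated over the same volumes,
# can supply this excess for a sufficiently large correlation length r0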
A team of astronomers including Carnegie's Eduardo Bañados and led by Roberto Decarli of the Max Planck Institute for Astronomy has discovered a new kind of galaxy which, although extremely old—formed less than a billion years after the Big Bang—creates stars more than a hundred times faster than our own Milky Way. Their findings are published by Nature. The team's discovery could help solve a cosmic puzzle—a mysterious population of surprisingly massive galaxies from when the universe was only about 10 percent of its current age. After first observing these galaxies a few years ago, astronomers proposed that they must have been created from hyper-productive precursor galaxies, which is the only way so many stars could have formed so quickly. But astronomers had never seen anything that fit the bill for these precursors until now. This newly discovered population could solve the mystery of how these extremely large galaxies came to have hundreds of billions of stars in them when they formed only 1.5 billion years after the Big Bang, requiring very rapid star formation. The team made this discovery by accident when investigating quasars, which are supermassive black holes that sit at the center of enormous galaxies, accreting matter. They were trying to study star formation in the galaxies that host these quasars. "But what we found, in four separate cases, were neighboring galaxies that were forming stars at a furious pace, producing a hundred solar masses' worth of new stars per year," Decarli explained. "Very likely it is not a coincidence to find these productive galaxies close to bright quasars. Quasars are thought to form in regions of the universe where the large-scale density of matter is much higher than average. Those same conditions should also be conducive to galaxies forming new stars at a greatly increased rate," added Fabian Walter, also of Max Planck. "Whether or not the fast-growing galaxies we discovered are indeed precursors of the massive galaxies first seen a few years back will require more work to see how common they actually are," Bañados explained. Decarli's team already has follow-up investigations planned to explore this question. The team also found what appears to be the earliest known example of two galaxies undergoing a merger, which is another major mechanism of galaxy growth. The new observations provide the first direct evidence that such mergers have been taking place even at the earliest stages of galaxy evolution, less than a billion years after the Big Bang.
nature.com/articles/doi:10.1038/nature22358
Biology
Echo from the past makes rice paddies a good home for wetland plants
Takeshi Osawa et al, Paddy fields located in water storage zones could take over the wetland plant community, Scientific Reports (2020). DOI: 10.1038/s41598-020-71958-z Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-020-71958-z
https://phys.org/news/2020-10-echo-rice-paddies-good-home.html
Abstract Land use change could affect not only local species richness but also community assemblies. Essentially, the possible patterns of plant community assembly arise from nonrandom species loss (nestedness) and species turnover. Plant community assemblies in human-mediated land use show a combination of both nestedness and turnover. This is because of historical effects that cause nonrandom species loss due to previous and/or original habitat quality, and because of direct effects of human activities that cause species turnover. We investigated the complexity of the process of plant community assemblage in paddy fields, a typical agricultural land use in monsoon Asia, in central Japan. Using multi-temporal plant monitoring records, we tested the relationship between the multi-temporal nestedness/turnover ratio of species and both the original habitat conditions and the extent of human modification. The findings revealed that paddy fields that originated from wetland habitat had a high nestedness ratio, whereas paddy fields that were largely consolidated had a high turnover ratio. Thus, we could divide the community assembly processes in human-mediated land use based on original habitat conditions and human activities. This concept could help land managers establish conservation and/or restoration plans that take community assembly into account. Introduction Human land use impacts global biodiversity at hierarchical levels, ranging from genes to ecosystems, with the result that many ecosystems have become severely degraded 1 . Many conservation scientists are focusing on the effects of land use as factors driving habitat degradation for biodiversity and ecosystem functions and are seeking tools to counteract this degradation and loss 2 , 3 , 4 . Although conservation research has typically focused on either individual species or species groups in terms of their diversity and/or richness 5 , 6 , the impact of human activities on community assemblies goes beyond eradicating some species from the species pool or reducing species richness 2 , 7 . Habitat degradation caused by land use could cause community and metacommunity structures to collapse by inhibiting the ecological processes of assembly 2 , 7 . In essence, species loss and species turnover are the only processes required to generate all the possible patterns of community assemblages 8 . Nestedness, namely nonrandom species loss, describes the proportion of species in a species assembly that is a subset of a more species-rich assembly 2 , 9 . Nestedness is one of the most frequently used indices to explain patterns of community assemblages 2 , 10 . On the other hand, turnover describes the proportion of species replaced during assembly processes 2 , 11 , 12 . Nestedness and turnover are antithetic (though not mutually exclusive) ecological processes that produce different patterns of community structure 8 , 13 , 14 . Plant communities in habitat that has been maintained over the long term, for example, historical seminatural grasslands, are characterized by high species diversity and show little turnover, indicating that high multi-temporal species nestedness has occurred 15 . Thus, habitat change resulting from human activities could lead to large turnover of species; in other words, it could result in low multi-temporal species nestedness.
However, although human land use change can alter the components of plant communities dramatically and inhibit their recovery 4 , plant communities can occasionally retain their components following habitat degradation, for example, through either fragmentation or reduction in area 16 . This situation is often called “extinction debt,” whereby plant species can initially survive habitat change but may subsequently become extinct without further habitat modification 17 . Current plant communities in human-mediated habitats will have been established both by nested species from the original community, including extinction debt, and by species turnover from sources external to the original communities, driven by human activities. Thus, there is a complex combination of nonrandom species loss and turnover in community assembly within human-mediated land use. Wetlands are generally habitats with high biodiversity, and they are among the habitats suffering the greatest decline worldwide 18 , 19 . Paddy fields are a typical seminatural land use for rice cropping agriculture in monsoon Asia, and they provide several ecosystem services other than rice production 20 , 21 , 22 . One of the important ecosystem services of paddy fields is the provision of habitat for several wetland species 20 , 22 , 23 , 24 . In monsoon Asia, paddy fields originate mainly from floodplains 24 , which are characterized by both spatial and temporal heterogeneity 25 . Floodplains provide substantial habitat variety for vegetation 26 , 27 . Thus, there is a variety of previous (original) habitat types and qualities among current paddy fields as wetland habitat. Although paddy fields occupy the same land use category, individual paddy fields can have different types of plant communities with different assemblage processes that have been influenced both by their original habitats and by current human activities 4 , 24 . We studied the complexity of plant community assemblage processes in paddy fields, which can act as alternative wetland habitats. We predicted that the importance of nonrandom species loss and turnover for plant communities in paddy fields could be assessed on the basis of both their original environmental conditions and the current human activities associated with them. With respect to original environmental conditions, we predicted that paddy fields that have originated from wetland have plant communities with a nested structure based on wetland species, because that type of paddy field can be considered the same as wetland habitat that has been maintained over the long term. Meanwhile, paddy fields that have originated from non-wetland areas have plant communities that have undergone a relatively large species turnover from the original assemblage, because that type of paddy field has been changed from non-wetland to wetland. We used a terrain condition that indicates surface water storage to estimate original wetland potential. With respect to current human activities, we predicted that agricultural modernization practices, land consolidation in particular, increase species turnover because of their drastic habitat modification effects 4 , 28 . Previous studies have shown that land consolidation has a negative effect on biodiversity 4 , 28 , 29 , 30 . To test our hypotheses, we analyzed the relationship between the multi-temporal transition of changing plant communities and the location conditions, namely the terrain conditions, of paddy fields in central Japan.
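As a toy illustration of the nestedness/turnover decomposition tested below (species names hypothetical; the ratio definition matches the offset-based models in the Methods):

# community in one 1-km grid at two survey terms
term1 = {"A", "B", "C", "D", "E"}            # earlier community
term2 = {"A", "B", "C", "F"}                 # latter community

shared = term1 & term2                       # species retained (nested component)
nestedness_ratio = len(shared) / len(term2)  # 3/4 of the latter community persisted
turnover_ratio = 1 - nestedness_ratio        # 1/4 arrived through turnover
print(nestedness_ratio, turnover_ratio)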
Results In the first survey term, in 2002, there were a total of 558 species. Among these, 114 species were wetland plants, and 444 species were non-wetland plants (Table 1 ). In each 1-km grid, there were 89.79 ± 36.34 (mean ± S.D.) species in total, 22.09 ± 8.03 wetland species, and 67.70 ± 32.68 non-wetland species, respectively (Table 1 ). In 2007, the second survey term, there were 552 species in total, 109 of which were wetland plant species and 443 of which were non-wetland plant species (Table 1 ). There were 96.41 ± 30.59 species in total, 20.69 ± 8.17 wetland species, and 75.72 ± 26.08 non-wetland species in each 1-km grid (Table 1 ). In 2012, there were 469 species; 96 species were wetland plant species, and 373 species were non-wetland plant species (Table 1 ). In each 1-km grid, there were 72.00 ± 33.10 species in total, 18.97 ± 8.45 wetland species, and 53.03 ± 27.62 non-wetland species (Table 1 ). Table 1 Summary of plant species numbers for each survey term. Generalized linear model (GLM) analysis of the species numbers revealed that both field consolidation and flow accumulation values (FAVs) were negatively correlated with the numbers of all species in all survey terms (Table 2 ). The same trend was evident for non-wetland plants (Table 2 ). On the other hand, for wetland plant numbers, no significant correlations were found, except for FAV in 2012 (Table 2 ); the FAV in 2012 was negatively correlated with wetland plant numbers (Table 2 ). Table 2 GLM and Wald’s test for the number of plant species in each survey term. GLM analysis of the species nestedness/turnover ratio revealed that both consolidation and FAVs were negatively correlated with the nestedness ratios of all species for all multi-temporal combinations (Table 3 ). These trends were similar for non-wetland plants, except that the effects of FAV in both 2002–2012 and 2007–2012 were not significant (Table 3 ). On the other hand, the nestedness ratio for wetland plants in 2002–2007 revealed a different trend; FAV was positively correlated with the nestedness ratio (Table 3 ; bold with italic). The other results were the same as those for non-wetland plants (Table 3 ). Table 3 GLM and Wald’s test for the ratio of nested plant numbers in each survey term. Discussion In this study, we tested the effects of both the original environmental conditions of and the current human activities in paddy fields on the processes of plant community assemblage. We found that paddy fields that originated from wetland had a high nestedness ratio of multi-temporal plant community structure during the early phase of the survey term. Also, we found that plant communities established in consolidated paddy fields had high turnover ratios throughout the survey terms. Thus, the findings basically supported our prediction that species community assemblages in paddy fields could be evaluated based on both the original environmental conditions of the paddy fields and the current human activities going on in them. The study showed that plant species numbers basically decreased across the survey terms, from 2002 and 2007 to 2012. From 2007 to 2012, more than 80 species, including both wetland and non-wetland plants, were lost. This finding indicated that the basic habitat quality for plant species diversity of the paddy fields in this region had been degraded. One possible explanation of this degradation is agricultural abandonment.
In seminatural ecosystems in Japan, including paddy fields, abandonment has promoted vegetation succession toward secondary forest, which allows only a limited number of species to expand 7 , 30 . Abandonment can also change water conditions 3 , 28 . Thus, agricultural abandonment could lead to a decrease of wetland- and/or grassland-specific plant diversity 3 , 7 , 28 , 30 . Over the study term, the total area of agricultural abandonment in Japan increased from 3,430,000 ha (2000) to 3,960,000 ha (2010) 3 . The baseline of plant species diversity in paddy fields in the Tone river basin has been decreasing in recent decades. The numbers of both all species and non-wetland plants in each survey term were negatively influenced by both FAV and the field consolidation ratio, whereas the number of wetland plants was not influenced by these variables, except by FAV in 2012. These results indicated that areas of high FAV that can hold a large amount of water have low-quality habitat for non-wetland plants. Thus, we were able to conclude that the FAV that we used as the index of wetland potential is reasonable. The results regarding field consolidation were interesting because they indicated that wetland plants were not strongly affected by consolidation. Consolidation work in paddy fields has a severely degrading effect on plant species diversity 4 , 28 , 30 . Consolidation work can not only change water conditions but also alter nutrient conditions 31 ; thus, in theory, it can affect both wetland and non-wetland plants. One possible explanation for our result is that wetland plants in our study area had already recovered to a stable stage by 2002, which was the earliest term in our study. Because we used consolidation data for 2001, the consolidation works had been conducted before 2001. Additionally, consolidation work in Japan was conducted aggressively from the 1960s to the 1990s 4 , so much of the consolidation work in the study area might have been carried out more than 10 years prior to the study. Immediately following consolidation work, many plants will have been removed. However, many plants can return and recover after consolidation work through the supply of propagules from the surrounding areas 4 . The landscapes of our study areas are dominated by paddy fields, which provide a potential source of propagules for wetland plants. Additionally, seed banks of wetland plants often persist for a long time, more than decades 32 . Seed banks in the paddy fields could also contribute to species recovery. Once wetland plant communities have recovered, they can be maintained in the paddy fields. The nestedness ratio of wetland plants in the early term (2002–2007) of the study was positively influenced by FAV. This finding clearly supports our prediction that paddy fields that originate from wetland have plant communities with a nested structure of wetland species. However, in the latter term, FAV did not contribute to the nested structure of the wetland plant communities. One possible reason for this is that abandonment could have caused a decline of species in this area. Abandonment in Japan can promote vegetation succession toward secondary forest, changing the water conditions 3 , 7 , 28 , 30 , resulting in a decline of wetland plants in abandoned paddy fields and an increase of plants that do not prefer wetland habitats. In short, species turnover is likely to occur in abandoned paddy fields. In 2012, FAV was negatively correlated with the number of wetland plants.
Also in 2012, the number of wetland plants had decreased. These findings suggest that vegetation succession was in progress during this term, and thus, the wetland habitat was being degraded. The field consolidation ratio was negatively correlated with the nestedness ratio in all cases. These findings support our prediction that consolidation could increase species turnover. Agricultural modernization, such as field consolidation, can alter the water and nitrogen conditions and the disturbance regime, and thus, it is one of the main drivers of reduced biodiversity 4 , 7 , 28 , 30 , 31 . In our study, the numbers of both all species and non-wetland plant species were negatively correlated with the consolidation ratio. Therefore, consolidation causes species turnover from wetland species to a limited number of species adapted to consolidated conditions. Uchida et al. (2018) suggested that agricultural modernization work, such as consolidation, could result in biotic homogenization; namely, decreasing beta diversity due to habitat simplification. In our study, a similar pattern may have occurred; for example, after consolidation, paddy fields as wetland habitat may form similar communities composed of certain specific species. Although plants can recover to some extent after consolidation work 4 , consolidation is one of the main drivers of decreasing biodiversity and affects not only species diversity but also the community assembly process. The findings of our study indicate that the plant species community assembly process in human-mediated habitats, which includes a combination of both nonrandom species loss and turnover of the plant community, can be estimated based on both the original environmental conditions and current human activities. Although we tested this only in paddy fields in this study, the basic concept could apply to other land uses, such as forests. For example, some parts of plantation forests might originate from primary forest and others might not. According to our theory, plantation forests that originate from long-term maintained forest could have nested community assemblies from the original community. This idea could be applied to biodiversity conservation and restoration projects, as with the idea of extinction debt 33 . However, estimation of the original environmental conditions presents challenges. Although a terrain component—the flow accumulation value—was useful when estimating the wetland condition of paddy fields, other land uses require other methods to estimate their original habitats. If we can find methods to estimate the original habitat for not only wetland but also other habitat types, we could establish conservation and restoration plans that consider not only species numbers but also community assembly. Methods Study site The study was conducted in the Tone river basin, central Japan (Fig. 1 ). The Tone river is Japan’s second-longest river, running through the entire Kanto plain in central Japan. The Tone river basin is covered mainly by rice paddies and also contains arable fields other than rice, seminatural grasslands, coppice forests, farm villages, and urban areas 29 . The Tone river basin is located in the Kanto plain, which is the largest plain in Japan (approximately 170,000 km 2 ) and includes large floodplains. Thus, this area has a variety of both terrain conditions and agricultural modernization works. Figure 1 Location of the Tone river basin and monitoring sites.
Plant community data The Institute for Agro-Environmental Sciences, NARO, Japan conducted a program for monitoring biodiversity, including birds 29 and plants 34 , in each of thirty-two 1-km 2 grids in the Tone river basin in 2002. In this program, the Tone river basin was initially divided into one hundred 1-km square grids (hereafter, 1-km grids), and each square was classified into one of four major land use types in the region: (1) midstream paddy; (2) downstream lowland paddy; (3) plateau and valley-bottom paddy; and (4) urban fringe 29 . Then, eight grids were selected randomly as study sites from each land use type, making a total of 32 grids (Fig. 1 ). The grids were more than 5 km apart (Fig. 1 ), so they were spatially independent of each other. In this study, we used the plant monitoring records from the program. In the plant monitoring program, there were three terms—2002, 2007, and 2012—of vegetation surveys based on the Braun–Blanquet approach in each 1-km grid 35 . In each survey term, approximately 20 quadrats measuring 1 m 2 were placed randomly in each 1-km grid, and the coverage ratios of all plant species in four hierarchies—(1) tall tree, (2) semi-tall tree, (3) shrub, and (4) grasses—were recorded. In this study, we used only the grasses class, without abundance, i.e., only the presence or absence of species records in the grasses class. We pooled all the species records within each 1-km grid for analysis. All plant monitoring data are available as Open Data (CC BY 4.0) at a GitHub repository owned by Dr. N. Iwasaki, who was a member of this monitoring program (accessed 25 May 2020). Dividing wetland plants and non-wetland plants To test our hypothesis, we needed to divide the plants into those that typically grow in wetlands (hereafter, wetland plants) and those that typically grow in non-wetlands (hereafter, non-wetland plants) to evaluate the habitat quality of paddy fields as wetland. To this end, we used a published checklist of wetland plants in Japan (Shutoh et al. 2019; accessed 25 May 2020). This checklist defined 8,358 Japanese vascular plants as wetland and aquatic plants according to their habitat requirements and the “wetland” definition of the Ramsar Convention (Ramsar Convention Secretariat 2016; accessed 25 May 2020). We used this checklist to identify the wetland plant species in the monitoring records. Land use, terrain condition, and human activity A digitized land use map for paddy fields in 2009, which roughly matched the plant monitoring terms (2002, 2007, and 2012), was prepared from the National Land Numerical Information (National Land Information Division, MLIT of Japan; accessed 25 May 2020). These map data were developed using both topographic maps and satellite imaging data, with the land use labeled on the basis of nationwide land use classifications, including paddy fields, at approximately 100-m grid resolution (National Land Information Division, MLIT of Japan; accessed 25 May 2020). A FAV, which is ascertained by accumulating the weights of all cells that flow into each downslope cell, was used to define the concave areas (ESRI; accessed 25 May 2020); lower elevations and valley areas had a higher FAV because they could potentially store more water, whereas higher ridge areas had low FAVs (Fig. 2 ).
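The flow accumulation computation is conceptually simple; the sketch below implements a minimal D8 variant (each cell drains to its steepest downslope neighbour) on a toy elevation grid. The study itself used the ArcGIS implementation, so this is only an illustration of the idea.

import numpy as np

def flow_accumulation_d8(dem):
    """Minimal D8 flow accumulation: FAV(cell) = cells draining through it (+ itself)."""
    rows, cols = dem.shape
    acc = np.ones_like(dem, dtype=float)             # every cell contributes itself
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    dist = [2 ** 0.5, 1, 2 ** 0.5, 1, 1, 2 ** 0.5, 1, 2 ** 0.5]
    for idx in np.argsort(dem, axis=None)[::-1]:     # visit cells from high to low
        r, c = divmod(idx, cols)
        best, target = 0.0, None
        for (dr, dc), d in zip(nbrs, dist):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                slope = (dem[r, c] - dem[rr, cc]) / d
                if slope > best:
                    best, target = slope, (rr, cc)
        if target is not None:                       # sinks keep their accumulation
            acc[target] += acc[r, c]
    return acc

# toy DEM: a valley running down the middle collects the most flow (highest FAV)
dem = np.abs(np.arange(-5, 6))[None, :] + np.linspace(5, 0, 8)[:, None]
print(flow_accumulation_d8(dem).astype(int))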
We used the FAV to define wetland potential, as this value reflects water accumulation from upper to lower areas, which is strongly related to the natural process of wetland formation 36 . We considered that this terrain variable reflects the geographical conditions of a paddy field, namely, whether its intact ecosystem was potentially wetland habitat, and we regarded high-FAV areas as having high wetland potential. We calculated the FAV for the whole of mainland Japan, so that the calculation covered the entire basins overlapping our target areas. The FAV was calculated using ArcGIS 10.5 with Spatial Analyst (ESRI, Redlands, CA, USA) and a 50-m digital elevation model from the Japanese Map Centre ( , accessed 25 May 2020). The FAV and paddy field maps were overlaid, and the total FAV for paddy fields in each 1-km grid was calculated to quantify the potential of the paddy fields in that grid to have been wetland. If a paddy field had an extremely high FAV within its basin, that paddy field could have been a wetland, because the area could naturally store a large amount of water. Figure 2 Conceptual image of the flow accumulation value as an indicator of wetland potential. The proportional area of field consolidation, representing current human activity, was calculated for each grid square using digital polygon data on the shape of farmland, derived from aerial imagery collected in 2001 by the Ministry of Agriculture, Forestry, and Fisheries (MAFF), Japan. We obtained data on land leveling in agricultural areas from MAFF ( , accessed 25 May 2020) and used these data as an index of consolidated farmland, because land leveling is one of the most important components of agricultural consolidation in Japan 4 , 23 . Generally, agricultural consolidation in Japan involves land leveling, which integrates small, patchy farmland areas. Each polygon was assigned a status of "leveled" or "not leveled" according to its current state. Using ArcGIS, we calculated the consolidation ratio for paddy fields in each 1-km grid that contained survey sites. Statistical analysis We performed the two types of analysis used in this study with the statistical package R version 3.5.2 (R Development Core Team, , accessed 17 Feb. 2020). First, we tested species numbers in each 1-km grid using GLMs with Poisson distributions (log link) and Wald tests 37 . The response variables were the total species number, the number of wetland plants, and the number of non-wetland plants in each 1-km grid in each survey term. The explanatory variables were the log-transformed FAV for the paddy fields and the consolidation ratio of the paddy fields within the 1-km grid. The aim of this analysis was to assess the effects of both the original environmental conditions of, and current human activities in, the paddy fields on species diversity. Prior to the GLM analysis, all explanatory variables were tested for multicollinearity by calculating variance inflation factors (VIFs) 38 ; no significant multicollinearity was found (VIF < 10 for all variables). Second, we tested the numbers of nested and turnover species between survey terms using GLMs with Poisson distributions (log link) and Wald tests 37 . We set three multi-temporal combinations: 2002–2007, 2002–2012, and 2007–2012. We used the number of species observed in both terms as the response variable, with the number of species in the latter term as an offset term.
Thus, we tested the ratio of species that were nested from the previous community, namely, the non-turnover rate. Our analysis included all plants, both wetland and non-wetland, in each 1-km grid. The explanatory variables were the same as in the species number analysis. We predicted that grids with high wetland potential, namely, high-FAV grids, would have a large ratio of nested wetland species. Furthermore, we predicted that heavily consolidated grids, being negatively influenced, would have a large ratio of species turnover. Data availability The availability of the data used in this study is described in the "Methods" section.
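To make the two GLM analyses concrete, the following is a minimal sketch in Python with statsmodels rather than the authors' R workflow; the input file and column names (tone_basin_grids.csv, fav, consolidation_ratio, n_wetland_species, n_shared, n_later_term) are hypothetical stand-ins for the variables described above:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per 1-km grid and survey-term combination (hypothetical table).
grids = pd.read_csv("tone_basin_grids.csv")

# Analysis 1: species richness vs. terrain (log FAV) and consolidation.
# Poisson GLM with log link; summary() reports Wald z-tests per coefficient.
m1 = smf.glm("n_wetland_species ~ np.log(fav) + consolidation_ratio",
             data=grids, family=sm.families.Poisson()).fit()
print(m1.summary())

# Analysis 2: number of species shared with the previous term (nested
# species), with the later term's richness supplied as a log offset.
m2 = smf.glm("n_shared ~ np.log(fav) + consolidation_ratio",
             data=grids, family=sm.families.Poisson(),
             offset=np.log(grids["n_later_term"])).fit()
print(m2.summary())
```

Passing the later term's log richness as an offset fixes its coefficient at 1, so the fitted coefficients in the second model act on the nestedness (non-turnover) ratio rather than on the raw count.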
Researchers from Tokyo Metropolitan University studied the biodiversity of wetland plants over time in rice paddies in the Tone River basin, Japan. They found that paddies that were more likely to have been wetland previously retained more wetland plant species. On the other hand, land consolidation and agricultural abandonment were both found to impact biodiversity negatively. Their findings may one day inform conservation efforts and promote sustainable agriculture. The Asian monsoon region is home to a vast number of rice paddies. Not only have they fed its billions of inhabitants for centuries, they are also an important part of the ecosystem, home to a vast array of wetland plant species. But as the population grows and more agricultural land is required, their sheer scale and complexity beg an important question: What is their environmental impact? A team from Tokyo Metropolitan University led by Associate Professor Takeshi Osawa and their collaborators has been studying how rice paddies affect local plant life. In their most recent work, they investigated the biodiversity of wetland plants in rice paddies around the Tone River basin, Japan. The Tone River is Japan's second longest river, and runs through the 170,000 square kilometer expanse of the Kanto plains. Previous studies have looked at how a particular species or group of species fare in different conditions. Instead, the team turned their attention to the range of species that make up the plant community, with a particular focus on the number of wetland and non-wetland species present. Changes were tracked over time using extensive survey data from 2002, 2007 and 2012. They found that not all rice paddies are equal when it comes to how well they support original wetland species. In fact, there was a correlation between how likely it was that the land was wetland before it was put to agricultural use, and the number of wetland species that were retained over time. Here, the team measured this using flow accumulation values (FAVs) for different plots of land, a simple metric showing how easily water could accumulate. Importantly, this kind of approach might help researchers to predict how amenable new rice paddies are to the local wetland flora by calculating a simple number using the local terrain. However, they found that factors such as land consolidation and agricultural abandonment could also have a negative impact. The emerging story is that both current human use and original geographical conditions play an important role in deciding how amenable rice paddies are for the original wetland ecosystem. The team believes that the same approach could be applied to different locations such as plantation forests which were (or were not) originally woodland. After all, many nations are turning to large scale tree planting to offset carbon emissions. The ability to systematically assess how new land usage might impact local ecosystems could greatly help restoration and conservation efforts.
10.1038/s41598-020-71958-z
Space
Cosmology group finds measurable evidence of dark matter filament
A filament of dark matter between two clusters of galaxies, Nature (2012). DOI: 10.1038/nature11224 Journal information: Nature
http://dx.doi.org/10.1038/nature11224
https://phys.org/news/2012-07-cosmology-group-evidence-dark-filament.html
Abstract It is a firm prediction of the concordance cold-dark-matter cosmological model that galaxy clusters occur at the intersection of large-scale structure filaments 1 . The thread-like structure of this ‘cosmic web’ has been traced by galaxy redshift surveys for decades 2 , 3 . More recently, the warm–hot intergalactic medium (a sparse plasma with temperatures of 10 5 kelvin to 10 7 kelvin) residing in low-redshift filaments has been observed in emission 4 and absorption 5 , 6 . However, a reliable direct detection of the underlying dark-matter skeleton, which should contain more than half of all matter 7 , has remained elusive, because earlier candidates for such detections 8 , 9 , 10 were either falsified 11 , 12 or suffered from low signal-to-noise ratios 8 , 10 and unphysical misalignments of dark and luminous matter 9 , 10 . Here we report the detection of a dark-matter filament connecting the two main components of the Abell 222/223 supercluster system from its weak gravitational lensing signal, both in a non-parametric mass reconstruction and in parametric model fits. This filament is coincident with an overdensity of galaxies 10 , 13 and diffuse, soft-X-ray emission 4 , and contributes a mass comparable to that of an additional galaxy cluster to the total mass of the supercluster. By combining this result with X-ray observations 4 , we can place an upper limit of 0.09 on the hot gas fraction (the mass of X-ray-emitting gas divided by the total mass) in the filament. Main Abell 222 and Abell 223, the latter a double galaxy cluster in itself, form a supercluster system of three galaxy clusters at a redshift of z ≈ 0.21 (ref. 13 ), separated on the sky by about 14′. Gravitational lensing distorts the images of faint background galaxies as their light passes massive foreground structures. The foreground mass and its distribution can be deduced from measuring the shear field imprinted on the shapes of the background galaxies. Additional information on this process is given in the Supplementary Information . The mass reconstruction in Fig. 1 shows a mass bridge connecting Abell 222 and the southern component of Abell 223 (Abell 223-S) at the 4.1 σ significance level. This mass reconstruction does not assume any model or physical prior probability distribution on the mass distribution. Figure 1: Mass reconstruction of Abell 222/223. The background image is a three-colour-composite SuprimeCam image based on observations with the 8.2-m Subaru telescope on Mauna Kea, Hawaii during the nights of 15 October 2001 (Abell 222) and 20 October 2001 (Abell 223) in the V-, R c - and i′-bands. We obtained the data from the SMOKA science archive ( ). The full-width at half-maximum (FWHM) of the stellar point-spread function varies between 0.57″ and 0.70″ in our final co-added images. Overlaid are the reconstructed surface mass density (blue) above κ = 0.0077, corresponding to , and significance contours above the mean of the field edge, rising in steps of 0.5 σ and starting from 2.5 σ . Dashed contours mark underdense regions at the same significance levels. Supplementary Fig. 1 shows the corresponding B-mode map. The reconstruction is based on 40,341 galaxies whose colours are not consistent with early-type galaxies at the cluster redshift. The shear field was smoothed with a 2′ Gaussian. The significance was assessed from the variance of 800 mass maps created from catalogues with randomized background galaxy orientation. 
We measured the shapes of these galaxies primarily in the R c -band, supplementing the galaxy shape catalogue with measurements from the other two bands for galaxies for which no shapes could be measured in the R c -band, to estimate the gravitational shear 25 , 26 . Abell 222 is detected at about 8.0 σ in the south, and Abell 223 is the double-peaked structure in the north seen at about 7 σ . Black rectangles are regions on the sky not covered by the camera. To show that the mass bridge extending between Abell 222 and Abell 223 is not caused by the overlap of the cluster halos but is in fact due to additional mass, we also fitted parametric models to the three clusters plus a filament component. The clusters were modelled as elliptical Navarro–Frenk–White (NFW) profiles 14 with a fixed mass–concentration relation 15 . We used a simple model for the filament, with a flat ridge line connecting the clusters, exponential cut-offs at the filament endpoints in the clusters, and a King profile 16 describing the radial density distribution, as suggested by previous studies 17 , 18 . We show in the Supplementary Information that the exact ellipticity has little impact on the significance of the filament. The best-fit parameters of this model were determined using a Monte Carlo Markov chain and are shown in Fig. 2 . The likelihood-ratio test prefers models with a filament component with 96.0% confidence over a fit with three NFW halos only. A small degeneracy exists in the model between the strength of the filament and the virial radii of Abell 222 and Abell 223-S. The fitting procedure tries to keep the total amount of mass in the supercluster system constant at the level indicated by the observed reduced shear. Thus, it is not necessarily the case that sample points with a positive filament contribution indeed have more mass in the filament area than has a three-clusters-only model. This is because the additional filament mass might be compensated for with lower cluster masses. We find that the integrated surface mass density along the filament ridge line exceeds that of the clusters-only model in 98.5% of all sample points. Figure 2: Posterior probability distributions for cluster virial radii and filament strength. Shown are the 68% and 95% confidence intervals on the cluster virial radii r 200 (within which the mean density of the clusters is 200 times the critical density of the Universe) and the filament strength κ 0 . The confidence intervals are derived from 30,000 Monte Carlo Markov chain sample points. The filament model is described by κ(θ, r) = κ₀{1 + exp[(|θ| − θ_l)/σ] + (r/r_c)²}⁻¹, where the coordinate θ runs along the filament ridge line and r is orthogonal to it. This model predicts the surface mass density at discrete grid points from which we computed our observable, the reduced shear, via a convolution in Fourier space. The data cannot constrain the steepness of the exponential cut-off at the filament endpoints σ and the radial core scale r c . These were fixed at their approximate best-fit values of σ = 0.45 megaparsecs and r c = 0.54 megaparsecs. The data also cannot constrain the cluster ellipticity and orientation. These were held fixed at the values measured from the isodensity contours of early-type galaxies 13 . The ratios of minor to major axes and the position angles of the ellipses are (0.63, 0.69, 0.70) and (65°, 34°, 3°) for Abell 222, Abell 223-S, and Abell 223-N, respectively.
We further explore the impact of cluster ellipticity on the filament detection in the Supplementary Information . This indicates that the data strongly prefer models with additional mass between Abell 222 and Abell 223-S and that this preference is stronger than the confidence level derived from the likelihood-ratio test. The difference is probably due to the oversimplified model, which is not a good representation of the true filament shape. The data, on the other hand, are not able to constrain more complex models. Extensions to the simple model that we tried were replacing the flat ridge line with a parabola and replacing the King profile with a cored profile leaving the exponent free. The latter was essentially unconstrained. The parabolic ridge line model produced a marginally better fit that was, however, statistically consistent with the flat model. Moreover, the likelihood-ratio test did not find a preference for the parabolic shape. The virial masses inferred from the Monte Carlo Markov chain are lower than those reported earlier for this system 10 , which were obtained from fitting a circular two-component NFW model to Abell 222 and Abell 223. In contrast to this approach, our more complex model removes mass from the individual supercluster constituents and redistributes it to the filament component. Reproducing the two-component fit with free concentration parameters, as done in the previous study 10 , we find (where M⊙ denotes the mass of the Sun): , which is in good agreement with ref. 10 , and , which overlaps the 1 σ error bars of the earlier study 10 . Throughout, all error bars are single standard deviations. The detection of a filament with a dimensionless surface mass density of κ ≈ 0.03 is unexpected. Simulations generally predict the surface mass density of filaments to be much lower 10 and undetectable individually 18 . These predictions, however, are based on the assumption that the longer axis of the filament is aligned with the plane of the sky and that we look through the filament along its minor axis. If the filament were inclined with respect to the line-of-sight and we were to look almost along its major axis, the projected mass could reach the observed level. A timing argument 19 , 20 can be made to show that the latter scenario is more plausible in the Abell 222/223 system. In this argument we treat Abell 223 as a single cluster and neglect the filament component, so that we have to deal only with two bodies, Abell 222 and Abell 223. The redshifts of Abell 222 and Abell 223 differ by Δ z = 0.005, corresponding to a line-of-sight separation of 18 megaparsecs if the redshift difference is entirely due to Hubble flow. Let us assume for a moment that the difference is caused only by peculiar velocities. Then at z = ∞, the clusters were at the same location in the Hubble flow. We let them move away from each other with some velocity and inclination angle with respect to the line-of-sight and later turn around and approach each other. The parameter space of total system mass and inclination angle that reproduces the observed configuration at z = 0.21 is completely degenerate. Nevertheless, to explain the observed configuration purely with peculiar velocity, this model requires a minimum mass of with an inclination angle of 46°, where the error on the mass is caused solely by the uncertainty of the Hubble constant.
Because this is more than ten standard deviations above our mass estimate for the sum of both clusters, we infer that at least part of the observed redshift difference is due to Hubble flow, and that we are looking along the filament’s major axis. The combination of our weak-lensing detection with the observed X-ray emission of 0.91 ± 0.25 keV warm–hot intergalactic medium plasma 4 allows us to constrain the hot gas fraction in the filament. Assuming that the distribution of the hot plasma is uniform and adopting a metallicity of , the mass of the X-ray-emitting gas inside a cylindrical region with radius 330 kiloparsecs centred on (01 h 37 min 45.00 s, 12° 54′ 19.6″; see Fig. 3 ) with a length along our line-of-sight of 18 megaparsecs, as suggested by our timing argument, is . The assumption of uniform density is certainly a great simplification. Because the X-ray emissivity depends on the average of the squared gas density, a non-uniform density distribution can lead to strong changes in the X-ray luminosity. Thus, if the filament consists of denser clumps embedded into lower-density gas (as has been observed in the outskirts of the Perseus cluster 21 ), or even if there is a smooth non-negligible density gradient within the region used for spectral extraction, then our best-fit mean density will be overestimated. The quoted gas mass should therefore be considered as an upper limit, and the true mass could be as small as one-third of this value. Figure 3: Surface mass density of the best-fit parametric model. The surface mass density distribution of the best-fit parameters in Fig. 2 was smoothed with a 2′ Gaussian to have the same physical resolution as the mass reconstruction in Fig. 1 . The yellow crosses mark the end points of the filament model. These were determined from the visual impression of the filament axis in Fig. 1 . The Monte Carlo Markov chain is not able to constrain their location. In the model, the filament ridge line is not aligned with the axis connecting the centres of Abell 222 and Abell 223-S. This is a fairly common occurrence ( ∼ 9%) for straight filaments but may also indicate some curvature, which occurs in ∼ 53% of all intercluster filaments 17 and is not included in our simple model. Overlaid are X-ray contours from XMM-Newton observations 4 (red) and significance contours of the colour-selected early-type galaxy density 10 (beige), showing the alignment of all three filament constituents. The black circle marks the region inside which the gas mass and the filament mass were estimated. We estimated the total mass of the filament from the reconstructed surface mass-density map and the model fits within the region where we measured the gas mass. The conversion of dimensionless surface mass density to physical units requires knowledge of the source redshifts. We randomly sampled galaxies with our R c -band magnitude distribution from photometric redshift catalogues 22 . The mean redshift of these random catalogues is z s = 1.2. We emphasize that for a cluster at z = 0.21, the error in mass caused by the uncertainty of the redshift distribution is small. An error as large as Δ z s = 0.2 causes only a 5% error. In the reconstructed κ -map, the mass inside the extraction circle is , where the error is small owing to the highly correlated noise of the smoothed shear field inside the extraction aperture. For the parametric model fit, the inferred mass is higher but consistent within one standard deviation: .
The corresponding upper limits on the hot-gas fractions vary between f X-ray = 0.06 and 0.09, a value that is lower than the gas fraction in galaxy clusters 23 . This is consistent with the expectation that a significant fraction of the warm–hot intergalactic medium in filaments is too cold to emit X-rays detectable by the European Space Agency’s X-ray Multi-Mirror Mission (XMM-Newton) space telescopes 24 .
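The filament profile quoted in the Figure 2 caption is straightforward to evaluate directly. The following is a minimal sketch in Python; the cut-off steepness σ = 0.45 Mpc and core scale r_c = 0.54 Mpc are the fixed values from the caption, while the amplitude κ₀ ≈ 0.03 follows the detected surface mass density and the filament half-length θ_l is a hypothetical placeholder, since its best-fit value is not quoted in this excerpt:

```python
import numpy as np

def filament_kappa(theta, r, kappa0=0.03, theta_l=2.0, sigma=0.45, r_c=0.54):
    """King-like filament convergence profile from the Figure 2 caption:

        kappa(theta, r) = kappa0 / (1 + exp[(|theta| - theta_l)/sigma] + (r/r_c)^2)

    theta runs along the filament ridge line and r orthogonal to it
    (both in megaparsecs here); theta_l sets the exponential cut-off
    at the filament endpoints.
    """
    return kappa0 / (1.0 + np.exp((np.abs(theta) - theta_l) / sigma)
                     + (r / r_c) ** 2)

# Convergence on a coarse grid straddling the ridge line: kappa peaks at
# the ridge centre (theta = 0, r = 0) and falls off along both axes.
theta, r = np.meshgrid(np.linspace(-3, 3, 7), np.linspace(-1, 1, 5))
print(filament_kappa(theta, r).round(3))
```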
(Phys.org) -- As time passes and more research is done, more evidence is compiled supporting the theory that suggests that dark matter is a real thing, even though no direct evidence for its existence has ever been found. Instead, the evidence comes about as measurements of other phenomena are taken, generally involving gravitational pull on objects in the universe we can see that cannot be explained by other means. One of these instances is where weak gravitational lensing occurs, which is where light appears to bend as it passes by large objects. Theory suggests that in cases where lensing occurs but there is no detectable object behind its cause, the reason for it is dark matter exerting a gravitational influence. That has been the case with what are known as filaments: gravitational effects that connect galactic superclusters, keeping them bound together. Now Jörg Dietrich and colleagues have added credence to the theory by finding a measurable example of lensing in one specific supercluster that cannot be attributed to a visible object. They outline their findings in their paper published in the journal Nature. Abell 222/223 is a galactic supercluster system in the constellation Cetus. It’s made up of two parts, 222 and 223, separated by a gas cloud and something else that cannot be seen. In looking at data collected by telescopes used to study the supercluster in prior research efforts, Dietrich and his team found that lensing occurred as light behind the gas cloud made its way to us by passing between the two parts. But after careful study and mathematical analysis, they found that the observable matter that existed in the gas cloud could only account for about nine percent of the mass required to cause the degree of lensing that was occurring. Because there was nothing else in the area, the only possible explanation was that dark matter in the shape of a filament was the cause. The results from this study are doubly interesting: first, because they strengthen all of the theories surrounding dark matter, and second, because the team has found a means of not just demonstrating an example of dark matter at work, but has done so in a way that is so precise that they were able to determine the actual shape of a dark matter filament. This second part came about as measurements of lensing were taken at different parts of the area between 222 and 223 showing different degrees of light bending, a feat that was only possible because of the unique way the supercluster is situated relative to us, allowing a nearly straight-on view.
doi:10.1038/nature11224
Space
AI reveals unsuspected math underlying search for exoplanets
Keming Zhang et al, A ubiquitous unifying degeneracy in two-body microlensing systems, Nature Astronomy (2022). DOI: 10.1038/s41550-022-01671-6 Journal information: Nature Astronomy
https://dx.doi.org/10.1038/s41550-022-01671-6
https://phys.org/news/2022-05-ai-reveals-unsuspected-math-underlying.html
Abstract While gravitational microlensing by planetary systems 1 , 2 provides unique vistas on the properties of exoplanets 3 , observations of a given two-body microlensing event can often be interpreted with multiple distinct physical configurations. Such ambiguities are typically attributed to the close–wide 4 , 5 and inner–outer 6 types of degeneracy, which arise from transformation invariances and symmetries of microlensing caustics. However, there remain unexplained inconsistencies (see, for example, ref. 7 ) between the aforementioned theories and observations. Here, leveraging a fast machine learning inference framework 8 , we present the discovery of the offset degeneracy, which concerns a magnification-matching behaviour on the lens axis and is formulated independently of caustics. This offset degeneracy unifies the close–wide and inner–outer degeneracies, generalizes to resonant topologies and, upon reanalysis, not only appears ubiquitous in previously published planetary events with twofold degenerate solutions, but also resolves prior inconsistencies. Our analysis demonstrates that degenerate caustics do not strictly result in degenerate magnifications and that the commonly invoked close–wide degeneracy essentially never arises in actual events. Moreover, it is shown that parameters in offset-degenerate configurations are related by a simple expression. This suggests the existence of a deeper symmetry in the equations governing two-body lenses than previously recognized. Main In search of new types of microlensing degeneracy, we analysed the posterior parameter distribution of a large number of simulated two-body microlensing events that exhibited multimodal solutions. With over 100 planetary microlensing events observed so far, new degeneracies have indeed been serendipitously found in routine data analysis (see, for example, ref. 9 ). However, while an exhaustive search on examples of multimodal event posteriors to constrain the existence of unknown degeneracies is plausible, such an endeavour has been computationally prohibitive with the current status quo microlensing data analysis approaches. Thankfully, the recent application of likelihood-free inference (LFI) (see ref. 10 for an overview) to two-body microlensing 8 has accelerated calculation of microlensing posteriors to a matter of seconds, thus allowing posteriors for a large number of simulated events to be acquired with minimal computational cost. The key to the accelerated inference is the use of a neural density estimator (NDE), which is a particular type of neural network capable of modelling distributions that are complex and multimodal. Here, the NDE learns a mapping from microlensing light curves directly to posteriors, allowing future inferences to be made with the NDE alone in mere seconds. Following ref. 8 , we trained an NDE on 691,257 events simulated in the context of the Roman Space Telescope microlensing survey 11 so that our results would be directly relevant. The posteriors for a large number of randomly generated events are then produced with the NDE. To identify events with multimodal solutions, we applied a clustering algorithm 12 , which separates each posterior into discrete modes. The exact maximum likelihood solution within each posterior mode is then calculated with an optimization algorithm ( Methods ). 
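In outline, the degeneracy-finding step described above can be sketched as follows. This is a minimal Python illustration rather than the paper's own code: it assumes posterior samples already drawn from the NDE plus a callable negative log-likelihood, and it substitutes sklearn's PowerTransformer and scipy's L-BFGS-B for the parametric power transformation (ref. 21) and the parallel optimizer (ref. 22) actually used; the min_cluster_size value is an arbitrary choice.

```python
import numpy as np
import hdbscan
from sklearn.preprocessing import PowerTransformer
from scipy.optimize import minimize

def degenerate_solutions(samples, neg_log_like, min_cluster_size=50):
    """Split NDE posterior samples into modes and refine each mode.

    samples: (n, d) array of 2L1S parameter draws from the NDE.
    neg_log_like: callable mapping a parameter vector to -ln L.
    """
    # Monotonic power transform: make each marginal roughly Gaussian so
    # that density-based clustering is insensitive to scale and skew.
    z = PowerTransformer().fit_transform(samples)
    labels = hdbscan.HDBSCAN(min_cluster_size=min_cluster_size).fit_predict(z)
    solutions = []
    for k in sorted(set(labels) - {-1}):        # -1 labels noise samples
        mode = samples[labels == k]
        x0 = np.median(mode, axis=0)            # start near the mode centre
        bounds = list(zip(mode.min(axis=0), mode.max(axis=0)))
        res = minimize(neg_log_like, x0, method="L-BFGS-B", bounds=bounds)
        solutions.append(res.x)                 # exact max-likelihood point
    return solutions
```

An event is flagged as degenerate whenever this procedure returns more than one refined solution.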
Visual inspection of multimodal NDE posteriors revealed three apparent regimes of degeneracy: the inner–outer degeneracy, the close–wide degeneracy and degeneracies that involve the resonant caustic, which have also been previously observed (see, for example, refs. 7 , 13 ) and studied 14 . The close–wide degeneracy states that the central caustic shape is invariant under the s ↔ 1/ s transformation for ∣ 1 − s ∣ ≫ q 1/3 (ref. 14 ) and q ≪ 1 (Extended Data Fig. 1a,c ), where q refers to the planet-to-star mass ratio and s refers to their projected separation normalized to the angular Einstein radius ( θ E = √( κ M π rel ) ), which is the characteristic microlensing angular scale. Here, κ = 4 G /( c 2 au), M is the total lens mass and π rel = au/ D rel is the lens–source relative parallax. Interestingly, we found that most cases of apparent close–wide degeneracies do not exactly abide by the expected s ↔ 1/ s relation even though most are in the ∣ 1 − s ∣ ≫ q 1/3 regime, where it is expected to hold. We also noticed that for degenerate events involving one resonant caustic, the source trajectory always passed to the front end of the resonant caustic for wide–resonant degenerate events, and the back end for close–resonant degenerate events. To explore potential connections among these apparently discrete regimes of degeneracies, and to better understand the reason why the expected s ↔ 1/ s relation of the close–wide degeneracy is almost never satisfied, we examined maps of magnification differences between pairs of lenses with the same mass ratio ( q = 2 × 10 −4 ), keeping lens B fixed at s B = 1/1.1 and changing the projected separation s A of lens A. The sequence of magnification difference maps in Fig. 1a–h immediately reveals the continuous evolution of a vertically extended ring structure where the magnification difference vanishes (see also Extended Data Figs. 2 and 3 ). This null ring originates near the primary star and grows increasingly large with increasing deviation from the close–wide degenerate configuration of s A = 1/ s B , at which point the null contracts to a singular point (see Extended Data Fig. 4 for a zoom-in). We may thus expect null-passing trajectories (cyan arrows in Fig. 1a–h ) to have degenerate magnifications, which is confirmed by light curves shown in Fig. 1i–p . Fig. 1: The manifestation of the offset degeneracy in source-plane magnification difference maps (top) and light curves (bottom). a – h , Maps of magnification differences from lens B with fixed s B = 1/1.1 to lens A with changing s A specified in each subplot. The mass ratio is fixed at q = 2 × 10 −4 for all configurations. All magnification difference maps are shown on the same scale, specified in the colour bar to the right. Lens A caustics are shown in green and lens B caustics are shown in blue. The black, oval-shaped ring with first decreasing and then increasing sizes in a – h is the null, where the magnification difference between lenses A and B vanishes. The evolution of the null ring is continuous with the progression of the lens A caustic into the resonant regime ( e , f , g ) and further into a wide topology ( h ). i – p , Light curves for null-crossing trajectories (cyan arrows in a – h ), under lens A (green), lens B (blue), and the s A = 1/ s B = 1.1 solution (red) expected from the close–wide degeneracy. Light curves are shown as relative deviations from the corresponding point-source point-lens (PSPL) model.
n , s A = 1.11 is used instead of the s A = 1/ s B value of panel f to demonstrate the offset degeneracy for caustic-crossing events: both caustic-crossing length and magnification patterns are matched for the offset solution but not for the close–wide solution. It is also immediately clear from Fig. 1f why the close–wide pair of configurations ( s A = 1/ s B ) does not result in degenerate magnifications for any trajectory shown: the magnification differs everywhere on the lens axis except for the singular null point. Thus for any given trajectory, close to or far from the central caustic, one can always move the null to the location of the source by shifting the planet location, to have the magnifications match exactly on the lens axis. For caustic-crossing trajectories, the vertical extension of the null, located within the caustic (Extended Data Fig. 4c ), also allows the width of the caustic to be matched (Fig. 1f ). We also found that both the location and shape of the null are independent of q for q ≪ 1, thus allowing the above discussion to also hold in the ∣ 1 − s ∣ ≫ q 1/3 regime (Extended Data Fig. 5 ) of the close–wide degeneracy. This demonstrates that the above localized degeneracy does not arise due to the imperfect matching of the central caustic shapes, but is a fundamental behaviour of the lensing system in the limit of q ≪ 1. We name this phenomenon the offset degeneracy to refer to the source–null matching principle where the null is created by an offset of the planet location on the binary axis. Notably, we found that the location of the null on the star–planet axis is well described by a simple expression: $$x_{\mathrm{null}}=\frac{1}{2}\left(s_{\mathrm{A}}-1/s_{\mathrm{A}}+s_{\mathrm{B}}-1/s_{\mathrm{B}}\right).$$ (1) The numerically determined x null (Fig. 2 ) shows that deviations from this analytic prescription are consistently less than 5% except for extreme separation (|log 10 ( s )| ≳ 0.5) cases where sources do not pass close to either caustic and therefore do not yield substantial planetary perturbation of practical interest. This expression can be interpreted as the midpoint between the locations x c = s A,B − 1/ s A,B of the planetary caustics, which arises from the perturbative picture of planetary microlensing 2 . However, the fact that such an expression holds well into the resonant regime, for which there are no planetary caustics at all, and persists through caustic topology changes, suggests the existence of much deeper symmetries in the gravitational lens equation for mass ratios of q ≪ 1 than had previously been appreciated, and should be explored in future work. Fig. 2: Deviation (Δ x null ) of numerically derived, exact null position from the analytic form (equation ( 1 )) for changing s A against three values of fixed s B < 1, normalized to the separation between the two (implied) planetary caustics: ∣ ( s A − 1/ s A ) − ( s B − 1/ s B ) ∣ . Δ x null is calculated for q = 2 × 10 −4 but was found to be independent of q for q ≪ 1 (Extended Data Fig. 5 ). The x axis shows log 10 ( s A ) scaled to log 10 ( s B ) such that −1 corresponds to the close–wide degenerate case of s A = 1/ s B (gold star), 0 corresponds to s A = 1 and 1 corresponds to the asymptotic inner–outer degenerate case where s A = s B (brown hexagon). The coordinate origin is set to s q/(1 + q) from the primary for s < 1 and s −1 q/(1 + q) for s > 1, which describe the location of the central caustic and account for the non-differentiability at s A = 1.
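Equation (1) is simple enough to verify and use directly. A minimal Python sketch (the separations below are arbitrary illustrative values):

```python
def x_null(s_a, s_b):
    """Null position on the lens axis, equation (1)."""
    return 0.5 * (s_a - 1 / s_a + s_b - 1 / s_b)

# The close-wide configuration s_a = 1/s_b puts the null exactly at the
# primary: the result is 0 up to floating-point rounding ...
print(x_null(1.1, 1 / 1.1))
# ... while any offset from that configuration moves the null along
# the star-planet axis.
print(round(x_null(1.05, 1 / 1.1), 4))
```

Setting s_A = 1/s_B makes the two terms cancel, which is exactly the close–wide transition point discussed in the next paragraph.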
We now consider the relationship between the offset degeneracy and the two previously known mathematical degeneracies. First, the offset degeneracy is a magnification degeneracy while the two previous degeneracies are caustic degeneracies. Our analysis demonstrates that degenerate caustics do not strictly result in degenerate magnifications. Furthermore, by setting x null = 0 in equation ( 1 ), we immediately recover the s A = 1/ s B relation of the close–wide degeneracy. This suggests that the close–wide degeneracy is more suitably viewed as a transition point of the offset degeneracy where the central caustics happen to be degenerate. On the other hand, while the inner–outer degeneracy implies an expression similar to equation ( 1 ) 6 , it arises from the symmetry of the Chang–Refsdal 15 approximation to the planetary caustics 16 . However, cases attributed to the inner–outer degeneracy are often not in the pure Chang–Refsdal regime 7 , in which case the planetary caustics are asymmetrical. Also, even in the Chang–Refsdal regime, in observed events the source trajectory is fixed and passes equidistant from two different planetary caustics, rather than two sides of the same caustic. Therefore, the offset degeneracy not only resolves inconsistencies and unifies the two previously known degeneracies into a generalized regime, but also relaxes the ∣ 1 − s ∣ ≫ q 1/3 condition required by both cases. Because of this unifying feature, we expected the offset degeneracy to be ubiquitous in past events with twofold degenerate solutions and speculate that a large number of cases may have been mistakenly attributed to the close–wide degeneracy. Therefore, we systematically searched for previously published events with twofold degenerate solutions satisfying q A ≃ q B ≪ 1 (see Methods ). We found 23 such events, and then first compared the intercept of the source trajectory on the star–planet axis to the location of the null predicted with equation ( 1 ). We also invert equation ( 1 ) to predict one degenerate s A from the other s B : $$s_{\mathrm{A}}=\frac{1}{2}\left(2x_0-(s_{\mathrm{B}}-1/s_{\mathrm{B}})+\sqrt{\left[2x_0-(s_{\mathrm{B}}-1/s_{\mathrm{B}})\right]^2+4}\right),$$ (2) where x 0 = u 0 /sin( α ) is the intercept of the source trajectory on the binary axis, u 0 is the impact parameter and α is the angle of the source trajectory with respect to the binary axis. As shown in Fig. 3 , the source trajectory always passes through the null location on the star–planet axis as predicted by equation ( 1 ). Additionally, equation ( 2 ) accurately predicts one degenerate solution from the other. The fact that equation ( 1 ) applies for a wide range of α confirms that the offset degeneracy accommodates oblique trajectories, although proximity to planetary caustics might break the degeneracy (for example, KMT-2016-BLG-1397 17 ). Thus we conclude that equations ( 1 ) and ( 2 ) will be useful in the analysis of future events with offset-degenerate solutions. Fig. 3: Offset degeneracy reanalysis of 23 systematically selected events in the literature with twofold degenerate solutions. a , The source trajectory always passes close to the null intercept on the star–planet axis ( x null ), as predicted by equation ( 1 ). The x axis shows the source trajectory intercept on the star–planet axis, calculated from u 0 and α . The y axis shows the prediction for x null using equation ( 1 ) and reported values of s A and s B .
Event labels as shown in the legend are the event abbreviations: for example, KMT162397 means KMT-2016-BLG-2397. The inset shows a zoom-in of the central boxed region. b , The x and y axes show the smaller and larger values of the degenerate solutions, referred to as s min,max . Circles are reported values of s min,max whereas triangles are s max values predicted with equation ( 2 ) of the offset degeneracy and s min , α and u 0 . The colour coding follows the legend in a . Circles and triangles largely coincide for all cases, demonstrating the predictive power of the offset degeneracy. Sizes of circles and triangles are scaled to the expected null location, x 0 = u 0 /sin( α ), to show the correlation between larger size and greater distance from the dash–dotted diagonal line, which represents the exact close–wide degeneracy where s min = 1/ s max . Cases typically understood as inner–outer— s A,B > 1 or s A,B < 1—are found outside the box bounded by the dashed lines. Cases close to the dashed lines but far from their conjunction correspond to resonant–close/wide degeneracies. Cases within the dashed box and not on the diagonal line do not belong to either close–wide or inner–outer degeneracies. The inset shows a zoom-in of the region boxed by solid lines. Error bars are marginalized 1 σ posterior intervals. Uncertainties for the predicted x null are propagated from the uncertainties of whichever of s min and s max gives rise to a smaller uncertainty on x null . Given its apparent ubiquity, it is reasonable to ask why the offset degeneracy has only been discovered over two decades after the first in-depth explorations of degeneracies in two-body microlensing events were made 4 , 5 , 16 . One reason may be the early strategic focus on high-magnification ( u ≪ 1) events 4 , 18 , where deviations from s ↔ 1/ s were small, and the cause was not explored in detail. Recently, deviations from s ↔ 1/ s in semiresonant topology events have led to explicit discussions on the applicability of the close–wide degeneracy in the resonant regime and potential connections to the inner–outer degeneracy 7 , 14 . Nevertheless, as we have shown, the resonant condition itself does not cause the deviation from s ↔ 1/ s , but only allows it to be noticeable ( Methods ). To our advantage, the novel technique of ref. 8 based on machine learning presented us with a large number of degenerate events in the non-resonant ∣ 1 − s ∣ ≫ q 1/3 regime that deviated from the s ↔ 1/ s expectation, but also did not conform to the inner–outer degeneracy. These ‘intermediate’ offset-degenerate events ultimately allowed us to recognize the continuous and unifying nature of the offset degeneracy, showcasing another instance of new theoretical insight guided by machine learning (cf. ref. 19 ). As the next-generation surveys further expand the sensitivity limit from space 20 , the offset degeneracy will increasingly manifest. Methods The Z21 fast inference technique Zhang et al. 8 (Z21 hereafter) presented an LFI approach to binary microlensing analysis that allowed an approximate posterior for a given event to be computed in seconds on a consumer-grade GPU, compared with the hours-to-days timescales on CPU clusters that are typically required for status quo approaches. We summarize the Z21 approach at the high level here, and refer the reader to the original paper for details.
The Z21 method is likelihood free in that it does not iteratively perform simulations to compute the likelihood, which is typical for sampling-based inference methods. Instead, Z21 directly learns the posterior probability as a conditional distribution p̂ φ ( θ | x ) with an NDE, where ϕ are the NDE parameters, θ the binary microlensing (2L1S) parameters and x the input light curve. The NDE is essentially a mapping that takes a light curve as input and produces a specified number of discrete posterior samples. Such a mapping is trained on a large number of simulations ( x i , θ i ) with parameters drawn from a wide prior, and the ϕ are optimized to maximize the expectation of this conditional probability under the training set data distribution. The mapping learned can thus be applied to any given event unseen during training as long as it is within the prespecified prior. This specific approach to LFI is called amortized neural posterior estimation, where ‘amortized’ refers to the process of paying all the simulation cost upfront so that inferences of future events do not require additional simulations. After training, the NDE alone generates posterior samples for any future event at a rate of ~10 6 s −1 on a consumer-grade GPU, or ~10 5 s −1 on an eight-core CPU, effectively carrying out inference in real time. Z21 demonstrated that, although not exact, the neural posterior places accurate constraints on all parameters nearly 100% of the time, except for the parameter that quantifies the effect of a finite-sized source. This is because substantial finite-source effects only occur when the source approaches sufficiently close to the caustics, which is satisfied by only a small subset of events. With a focus on the next-generation, space-based 20 microlensing survey planned on the Roman Space Telescope 11 , here we generated a training set in a similar fashion to the Z21 training set, but with a caustic-centred coordinate system rather than a centre-of-mass (COM) coordinate system. This is because the COM coordinate system is highly inefficient for producing planetary-caustic passing events with randomly drawn source trajectories with respect to the COM. In addition, for wide binary ( s > 1; q ~ 1) events, the time to closest approach ( t 0 ) to the COM could have an arbitrarily large offset from the time of peak magnification, which can lead to the missing of solution modes (Section 4.3 of Z21). The caustic-centred coordinate system, on the other hand, efficiently spans the entire 2L1S parameter space that allows for substantial deviation from a single-lens light curve. We generated a total of 228,892 events centred on the planetary caustic and 960,000 events centred on the central caustic, and further removed those that are consistent with a single-lens model by fitting each light curve to such a model and adopting a Δ χ 2 = 140 cutoff (Z21). This resulted in a training set of 691,257 simulations, including 137,644 planetary-caustic events and 553,863 central-caustic events. For planetary-caustic events, u 0 is randomly sampled from 0 to 50 times the caustic size. For central-caustic events, u 0 is randomly sampled from 0 to 2. Compared with Z21, we expanded the source flux fraction, defined as f s = F source /( F source + F blend ), to f s ~ loguniform(0.05, 1), to probe more deeply into the severely blended regime.
Other aspects of event simulation are the same as in Z21 and the reader is referred to Section 3 of Z21 for details. Identifying degeneracies in Z21 posteriors Z21 provided three example events with degenerate posteriors where light curve realizations from each degenerate mode are almost indistinguishable from one another, a confirmation of the effectiveness in modelling light curves with degenerate solutions. While the posterior modes in Z21 were identified manually, in this work we automate the degeneracy-finding process. To work with posterior distributions that vary in scale, position and shape, we first fitted and applied a parametric, monotonic ‘power’ transformation 21 to the LFI-generated posterior samples for each simulated light curve. This transformation normalizes each marginal parameter distribution to an approximate Gaussian. To automatically identify degenerate posteriors, we used the HDBSCAN algorithm 12 to perform clustering on the transformed posterior samples. The HDBSCAN algorithm is a density-based, hierarchical clustering method, which required, for our task, minimal hyperparameter tuning. The output of HDBSCAN is a suggested cluster label for each posterior sample, including the labelling for outlier/noise samples. Events with more than one cluster are identified as degenerate events. Although the NDE posteriors are accurate enough for a qualitative study of degeneracies, we nevertheless refined each solution mode to the maximum likelihood value. The approximate posterior allows us to make use of bounded optimization algorithms to quickly locate the exact solution. We use a parallel implementation 22 of the L-BFGS-B optimization algorithm 23 to quickly solve for the best-fit solutions. The entire process from light curve to degenerate exact solutions takes a few minutes for each event, with the last refinement step costing the most time. Comparison with events in the literature We demonstrate the ubiquity of the offset degeneracy by performing a thorough investigation of 2L1S events in the literature with reported degenerate posteriors. We first filter through events on the NASA microlensing exoplanet archive, which contains 112 planets and 306 entries with reported 2L1S parameters (retrieved 23 August 2021). Each entry reports one solution for a given event. Entries from adaptive-optics follow-up papers of published events, as well as duplicate entries with identical 2L1S solutions, are first removed. Triple-lens events with detections of two planets—OGLE-2006-BLG-109 and OGLE-2018-BLG-1011—are also removed. Planets with reported higher-order effects (parallax, xallarap) are also removed, as such effects often exhibit additional degeneracies and may complicate the application of the offset degeneracy. We further remove twofold degenerate events with Δ χ 2 > 10 where one solution is significantly favoured. This leaves us with 20 planets with exactly two solutions and 12 with more than two solutions. 
Among the 20 planets with exactly two solutions 6 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 , 41 , 42 , six are excluded: KMT-2016-BLG-1107 36 because it is a different type of degeneracy (two distinct source trajectories crossing the s < 1 planetary caustic, one of which is parallel to and does not intersect with the binary axis), OGLE-2017-BLG-0373 24 because it is an accidental degeneracy without complete temporal coverage of the caustic entrance/exit and KMT-2019-BLG-0371 41 because of the large mass ratio ( q ~ 0.1) and the fact that the offset degeneracy only strictly manifests when q ≪ 1. We also exclude OGLE-2016-BLG-1227 42 and OGLE-2016-BLG-0263 35 because in both cases s min,max ≈ 4 makes it difficult to include in Fig. 3 scale-wise, and because both cases are deep in the ∣ 1 − s ∣ ≫ q 1/3 limit, and are thus already well characterized by the inner–outer degeneracy. Similarly, MOA-2007-BLG-400 29 is also deep in the ∣ 1 − s ∣ ≫ q 1/3 limit and represents one of the few instances where the source passes almost exactly the location of the primary star, thus allowing a degenerate pair of central caustics to manifest. However, the large uncertainty of s wide = 2.9 ± 0.2 translates into an uncertainty in x null that is orders of magnitude larger than the size of the central caustic, and makes it uninformative to include here. We also inspected events with more than two degenerate solutions, and found that the solutions of KMT-2019-BLG-1339 43 and MOA-2015-BLG-337 44 both consist of two pairs of degeneracies, each with their distinct shared mass ratios. For both events, we include the pairs of solutions with planetary mass ratios ( q ≪ 1). Beyond the total of 16 degenerate events retrieved from the NASA microlensing exoplanet archive and discussed above, we further looked for relevant events in the literature that are not included in the NASA exoplanet archive. Additions include the pairs of solutions with planetary mass ratios for OGLE-2011-BLG-0526 9 and OGLE-2011-BLG-0950 9 , as well as the four events with degenerate solutions recently reported in ref. 45 . We also include OGLE-2019-BLG-0960 7 . This results in a final sample of 23 degenerate events. Range of applicability of the offset degeneracy When considering larger q , we find that the qualitative structure of the null persists through q → 1 (Extended Data Figs. 3 and 5 ), suggesting that some form of the offset degeneracy may manifest even for q ≳ 0.1 events. In this regime, there should also be a transition point similar to the close–wide degeneracy that results in x null = 0, but q A = q B may not hold, nor s A = 1/ s B . For example, in the quadrupole and pure-shear approximation, the analogy to the close–wide degeneracy requires Q̂ = γ , where Q̂ = s c ² q c /(1 + q c )² is the quadrupole moment of the close central caustic, and γ = (1/ s w )² q w /(1 + q w ) is the shear of the wide central caustic 5 . Furthermore, it is not clear if the values of q A,B at the x null = 0 close–wide-equivalent transition point remain constant when one of s A and s B undergoes offset. A notable example in the literature is KMT-2019-BLG-0371 41 , where the source trajectory passes through the null created by the two degenerate solutions but q A = 0.123 and q B = 0.079 are substantially different. The exact behaviour of the offset degeneracy for q → 1 should be studied in future work.
We also note that offset-degenerate, caustic-crossing events usually require nearly vertical trajectories because of the additional constraint on the caustic-crossing length. However, oblique trajectories are allowed if the change in caustic width near x null is small for both solutions (for example, OGLE-2019-BLG-0960 7 ). Relevant previous work Inconsistencies of the close–wide and inner–outer degeneracies with degeneracies in observed events have recently been pointed out in the literature. In the analysis of the semiresonant topology event OGLE-2019-BLG-0960, the authors of ref. 7 noticed that, while the close–wide degeneracy is expected to break down as s → 1, there are large numbers of resonant and semiresonant topology events invoking the close–wide degeneracy where one solution has s close > 1 and the other s wide < 1, but they do not satisfy s close = 1/ s wide . They further noted the conceptual similarity to the inner–outer degeneracy for these events, but again noted that this type of degeneracy too is expected to break down in the resonant regime. On the basis of these observations, they speculated that the two degeneracies merge as s → 1. While ref. 7 pointed out inconsistencies for resonant events ( ∣ 1 − s ∣ ≲ q 1/3 ), here we found that inconsistencies with s close = 1/ s wide persist even within the ∣ 1 − s ∣ ≫ q 1/3 regime, in which the two degeneracies are derived and the caustics are well separated. We claim that this inconsistency is fundamentally because caustic degeneracies are only approximately correct in describing magnification degeneracies, irrespective of caustic topology. While small deviations from s close = 1/ s wide in early high-magnification events tend to go unnoticed, resonant events do allow the asymmetry from log( s ) = 0 to be immediately noticeable. For OGLE-2019-BLG-0960, log 10 ( s close ) ≃ −0.001 differs from log 10 ( s wide ) ≃ 0.01 by an order of magnitude. The theoretical follow-up work of ref. 14 studied the behaviour of the close–wide degeneracy in the resonant regime. They first clarified that, rather than |log( s )| ≫ 0, the exact condition of the close–wide degeneracy is ∣ 1 − s ∣ ≫ q 1/3 , which is dependent on the mass ratio. Furthermore, even for ∣ 1 − s ∣ ≲ q 1/3 , the central caustic could still be locally invariant under s ↔ 1/ s for parts of the caustic satisfying ∣ 1 − s e i ϕ ∣ ≫ q 1/3 , where ϕ is a parametric variable that describes the position along the caustic. We note that this fact has also been observed in the earlier work of ref. 46 . They concluded by suggesting that slight changes to s A,B and q A,B may create a local pair of degenerate models, which in some sense anticipated our discovery. Data availability Source data for Figures 2 and 3 have been made available online. Figure 3 data are also partially available in the NASA microlensing exoplanet archive, . Code availability This work utilized the public microlensing code, MulensModel 47 , available at .
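The Figure 3 reanalysis described above reduces to a direct application of equation (2): given one reported solution and the trajectory geometry, predict the degenerate partner. A minimal Python sketch; the input numbers are invented for illustration and are not values from any of the 23 events:

```python
import numpy as np

def predict_s_a(s_b, u0, alpha):
    """Equation (2): predict the degenerate companion separation s_A
    from the other solution s_B and the source trajectory (u0, alpha).

    alpha is the trajectory angle to the binary axis, in radians;
    x0 = u0/sin(alpha) is the trajectory intercept on that axis.
    """
    b = 2 * u0 / np.sin(alpha) - (s_b - 1 / s_b)
    return 0.5 * (b + np.sqrt(b ** 2 + 4))

# Illustrative inputs only: an inner solution s_B < 1 and an oblique
# trajectory; the returned s_A is the offset-degenerate partner that
# would be compared against the second reported solution.
print(predict_s_a(s_b=0.95, u0=0.05, alpha=np.radians(60)))
```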
Artificial intelligence (AI) algorithms trained on real astronomical observations now outperform astronomers in sifting through massive amounts of data to find new exploding stars, identify new types of galaxies and detect the mergers of massive stars, accelerating the rate of new discovery in the world's oldest science. But AI, also called machine learning, can reveal something deeper, University of California, Berkeley, astronomers found: Unsuspected connections hidden in the complex mathematics arising from general relativity—in particular, how that theory is applied to finding new planets around other stars. In a paper appearing this week in the journal Nature Astronomy, the researchers describe how an AI algorithm developed to more quickly detect exoplanets when such planetary systems pass in front of a background star and briefly brighten it—a process called gravitational microlensing—revealed that the decades-old theories now used to explain these observations are woefully incomplete. In 1936, Albert Einstein himself used his new theory of general relativity to show how the light from a distant star can be bent by the gravity of a foreground star, not only brightening it as seen from Earth, but often splitting it into several points of light or distorting it into a ring, now called an Einstein ring. This is similar to the way a hand lens can focus and intensify light from the sun. But when the foreground object is a star with a planet, the brightening over time—the light curve—is more complicated. What's more, there are often multiple planetary orbits that can explain a given light curve equally well—so-called degeneracies. That's where humans simplified the math and missed the bigger picture. The AI algorithm, however, pointed to a mathematical way to unify the two major kinds of degeneracy in interpreting what telescopes detect during microlensing, showing that the two "theories" are really special cases of a broader theory that the researchers admit is likely still incomplete. "A machine learning inference algorithm we previously developed led us to discover something new and fundamental about the equations that govern the general relativistic effect of light-bending by two massive bodies," Joshua Bloom wrote in a blog post last year when he uploaded the paper to a preprint server, arXiv. Bloom is a UC Berkeley professor of astronomy and chair of the department. He compared the discovery by UC Berkeley graduate student Keming Zhang to connections that Google's AI team, DeepMind, recently made between two different areas of mathematics. Taken together, these examples show that AI systems can reveal fundamental associations that humans miss. "I argue that they constitute one of the first—if not the first—time[s] that AI has been used to directly yield new theoretical insight in math and astronomy," Bloom said. "Just as Steve Jobs suggested computers could be the bicycles of the mind, we've been seeking an AI framework to serve as an intellectual rocket ship for scientists." "This is kind of a milestone in AI and machine learning," emphasized co-author Scott Gaudi, a professor of astronomy at The Ohio State University and one of the pioneers of using gravitational microlensing to discover exoplanets. "Keming's machine learning algorithm uncovered this degeneracy that had been missed by experts in the field toiling with data for decades. This is suggestive of how research is going to go in the future when it is aided by machine learning, which is really exciting."
The manifestation of the offset degeneracy in source-plane magnification difference maps (top) and light curves (bottom). Credit: Nature Astronomy (2022). DOI: 10.1038/s41550-022-01671-6 Discovering exoplanets with microlensing More than 5,000 exoplanets, or extrasolar planets, have been discovered around stars in the Milky Way, though few have actually been seen through a telescope—they are too dim. Most have been detected because they create a Doppler wobble in the motions of their host stars or because they slightly dim the light from the host star when they cross in front of it—transits that were the focus of NASA's Kepler mission. Only slightly more than 100 have been discovered by a third technique, microlensing. One of the main goals of NASA's Nancy Grace Roman Space Telescope, scheduled to launch by 2027, is to discover thousands more exoplanets via microlensing. The technique has an advantage over the Doppler and transit techniques in that it can detect lower-mass planets, including those the size of Earth, that are far from their stars, at a distance equivalent to that of Jupiter or Saturn in our solar system. Bloom, Zhang and their colleagues set out two years ago to develop an AI algorithm to analyze microlensing data faster to determine the stellar and planetary masses of these planetary systems and the distances the planets are orbiting from their stars. Such an algorithm would speed analysis of the likely hundreds of thousands of events the Roman telescope will detect in order to find the 1% or fewer that are caused by exoplanetary systems. One problem astronomers encounter, however, is that the observed signal can be ambiguous. When a lone foreground star passes in front of a background star, the brightness of the background star rises smoothly to a peak and then drops symmetrically to its original brightness. It's easy to understand mathematically and observationally. But if the foreground star has a planet, the planet creates a separate brightness peak within the peak caused by the star. When trying to reconstruct the orbital configuration of the exoplanet that produced the signal, general relativity often allows two or more so-called degenerate solutions, all of which can explain the observations. To date, astronomers have generally dealt with these degeneracies in simplistic and artificially distinct ways, Gaudi said. If the distant starlight passes close to the star, the observations could be interpreted either as a wide or a close orbit for the planet—an ambiguity astronomers can often resolve with other data. A second type of degeneracy occurs when the background starlight passes close to the planet. In this case, however, the two different solutions for the planetary orbit are generally only slightly different. According to Gaudi, these two simplifications of two-body gravitational microlensing are usually sufficient to determine the true masses and orbital distances. In fact, in a paper published last year, Zhang, Bloom, Gaudi and two other UC Berkeley co-authors, astronomy professor Jessica Lu and graduate student Casey Lam, described a new AI algorithm that does not rely on knowledge of these interpretations at all. The algorithm greatly accelerates analysis of microlensing observations, providing results in milliseconds, rather than days, and drastically reducing the computer crunching.
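The smooth, symmetric single-lens brightening described above follows the standard point-source, point-lens magnification formula A(u) = (u² + 2) / (u·√(u² + 4)). The short sketch below generates such a light curve; the event parameters (t0, u0, tE) are illustrative made-up values, not drawn from any real event:

```python
import numpy as np

def paczynski_magnification(u):
    """Point-source, point-lens magnification as a function of the
    lens-source separation u (in Einstein-radius units)."""
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

# Illustrative event parameters (hypothetical, not from any real event):
t0, u0, tE = 0.0, 0.1, 25.0  # peak time [days], impact parameter, Einstein time [days]

t = np.linspace(-60, 60, 601)               # days relative to peak
u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)   # lens-source separation vs time
A = paczynski_magnification(u)

print(f"peak magnification A(u0) = {A.max():.2f}")  # ~ 1/u0 = 10 for small u0
# A(t) rises smoothly to a single symmetric peak; a planet around the lens star
# would superpose a short-lived secondary perturbation on this curve.
```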
Zhang then tested the new AI algorithm on microlensing light curves from hundreds of possible orbital configurations of star and exoplanet and noticed something unusual: There were other ambiguities that the two interpretations did not account for. He concluded that the commonly used interpretations of microlensing were, in fact, just special cases of a broader theory that explains the full variety of ambiguities in microlensing events. "The two previous theories of degeneracy deal with cases where the background star appears to pass close to the foreground star or the foreground planet," Zhang said. "The AI algorithm showed us hundreds of examples from not only these two cases, but also situations where the star doesn't pass close to either the star or planet and cannot be explained by either previous theory. That was key to us proposing the new unifying theory." Gaudi was skeptical, at first, but came around after Zhang produced many examples where the previous two theories did not fit observations and the new theory did. Zhang actually looked at the data from two dozen previous papers that reported the discovery of exoplanets through microlensing and found that in all cases, the new theory fit the data better than the previous theories. "People were seeing these microlensing events, which actually were exhibiting this new degeneracy, but just didn't realize it," Gaudi said. "It was really just the machine learning looking at thousands of events where it became impossible to miss." Zhang and Gaudi have submitted a new paper that rigorously describes the new mathematics based on general relativity and explores the theory in microlensing situations where more than one exoplanet orbits a star. The new theory technically makes interpretation of microlensing observations more ambiguous, since there are more degenerate solutions to describe the observations. But the theory also demonstrates clearly that observing the same microlensing event from two perspectives—from Earth and from the orbit of the Roman Space Telescope, for example—will make it easier to settle on the correct orbits and masses. That is what astronomers currently plan to do, Gaudi said. "The AI suggested a way to look at the lens equation in a new light and uncover something really deep about the mathematics of it," said Bloom. "AI is sort of emerging as not just this kind of blunt tool that's in our toolbox, but as something that's actually quite clever. Alongside an expert like Keming, the two were able to do something pretty fundamental."
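To make the caustic geometry behind these degeneracies concrete, here is a small numerical sketch of our own (not code from the paper) that computes binary-lens caustics by solving the standard quartic for the critical curves and mapping the roots through the lens equation; it compares the central caustic of a wide solution with separation s against its close counterpart at 1/s:

```python
import numpy as np

def caustic_points(s, q, n_phi=500):
    """Binary-lens caustics: solve the standard critical-curve quartic for each
    phase phi, then map the roots to the source plane via the lens equation.
    Primary mass at z=0, secondary at z=s (projected separation, Einstein units)."""
    m1, m2 = 1.0 / (1.0 + q), q / (1.0 + q)
    pts = []
    for phi in np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False):
        e = np.exp(-1j * phi)
        # e*z^2*(z-s)^2 - m1*(z-s)^2 - m2*z^2 = 0, expanded in powers of z:
        roots = np.roots([e, -2 * e * s, e * s**2 - 1.0, 2 * m1 * s, -m1 * s**2])
        # Lens equation maps each critical-curve point to a caustic point:
        pts.extend(z - m1 / np.conj(z) - m2 / np.conj(z - s) for z in roots)
    return np.array(pts)

def central_caustic(s, q):
    """Keep only the caustic near the primary and centre it on its centroid."""
    pts = caustic_points(s, q)
    central = pts[np.abs(pts) < 0.1]  # central caustic has size ~ q; the
    return central - central.mean()   # planetary caustics lie ~|s - 1/s| away

q = 1e-3
wide, close = central_caustic(1.25, q), central_caustic(1 / 1.25, q)
# With |1 - s| >> q**(1/3) (here 0.25 and 0.2 versus q**(1/3) = 0.1), the two
# recentred central caustics should nearly coincide -- the classical s <-> 1/s
# (close-wide) invariance. Plotting wide.real/wide.imag over close.real/close.imag
# makes the near-overlap visible.
print(len(wide), len(close))
```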
10.1038/s41550-022-01671-6
Medicine
Addicted to Ran, ovarian cancer cells stop moving when deprived
Kossay Zaoui et al. Ran promotes membrane targeting and stabilization of RhoA to orchestrate ovarian cancer cell invasion, Nature Communications (2019). DOI: 10.1038/s41467-019-10570-w Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-019-10570-w
https://medicalxpress.com/news/2019-07-addicted-ran-ovarian-cancer-cells.html
Abstract Ran is a nucleocytoplasmic shuttle protein that is involved in cell cycle regulation, nuclear-cytoplasmic transport, and cell transformation. Ran plays an important role in cancer cell survival and cancer progression. Here, we show that, in addition to the nucleocytoplasmic localization of Ran, this GTPase is specifically associated with the plasma membrane/ruffles of ovarian cancer cells. Ran depletion has a drastic effect on RhoA stability, inhibiting RhoA localization to the plasma membrane/ruffles and RhoA activity. We further demonstrate that the DEDDDL domain of Ran is required for the interaction with serine 188 of RhoA, which prevents RhoA degradation by the proteasome pathway. Moreover, the knockdown of Ran leads to a reduction of ovarian cancer cell invasion by impairing RhoA signalling. Our findings provide advanced insights into the mode of action of the Ran-RhoA signalling axis and may represent a potential therapeutic avenue for drug development to prevent ovarian tumour metastasis. Introduction Epithelial ovarian cancer (EOC) is the deadliest of all female reproductive system cancers worldwide, with 140,000 deaths each year 1, 2, 3. Because the disease is largely asymptomatic, the vast majority of patients are diagnosed at an advanced stage, which is responsible for the poor prognosis 4. We have demonstrated that the small GTPase Ran (Ras-related nuclear protein) is strongly associated with EOC progression, poor overall survival, and a high risk of recurrence 5, 6. Ran is a master regulator of nucleocytoplasmic transport 7, 8 and mitotic spindle formation, which are necessary for cell proliferation and cell cycle progression 7, 9. Indeed, we have shown that depletion of Ran prevents EOC cell proliferation in vitro and results in EOC tumor growth arrest in vivo 10. RhoA is one of the most-studied Rho GTPases; it is activated by guanine-nucleotide exchange factors (GEFs) and inactivated by guanine-nucleotide dissociation inhibitors (GDIs), which prevent its interaction with the plasma membrane (PM), but not necessarily with downstream targets 11. In addition, the RhoA protein contains a CAAX motif that influences its targeting to specific PM microdomains 12. However, the CAAX-signaled post-translational modification alone is not sufficient to promote the full RhoA membrane association that is required for its proper function 13, 14. RhoA GTPase coordinately regulates multiple aspects of tumor cell invasion 15, and its expression is significantly associated with poor tumor differentiation and advanced stages of ovarian cancer 16. Here, we investigate the mechanism through which Ran modulates ovarian tumor progression. We find that Ran can localize to the PM, where it forms a complex with RhoA GTPase, leading to RhoA stabilization and activation. Our findings describe a signaling pathway involving Ran that regulates EOC invasion through RhoA GTPase activity and may lead to alternative therapeutic strategies for ovarian cancer. Results Ran stabilizes and co-localizes with RhoA Ran, a member of the Ras GTPase family, has been demonstrated to control numerous cellular processes in cancer, including cell proliferation and tumor cell invasion/migration associated with a metastatic phenotype 17, 18, 19. We have previously demonstrated that Ran is overexpressed in invasive high-grade serous EOC cells 6; however, the role of Ran in EOC cell invasion remains unclear.
To address this, we examined the effect of Ran depletion by RNA interference (RNAi) in two aggressive EOC cell lines (TOV-112D and TOV-1946) derived in our laboratory 20, 21 (Fig. 1a; Supplementary Fig. 1a). Video microscopy analysis revealed that TOV-112D cells with siRNA-mediated knockdown (KD) of Ran exhibited reduced spreading and motility while producing long projections at the trailing end of cells, in comparison with control TOV-112D cells (Supplementary Fig. 1b and Movies 1, 2). Fig. 1 Ran GTPase stabilizes and co-localizes with RhoA at the plasma membrane of TOV-112D cells. a Western blot of Ran knockdown (KD) with siRNA (CTRL, Ran #1 or 2) and rescue levels with different RNAi-resistant 2xGFP constructs of Ran as wild-type (WT), dominant active (DA), and dominant-negative (DN) in TOV-112D. Actin served as a loading control for all blots. b Western blot showing RhoA and RhoC protein expression levels after Ran KD in cells. c Western blot showing RhoA protein level after re-expression of 2xGFP-Ran WT (Ran WT rescue) or treatment for 2 h with 20 µM MG-132 in cells transfected with CTRL or Ran #2 siRNA. d Active RhoA was examined in cell lysates of control (CTRL), Ran KD or Ran KD with Ran WT rescued. All values are means ± SEM from three independent experiments. P-values are based on comparisons with CTRL using the t test: * P < 0.05 was considered statistically significant. Western blot showing total RhoA. e, f Cell body (CB) and lamellipodia (LP) of CTRL and Ran KD cells (with or without Ran WT rescued) were fractionated and treated with or without 20 µM MG-132 for 2 h. Equal amounts of proteins were immunoblotted to show RhoA expression in the respective fractions. RhoA was decreased in CB and LP in Ran KD cells, but unchanged in CTRL. RhoA expression is only rescued in CB fractions after treatment with MG-132. g Top, TOV-112D cells were fixed, permeabilized, and subjected to immunofluorescence using Ran and RhoA antibodies and DAPI (Merge). Bottom, TOV-112D cells transfected with 2xGFP-Ran and mCherry-RhoA were visualized by spinning disk microscopy. Arrows show Ran and RhoA colocalization at the plasma membrane. h TOV-112D cells were transfected with RanBP1-GFP. Protein lysates were subjected to IP with Ran or control IgG antibodies. Proteins were separated by SDS-PAGE and immunoblotted for endogenous RhoA and Ran. i Protein lysates from TOV-112D and ARPE-19 cells were subjected to IP with Ran or control IgG antibodies. Proteins were separated by SDS-PAGE and immunoblotted for endogenous RhoA and Ran. Scale bars, 10 µm Full size image This Ran KD-induced phenotype of elongated cells with pronounced tails is similar to the disrupted RhoA signaling phenotype that has been observed in other systems 22, 23, 24. Therefore, experiments were first carried out to examine RhoA protein levels following Ran KD, which demonstrated a drastic decrease in RhoA protein levels (Fig. 1b; Supplementary Fig. 1c). We found a similar effect on RhoA protein levels by targeting the 3′-untranslated region of Ran mRNA or its coding region using either siRNA#2 or siRNA#1, respectively (Fig. 1b; Supplementary Fig. 1c). In contrast, RhoC protein levels were not altered (Fig. 1b; Supplementary Fig. 1c), despite RhoC's extensive protein sequence similarity with RhoA 11. Importantly, re-expression of an RNAi-resistant Ran wild-type (2xGFP-Ran WT, a plasmid containing only the coding sequence) rescued RhoA protein levels in Ran KD cells (Fig. 1a, c; Supplementary Fig.
1a, d) and emphasized the specificity of this response to Ran. Moreover, Ran KD did not alter mRNA levels of RhoA, Rac1, and Cdc42 (Supplementary Fig. 1e), providing further evidence that the effect of Ran on RhoA protein expression was specific and not due to the inhibition of transcription. Ubiquitination has been reported as a major post-translational modification that regulates RhoA protein stability 25. To determine whether Ran is implicated in the reduced RhoA levels through the ubiquitin proteasome system, Ran KD cells were treated with the proteasome inhibitor MG-132. We found that MG-132 treatment of Ran KD cells rescued RhoA expression (Fig. 1c; Supplementary Fig. 1d). These results suggest that Ran stabilizes RhoA protein by inhibiting its degradation by the proteasome. RhoA is localized to the cytosol in mammalian cells and has been reported to translocate to the leading edge of migrating cells and to membrane ruffles upon activation with, for example, FBS 26. However, the increase in RhoA protein levels following MG-132 treatment in Ran KD cells does not reflect the activation status of RhoA. To test this, we performed a GTPase activity assay to determine any change in RhoA activity in response to MG-132 treatment. In the absence of Ran, the decrease in RhoA activity is due to low expression levels of the total RhoA protein in TOV-112D cells under these conditions. However, in Ran KD cells treated with MG-132, the total level of RhoA protein is similar to control cells, but the RhoA activity is significantly diminished (Fig. 1d). We hypothesized that the reduction of RhoA activity may be due to the absence of RhoA localization to the PM, which is required for RhoA function. To examine the effect of Ran on RhoA cellular localization, Ran KD cells were fractionated to separate PM/lamellipodia and cell body-enriched fractions 13. Analysis of protein lysates confirmed that RhoA protein levels were decreased in the PM/lamellipodia and cell body-enriched fractions of Ran-depleted cells (Fig. 1e; Supplementary Fig. 1f). Interestingly, RhoA was observed only in cell body-enriched fractions of Ran KD cells treated with MG-132 (Fig. 1f), indicating that RhoA localization to the PM cannot occur in the absence of Ran even after proteasome inhibition. Together, these findings demonstrate that Ran specifically controls RhoA stability and localization to the leading edge of migrating cells. We also demonstrated the co-immunoprecipitation of endogenous Ran and RhoA (Fig. 1h), which can be specifically disrupted by RanBP1 overexpression (Fig. 1h). To test whether the association between Ran and RhoA can occur in non-cancer cells, cell lysates from ARPE-19 cells (a human retinal pigment epithelial cell line) were subjected to co-immunoprecipitation with endogenous proteins Ran and RhoA. Unlike in TOV-112D cells, no interaction was detected in ARPE-19 cells (Fig. 1i), suggesting that the association of Ran with RhoA is specific to ovarian cancer cells. Ran promotes RhoA recruitment to the plasma membrane In addition to Ran's role in nuclear transport, other examples of Ran's involvement in cytoplasmic signaling pathways have recently included endocytic transport 27, 28 and the regulation of neuronal outgrowth 28, 29. From budding yeast to mammalian epithelia, Ran is frequently associated with polarized activation of the Rho GTPase Cdc42 30, 31.
Moreover, Ran also regulates the Arp2/3 complex 32 and ERM (Ezrin/Radixin/Moesin) activation 33, both of which are signal transducers often linked to RhoA GTPase signaling 34, 35. However, these studies of Ran effector functions were largely limited to their structural effects, and their role in cancer cell migration/invasion has yet to be elucidated. Since our findings point to a role for Ran in the recruitment of RhoA to the PM, we further examined the subcellular localization and dynamics of Ran in response to serum, which is known to cause Ran activation 36. We found that Ran is localized predominantly in the nucleus under serum starvation conditions (Supplementary Fig. 1g, h). After 30 min of serum stimulation, Ran was found in the cytoplasm and appeared mainly associated with the nuclear envelope and the PM/ruffles (Supplementary Fig. 1g, h). When cells were stimulated with serum for 1 h, most Ran re-localized to the nucleus, but a pool of Ran remained bound to the PM/ruffles (Supplementary Fig. 1g, h). Microscopy analysis showed that a portion of Ran, both endogenous and exogenous (2xGFP-Ran), colocalized with RhoA at the PM/ruffles (Figs. 1g, 2b). Consistent with our previous results in Fig. 1f using MG-132 treatment, RhoA was not able to localize to the PM/ruffles in Ran KD cells (Fig. 2a; Supplementary Fig. 2a, b). However, treatment with MG-132 did not affect the localization of RhoA in control cells (Fig. 2a). Taken together, these results confirm that Ran is involved in RhoA localization to the PM/ruffles. Fig. 2 Ran GTPase promotes RhoA recruitment to the plasma membrane by direct interaction. a TOV-112D cells were either starved or incubated with 10% FBS, treated with or without 20 µM MG-132 for 2 h, and transfected with CTRL or Ran siRNA as indicated. Cells were then fixed, permeabilized, and subjected to immunofluorescence using Ran and RhoA antibodies and DAPI (Merge). Cells were visualized by spinning disk microscopy. Arrows show Ran and RhoA colocalization at the plasma membrane. Scale bars, 10 µm. b Colocalization between RhoA (red) and Ran (green) was represented as Pearson's correlation coefficient and measured in individual TOV-112D cells, either starved or with 10% FBS. All values are means ± SEM from three independent experiments. P-values are based on comparisons with CTRL using the t test: * P < 0.05 was considered statistically significant. c TOV-112D cells co-transfected with RanBP1-Flag or RhoA-Flag and GFP alone or 2xGFP-Ran (WT) were starved or incubated with 10% FBS, as indicated. Cell lysates were subjected to immunoprecipitation (IP) with an anti-GFP or an anti-Flag antibody and western blotted as shown. GFP alone was used as a negative control and RanBP1-Flag as a positive control. d – g TOV-112D cells co-transfected with Myc-RhoA (WT, DA, or DN) and 2xGFP-Ran (WT, DA, or DN) were starved or incubated with 1% or 10% FBS, as indicated. Cell lysates were subjected to immunoprecipitation (IP) with an anti-GFP or an anti-Myc antibody and western blotted as shown. h TOV-112D cells transfected with 2xGFP-Ran (WT, DA or DN), lysed, and subjected to IP with an anti-GFP antibody. Protein complexes were separated by SDS-PAGE and transferred to the nitrocellulose membrane.
The membranes were incubated with free GST protein (negative control) or fusion protein GST-RhoA (GDP or GTPγS) and immunoblotted with anti-GST antibody Full size image To further determine whether RhoA localization to the PM/ruffles was mediated through Ran, we examined whether Ran could associate with RhoA. Ran WT co-immunoprecipitated with RhoA and the positive control RanBP1 only in the presence of 10% serum (Fig. 2c–e), suggesting that this interaction was dependent on both their activation states and their localization. However, no interaction was detected using the GFP empty vector, confirming the specificity of the Ran interaction with RhoA (Fig. 2c). Consistent with this interpretation, our results with 10% serum showed that RhoA mutants adopting either a dominant-active (DA) or dominant-negative (DN) conformation co-immunoprecipitated with Ran (Fig. 2f). This indicated that Ran transport out of the nucleus was necessary for this interaction, and that Ran re-located to the PM/ruffles as shown in Supplementary Fig. 1g. Similarly, reciprocal co-immunoprecipitations confirmed the ability of the active form (DA) of Ran, which is less concentrated in the nucleus 37, to bind RhoA under serum starvation conditions (Fig. 2g). In the presence of 10% serum, RhoA could efficiently bind to Ran WT and Ran DA but not the dominant-negative form (DN) of Ran, which is localized in the nucleus (Fig. 2g) 37. To determine whether the interaction between Ran and RhoA was direct, we used far-western blot analysis to examine the ability of Ran WT, Ran DA, and Ran DN to interact with RhoA purified from bacteria as GST-RhoA GDP or GTPγS fusion proteins. Interestingly, RhoA associated with Ran WT, Ran DA, and Ran DN, as detected by anti-GST antibody (Fig. 2h). Taken together, these data demonstrated a previously undescribed direct interaction between Ran and RhoA that was dependent on Ran localization but not its activity. Serine 188 of RhoA is crucial for RhoA and Ran interaction Because the C-terminus of RhoA is essential for correct localization of this protein 11, we generated multiple RhoA mutants (Fig. 3a) and performed co-immunoprecipitations to identify the precise domain motif of RhoA that interacts with Ran. The deletion of the RRGKKKS residues at the C-terminus of RhoA disrupted the interaction with Ran (Fig. 3b). Moreover, the phosphorylation of serine 188 within the RRGKKKS residues protects RhoA from ubiquitin-mediated proteasomal degradation 38, and the removal of serine 188 disrupted the interaction of RhoA with Ran (Fig. 3b). To directly analyse whether the phosphorylation of serine 188 specifically affects RhoA and Ran binding, we performed a co-immunoprecipitation using the serine 188 phosphomimetic RhoA (S188E) 38, which revealed that RhoA S188E failed to co-immunoprecipitate with Ran (Fig. 3b). These results thus provide evidence that serine 188 of RhoA is crucial for the Ran-RhoA interaction. Fig. 3 The serine 188 of RhoA is required for RhoA interaction with the DEDDDL polyacid domain of Ran. a Schematic of RhoA and RhoC mutant constructs. b TOV-112D cells were co-transfected with 2xGFP-Ran (WT) and either control, Myc-RhoA (WT), Myc-RhoA (ΔRRGKKKS), Myc-RhoA (ΔS188), or Myc-RhoA (S188E) followed by an immunoprecipitation (IP) using an anti-GFP antibody and western blotted as shown.
c TOV-112D cells were co-transfected with 2xGFP-Ran (WT) and either control, Myc-RhoA (WT), Myc-RhoC (WT), Myc-RhoC-A, or Myc-RhoC-S188 followed by an IP with an anti-GFP antibody and western blotted as shown. d TOV-112D cells were co-transfected with 2xGFP-Ran (WT) and either control, Myc-RhoA (WT), Myc-RhoA (PI), Myc-RhoC (WT), Myc-RhoC-S188, or Myc-RhoC (LV) followed by an IP with an anti-GFP antibody and western blotted as shown. e TOV-112D cells were co-transfected with Myc-RhoA (WT) or 2xGFP-Ran (WT) or EGFP-Ran ΔCT (Ran without DEDDDL motif) followed by an IP with an anti-Myc antibody and western blotted as shown Full size image Although the amino acid sequences of RhoA and RhoC are 88% identical, there exists a major divergence in their C-terminal regions 11. We demonstrated that RhoC fails to co-immunoprecipitate with Ran (Supplementary Fig. 2c). To confirm the specificity of serine 188 for the interaction of RhoA with Ran, we generated two mutants of RhoC, in which the RRGKKKS amino acids (RhoC-A) or serine 188 alone (RhoC-S188) were substituted at the corresponding positions to mimic the sequence of RhoA WT (Fig. 3a). We found that both mutants RhoC-A and RhoC-S188 co-immunoprecipitated with Ran (Fig. 3c), confirming that serine 188 is required for RhoA interaction with Ran. To further define the specific role of serine 188, two other mutants were created, RhoA PI and RhoC LV, in which the corresponding amino acids PI and LV in the hypervariable region downstream of S188 were exchanged between RhoA and RhoC (Fig. 3a). RhoA PI displayed a similar interaction with Ran as RhoA WT. However, unlike RhoC-S188, RhoC LV did not bind to Ran (Fig. 3d). Taken together, these results indicate that serine 188 of RhoA is indispensable for the interaction with Ran. Given the role of the carboxyl-terminal DEDDDL domain of Ran in nucleocytoplasmic transport 39, 40, a co-immunoprecipitation was performed using a Ran mutant lacking the conserved acidic DEDDDL domain (Ran ΔCT). We found that the deletion of the DEDDDL motif of Ran perturbs its interaction with RhoA (Fig. 3e), demonstrating that the DEDDDL motif of Ran is required for its interaction with RhoA in a transient/competitive manner. Ran recruits RhoA to subcellular structures Given that Ran and RhoA colocalized to the PM/ruffles of TOV-112D cells (Fig. 1g) and Ran forms a complex with RhoA (Figs. 1h, 2c), we reasoned that Ran could recruit RhoA to the PM/ruffles, allowing spatially restricted activation of RhoA signaling in migrating cancer cells. To explore whether Ran is selectively required for RhoA recruitment to the PM/ruffles, Ran was targeted to a different subcellular membrane, the mitochondria. A chimeric fusion protein (MitoGFP-Ran WT) was generated, which colocalized with a mitochondrial probe, MitoTracker® (Supplementary Fig. 2d). Importantly, mCherry-RhoA WT localized to the mitochondria following co-expression with MitoGFP-Ran WT (Fig. 4a). EGFP-Ran ΔCT (Ran without DEDDDL motif) failed to localize to the PM/ruffles of cells, and RhoA remained localized to the cytoplasm (Fig. 4b, c). These results support a role for Ran in recruiting RhoA to the PM/ruffles. Fig. 4 Ran GTPase recruits RhoA to subcellular structures. TOV-112D cells expressing mCherry-RhoA WT were co-transfected with 2xGFP-Ran or MitoGFP-Ran WT ( a ), EGFP-Ran ΔCT (Ran without DEDDDL motif), or MitoGFP-Ran ΔCT (Ran without DEDDDL motif) ( b ).
Cells were visualized by spinning disk microscopy to establish the localization of RhoA with respect to MitoGFP-Ran or MitoGFP-Ran ΔCT (Ran without DEDDDL motif). Scale bars, 10 µm. c Left, percentage of TOV-112D cells with the corresponding phenotype as in ( a, b ) for RanWT/RhoA or Ran ΔCT/RhoA colocalization or not to the mitochondria was scored. Right, colocalization between RhoA (red) and Ran (green) represented as Pearson's correlation coefficient and measured for individual TOV-112D cells. All values are means ± SEM from three independent experiments. P-values are based on comparisons with CTRL (Ran WT vs RhoA) using the t test: * P < 0.05 was considered statistically significant. d TOV-112D cells co-transfected with CTRL or RhoA siRNA and EGFP constructs of either RhoA (WT), RhoA ΔRRGKKKS, RhoA S188E or RhoA ΔS188, treated for 2 h with 20 µM MG-132 as indicated. Cells were visualized by spinning disk microscopy. Scale bars, 10 µm. e Percentage of TOV-112D cells with the corresponding phenotype as in ( d ) for RhoA localization at the PM, or not, was scored Full size image Serine 188 protects RhoA from proteasome-mediated degradation 38. To better characterize the Ran/RhoA association, we investigated the subcellular localization of the mutants RhoA ΔRRGKKKS, RhoA S188E and RhoA ΔS188 following their overexpression either alone or with MitoGFP-Ran WT in TOV-112D cells treated with MG-132 to stabilize the expression of these mutants. In contrast to RhoA WT, we found that these mutants do not accumulate at the PM/ruffles (Fig. 4d, e). Furthermore, in the majority of TOV-112D cells, these mutants do not follow MitoGFP-Ran WT to the mitochondria (Supplementary Fig. 2e, 3a, b). Taken together, these results show that serine 188 of RhoA and the C-terminal domain of Ran are necessary for their interaction and consequent association with the PM/ruffles. Ran-RhoA pathway regulates cell proliferation and invasion Despite frequent reports of Ran involvement in invasion and metastasis of tumor cells, little is known about the corresponding molecular mechanism 17, 18, 41. Therefore, we explored the effect of Ran-RhoA signaling on the migratory and invasive abilities of EOC cells on the third day post transfection, in order to avoid migration/invasion results biased by the cell death seen at later time points (see the Methods section). We found that Ran-depleted cells showed decreased migration, with the net velocity of living cells significantly reduced from 0.23 µm/min to 0.1 µm/min (Fig. 5a). The re-introduction of Ran WT to Ran KD cells rescued cell velocity (Fig. 5a). Similarly, the depletion of Ran or RhoA significantly reduced cell invasion, and the re-introduction of Ran WT to the corresponding Ran KD cells rescued this altered cell invasion (Fig. 5b). However, the expression of Ran ΔCT in Ran KD cells did not restore cell invasion. In contrast to RhoA KD cells expressing RhoA WT, RhoC-A, or RhoC-S188 constructs, RhoA KD cells expressing RhoC WT or the RhoA mutants were not rescued and remained attenuated in EOC cell invasion (Fig. 5b). To our knowledge, this is the first report demonstrating the relationship between Ran and RhoA signaling in the control of EOC cell invasion. Next, we used a genetically encoded red fluorescent protein fusion, KillerRed-membrane, which is activated under appropriate light excitation to efficiently kill cells and selectively disrupt protein–protein interactions at the PM 42.
As a complementary approach, GFP-RhoA WT and a chimera of Ran fused with KillerRed-membrane (Ran-KillerRed) were transiently expressed in Ran KD TOV-112D cells to exclusively target Ran to the PM, confirming the role of Ran in the recruitment of RhoA to the PM/ruffles and the effect on cancer cell proliferation and invasion. Fig. 5 Ran regulates cell proliferation and migration/invasion through RhoA recruitment. a TOV-112D cells were transfected with siRNAs, and Ran WT as indicated for cell migration assays. Left, cell velocity was determined by tracking living cells. Right, analysis of cell migration paths in CTRL and Ran KD cells. The data represent the trajectories of 30 cells. All values are means ± SEM from three independent experiments. P-values are based on comparisons with CTRL using the t test: * P < 0.05 was considered statistically significant. b Effect of Ran-RhoA signaling with or without MG-132 treatment on transwell cell invasion. The invading TOV-112D cells passed through the membrane and were fixed, stained, and quantified as described in the Methods section. All values are means ± SEM from three independent experiments. P-values are based on comparisons with CTRL using the t test: * P < 0.05 was considered statistically significant. c Left, TOV-112D cells co-expressing Ran-KillerRed and GFP-RhoA were irradiated with green light for 60 s. The illumination resulted in a considerable decrease in GFP membrane signal (arrowheads), confirming RhoA detachment from the plasma membrane after light-induced damage of Ran. Right, control experiment showing TOV-112D cells co-expressing KillerRed and GFP-RhoA-CCKVL irradiated with green light for 60 s. No change in GFP signal distribution from the plasma membrane was observed. Scale bars, 10 µm. d Graph shows TOV-112D cell proliferation plotted over time (from the third day post transfection) for each condition as indicated, normalized to the corresponding non-activated condition. Values (means ± SEM) from three independent experiments are shown as ratio change in cell survival. P-values are based on comparisons with CTRL using the t test: * P < 0.05 was considered statistically significant. e Transwell-invasion assay using transwell chamber before and after KillerRed inactivation. NA non-activated, ACT activated. The data from three independent experiments are expressed as percent change (means ± SEM) compared with the controls. P-values are based on comparisons with the KillerRed-alone activated condition using the t test: * P < 0.05 was considered statistically significant Full size image Intracellular localization of the GFP-RhoA signal was monitored before and after light inactivation of Ran-KillerRed using spinning disk microscopy. As expected, GFP-RhoA accumulated constitutively with Ran-KillerRed at the PM/ruffles (Fig. 5c). Ran-KillerRed inactivation drastically affected RhoA association with the PM/ruffles (Fig. 5c) and consequently disrupted Ran binding with RhoA (Supplementary Fig. 3c). However, no change in GFP signal distribution from the plasma membrane was observed in cells expressing RhoA-CCKVL, a fusion protein containing wild-type RhoA and the palmitoylation motif of RhoB that promotes constitutive RhoA membrane localization (Fig. 5c) 13.
To highlight the importance of the PM association of Ran with RhoA signaling for cell proliferation and invasion, we carried out a TOV-112D cell proliferation assay and transwell-invasion assay before and after inactivation of KillerRed alone, Ran-KillerRed and RhoA-KillerRed (Fig. 5d, e). Cell invasion was tested independently of any effect on cell proliferation, as described in the Methods section, and we found that Ran-KillerRed or RhoA-KillerRed inactivation resulted in a more pronounced inhibition of TOV-112D cell proliferation and invasion compared with the KillerRed vector alone (Fig. 5d, e), although the effect on cell invasion was more marked than that on proliferation. These results underline that the role of Ran in ovarian cancer cell invasion, and to a lesser extent cell proliferation, is dependent on RhoA localization/signaling at the PM. Discussion Ran, a member of the Ras GTPase family, has been shown to activate several cancer signaling pathways 10, 41. In this study, we have identified an original role for Ran in the vicinity of the PM in controlling tumor cell invasion, by functionally and specifically linking it to RhoA signaling. This discovery sheds new light on the role of Ran in ovarian cancer cell growth and metastasis formation. We demonstrate here that downregulation of Ran affects ovarian cancer cell proliferation and invasion through proteasome-mediated degradation of RhoA, which leads to the loss of PM-restricted RhoA activity. Ran is a plurifunctional protein, and here we show for the first time its localization to the PM/ruffles. This role is summarized in our model (Supplementary Fig. 3d). Interestingly, Ran has been reported at sites distant from neuronal nuclei, in association with the microtubule motor dynein 43. These findings suggest a mechanism whereby Ran could play a role in microtubule-dependent cellular functions, such as membrane vesicle transport between intracellular compartments, including the plasma membrane and the nucleus. Moreover, it has been shown that Ran can be secreted and distributed between cells, thereby contributing to the localization of Ran to the plasma membrane 44. Given the ability of Ran to move from cell to cell and its association with microtubule cytoskeletal elements, it is tempting to speculate that intracellular transport of Ran-loaded cargoes destined for secretion occurs through the export complex. One exciting possibility, although speculative, is that this long-range trafficking of Ran could be a mechanism to explain why a fraction of Ran localizes to the plasma membrane. However, this hypothesis requires further study. The existence of RhoA in the nucleus has been reported, where it is implicated in regulating the transcriptional activities of specific genes and in the DNA damage response 45, 46, 47. Nevertheless, we did not detect endogenous RhoA in the nucleus of TOV-112D or TOV-1946 cells, suggesting that the major signaling responses observed in our study are mainly due to RhoA translocation to the PM/ruffles. However, in transfected cells, we do on occasion see RhoA signal in the nucleus, although this is most probably an artifact associated with overexpression, which has been observed with other mCherry constructs 48.
Several studies have shown that RhoA protein ubiquitination is a post-translational modification that regulates its stability 25, 49, 50, 51. Our results showed that treatment of TOV-112D and TOV-1946 ovarian cancer cells with the proteasome inhibitor MG-132 does not increase RhoA protein levels compared with the DMSO condition. However, in Ran-depleted cells, MG-132 treatment fully restored RhoA to levels similar to those of the control condition (Fig. 1c; Supplementary Fig. 1d). Based on this observation, Ran appears to control RhoA protein stability; in its absence, the balance shifts in favor of RhoA degradation, independently of modifications to the ubiquitination machinery or regulation of the proteasome degradation system. Our results demonstrate that Ran-RhoA complex formation is mediated by the interaction between the DEDDDL domain of Ran and serine 188 of RhoA, which controls RhoA recruitment to the plasma membrane/ruffles in migrating cells. The fact that the Ran DEDDDL domain is essential for mediating Ran interaction with several proteins, including RanBP1 (Ran-binding protein 1) 39, 52, 53, 54, suggests that Ran and RhoA are in a transient/competitive interaction that can be specifically disrupted by adding an excess of one of the known interactors, similar to the model in which Mog1 competes with RCC1 for Ran binding 55, 56. It has been shown that phosphorylation of RhoA at serine 188 deactivates RhoA by increasing its interaction with RhoGDI and its translocation from the membrane to the cytosol 57. Our data indicate that RhoA phosphorylation at serine 188 is not required for the RhoA interaction with Ran and, consequently, its localization to the PM. However, it could be envisaged that, upon stimulation, RhoA is released from RhoGDI, leading to RhoA interaction with Ran, which would allow its stabilization and promote its localization to the PM/ruffles. Regulatory control of RhoA protein stability plays a critical role in RhoA-mediated cellular signaling and biological functions 49, 50. Our results unveil a direct interaction between the Ran C-terminal polyacid region and the RhoA C-terminal polybasic region. Moreover, RhoA serine 188 is required for this association. Our approach of manipulating the subcellular location of Ran has provided strong evidence revealing the spatial requirements for RhoA localization, stabilization, and activation. The data presented here are consistent with a model in which RhoA serine 188 overrides the activation state to control RhoA localization. It has been reported that Memo, an effector of the ErbB2 tyrosine kinase receptor, is necessary for RhoA localization and activation at the PM 13; by analogy, we propose that Ran acts as a scaffold to coordinate both the spatial and temporal engagement of RhoA with guanine-nucleotide exchange factors (GEFs), which is required for its GTPase activity. Following Ran depletion, it is conceivable that alterations in nuclear–cytoplasmic transport may cause abnormal ovarian cancer cell proliferation and migration/invasion. However, the expression of Ran-KillerRed in Ran-depleted cells appears to exclude this possibility. When endogenous Ran is absent, expression of exogenous Ran-KillerRed directs all of the protein to the plasma membrane, and under these conditions we note an effect on proliferation and migration only when Ran-KillerRed is activated (Fig. 5e).
In summary, this study provides a previously undescribed link between Ran and RhoA signaling that collectively contributes to enhanced ovarian cancer cell growth and invasiveness. In fact, the association of Ran with RhoA prevents its ubiquitin-mediated proteasomal degradation by promoting RhoA localization to the PM and its subsequent activation. The fact that ovarian cancer cell proliferation and invasion can be affected by disrupting the interaction between Ran and RhoA provides a rationale to develop advanced pharmacological compounds to prevent ovarian cancer progression. Thus, the Ran-RhoA signaling complex may be an effective molecular target for controlling cancer metastasis. Methods Cell culture, transfection, and plasmid constructs ARPE-19, a human retinal pigment epithelial cell line, was purchased from ATCC (#CRL2302). The TOV-112D and TOV-1946 ovarian cancer cell lines were derived, respectively, from a high-grade endometrioid tumor and a high-grade serous carcinoma, and were used to downregulate the expression of Ran and RhoA. Both cell lines are known to express high levels of Ran 20, 21. Cells (ARPE-19, TOV-112D, and TOV-1946) were grown in OSE complete medium (Wisent®) containing 10% fetal bovine serum (FBS; Wisent®), 250 µg/mL amphotericin B and 50 µg/mL gentamicin (Wisent®) at 37 °C and 5% CO2 20, 21. Cells were transfected by nucleofection (Amaxa-Lonza®) with 2 µg of siRNA of either CTRL (D-001810-02, Dharmacon®), Ran#1 (J-010353-06-0050, Dharmacon®), Ran#2 (CTM-278994, Dharmacon®; a custom-designed siRNA targeting the 3′UTR of Ran, siRNA sequence: GGGUGAAGCUGAAUAAAGUUCUACUUU), or RhoA (A-003860-18-0010, Dharmacon®). Transfections were also carried out using the following plasmids: 2xGFP-Ran WT, 2xGFP-Ran DA, and 2xGFP-Ran DN (gift from J. Joseph, National Center for Cell Science, India); GFP-RhoA WT and GFP-RhoA-CCKVL (gift from M. Philips, New York University School of Medicine, USA); GFP vector, RhoA-Flag (gift from M. Park, McGill University, Canada); EGFP-RhoC WT and mCherry-RhoA WT (gift from A. Badache, Aix-Marseille Université, Marseille, France); RanBP1-GFP and RanBP1-Flag (gift from P. Lavia, Istituto di biologia e patologia molecolari, Italy); KillerRed-mem (FP966, Evrogen®); pLYS1-FLAG-MitoGFP-HA (Addgene plasmid 50057); Myc-RhoA (WT, DA, DN), Myc-RhoC WT, Myc-RhoA ΔRRGKKKS, EGFP-RhoA ΔRRGKKKS, Myc-RhoA ΔS188, EGFP-RhoA ΔS188, Myc-RhoA S188E, EGFP-RhoA S188E, Myc-RhoA PI, Myc-RhoC-A, Myc-RhoC-S188, Myc-RhoC LV, EGFP-Ran ΔCT (Ran without DEDDDL motif), MitoGFP-Ran WT, MitoGFP-Ran ΔCT (Ran without DEDDDL motif), Ran WT KillerRed, and RhoA WT KillerRed were created by Bio Basic Canada, Inc. Random migration assays For cell migration, cells were grown on collagen-coated six-well plates (Costar®) for 48 h and were maintained within a chamber (Climabox, Carl Zeiss, Inc) with 5% (v/v) CO2 at 37 °C. The microscope was driven by AxioVision LE software (Carl Zeiss, Inc) set at a ×20 plan Apo 0.8 NA objective with an AxioCam MRm camera (Carl Zeiss, Inc). The motorized stage advanced to pre-programmed locations and photographs were collected for 24 h at 5-min intervals for time-lapse imaging. Motility parameters of living cells, including rates of migration and migration paths, were obtained from time-lapse movies. Means of velocity were calculated using MetaMorph® and Microsoft Excel® software 13. The movies represent the behavior of cells during a 24 h period starting at 48 h post transfection.
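The net-velocity calculation described above (performed in the study with MetaMorph and Excel) amounts to dividing the straight-line displacement of a tracked cell by the elapsed time. A hypothetical re-implementation, run here on a synthetic track sampled every 5 min rather than on the study's data, might look like this:

```python
import numpy as np

def net_velocity(track_xy_um, interval_min=5.0):
    """Net velocity of one cell: straight-line displacement between the first
    and last tracked positions divided by the total elapsed time (um/min)."""
    track = np.asarray(track_xy_um, dtype=float)
    displacement = np.linalg.norm(track[-1] - track[0])
    total_time = (len(track) - 1) * interval_min
    return displacement / total_time

def path_velocity(track_xy_um, interval_min=5.0):
    """Mean speed along the full migration path (um/min)."""
    track = np.asarray(track_xy_um, dtype=float)
    steps = np.linalg.norm(np.diff(track, axis=0), axis=1)
    return steps.sum() / ((len(track) - 1) * interval_min)

# Toy track: x,y positions (um) every 5 min over ~24 h for a drifting cell.
rng = np.random.default_rng(0)
track = np.cumsum(rng.normal([1.0, 0.2], 0.5, size=(289, 2)), axis=0)
print(f"net velocity  : {net_velocity(track):.2f} um/min")
print(f"path velocity : {path_velocity(track):.2f} um/min")
```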
Transwell-invasion assays The cell-invasion experiments were based on the results from our random migration assays (Fig. 5a). Since results showed a substantial decrease in the cell displacement/speed of TOV-112D cells 72 h post transfection with Ran siRNA and no significant differences in cell proliferation between control and Ran-depleted cells (Supplementary Fig. 1b and Movies 1, 2), we maintained the same conditions as the migration assays to measure cell invasion, in order to avoid any bias associated with cell death. Cells were plated on 8.0-µm porous polycarbonate Transwell membrane inserts (Costar®) that were coated on the bottom with 25 µg/mL rat tail collagen (Sigma®). The lower chamber contained medium with 10% FBS, while the upper chamber was serum free. Cells were plated 48 h after transfection and allowed to migrate through the pores for 24 h. After 1 day, cotton swabs were used to remove non-invading cells from the upper chamber. Migrating/invading cells were fixed with 100% methanol at room temperature, washed with phosphate buffered saline (PBS) and stained with a solution containing 0.5% methylene blue and 50% methanol. Cells were counted with the Count tool of Adobe Photoshop CC® by photographing the membrane inserts using the EVOS FLc Cell Imaging System from Invitrogen® (Thermo Fisher Scientific®) and a Plan Apo 1.25×/0.04 objective. Immunoprecipitation and western blot analysis Cells were harvested in 1% Triton lysis buffer (150 mM NaCl, 20 mM Tris HCl, 1 mM EDTA, 1 mM EGTA, 1% Triton X-100, 1% deoxycholate, at pH 7.4). All lysis buffers were supplemented with 1 mM phenylmethylsulfonyl fluoride (PMSF), 1 mM sodium vanadate, 1 mM sodium fluoride, 10 µg/ml aprotinin, and 10 µg/ml leupeptin. Samples were resolved by SDS-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to nitrocellulose. Membranes were blocked with 5% bovine serum albumin (BSA) and probed with the appropriate antibodies: anti-Ran (sc-271376) diluted 1:1000 in TBS-Tween buffer, anti-RhoA (sc-418) diluted 1:100 in TBS-Tween buffer, anti-RhoC (sc-393090) diluted 1:100 in TBS-Tween buffer, anti-GST (sc-138) diluted 1:100 in TBS-Tween buffer, and anti-Myc (sc-764) diluted 1:100 in TBS-Tween buffer, from Santa Cruz Biotechnology, Inc.; anti-GFP (11814460001) diluted 1:100 in TBS-Tween buffer, from Roche®; anti-Flag (F3165) diluted 1:100 in TBS-Tween buffer, from Sigma®; KillerRed (AB961), diluted 1:1000 in TBS-Tween buffer, from Evrogen®; and anti-actin (ab6276), diluted 1:10000 in TBS-Tween buffer, from Abcam®. This was followed by incubation with horseradish peroxidase (HRP)-conjugated secondary antibodies: anti-rabbit (sc-2077) diluted 1:10,000 in TBS-Tween buffer or anti-mouse (sc-2061) diluted 1:10,000 in TBS-Tween buffer from Santa Cruz Biotechnology, Inc, or anti-mouse (D3V2A) diluted 1:1000 in TBS-Tween buffer from Cell Signaling Technology®. All immunoblots were visualized with Amersham ECL from GE Healthcare®. For immunoprecipitations, lysates were incubated overnight with antibody at 4 °C with gentle rotation, followed by a 1-h incubation with protein A- or G-Sepharose beads. Captured proteins were collected by washing three times in lysis buffer, eluted by boiling in SDS sample buffer, and processed as above for western blotting. Far-western blotting TOV-112D cells were transiently transfected with the indicated constructs, immunoprecipitated with GFP antibody, separated by SDS-PAGE, and transferred to nitrocellulose membranes.
Membranes were incubated with GST-RhoA GDP or GTPγS (Cytoskeleton®) fusion proteins in lysis buffer (20 mM HEPES pH 7.5, 120 mM NaCl, 2 mM EDTA, 10% glycerol, 1 mM PMSF, 10 mg/mL aprotinin, and 10 mg/mL leupeptin), and bound GST-RhoA (GDP or GTPγS) fusion proteins were detected using an anti-GST antibody. As a negative control, membranes were incubated with free GST protein 58 (gift from M. Park, McGill University, Canada). Subcellular fractionation of the PM/lamellipodia For proteins localized in the lamellipodia, cells were plated on 3.0-µm porous polycarbonate Transwell membrane inserts (Costar®) that were coated on the bottom with 25 µg/mL rat tail collagen (11179179001 from Roche®). The lower chamber contained medium with 10% FBS, while the upper chamber was serum free. Cells were allowed to extend their lamellipodia through the pores. Cell bodies remaining on the upper surface were removed by scraping, and the lamellipodia extending to the lower surface were recovered in lysis buffer 13, 59. Immunofluorescence microscopy and quantification Cells grown on collagen-coated coverslips, treated with 20 µM of DMSO (SHBH6857, Sigma®) or 20 µM of MG-132 (C2211, Sigma®) for 2 h, were fixed in 4% paraformaldehyde at room temperature, permeabilized in 0.2% Triton X-100, and blocked with 5% BSA before the addition of primary antibodies. Primary antibodies used for immunofluorescence were against the following: anti-Ran (sc-271376) diluted 1:50 in TBS-Tween buffer, anti-RhoA (sc-179) diluted 1:50 in TBS-Tween buffer and anti-Myc (sc-764) diluted 1:50 in TBS-Tween buffer, from Santa Cruz Biotechnology, Inc. Secondary antibodies Alexa-Fluor 488 or 546, obtained from Molecular Probes (Thermo Fisher Scientific®), were diluted 1:500 in TBS-Tween buffer. The MitoTracker probe (M7514; Invitrogen®) was diluted 1:2000 in OSE complete medium (Wisent®) to label mitochondria. Cells were mounted with ProLong Diamond Antifade Mountant with DAPI (P36962) from Molecular Probes (Thermo Fisher Scientific®). Images were recorded with a scanning confocal microscope (ZEISS Axio Observer; Carl Zeiss, Inc.) with a ×100 plan Apo 1.4 NA objective, driven by ZEN LE software (Carl Zeiss, Inc.). The degree of colocalization, expressed as the Pearson's correlation coefficient (proportion of all red intensities that have green components among all red intensities), was assessed by the colocalization analysis function of Imaris software (Bitplane®). The results were logged into Microsoft Excel® for analysis. All values are means ± SEM from three independent experiments. Rho GTPase activity assay The Rho GTPase activation assay was performed using the G-LISA RhoA absorbance-based activation assay (Cytoskeleton®). Briefly, cells were grown on collagen-coated 96-well plates (Costar®), treated for 2 h with 20 µM of DMSO or 20 µM of MG-132, and incubated at 37 °C. At the end of the incubation period, all cells were washed twice with ice-cold PBS and re-suspended in 65 µl of G-LISA lysis buffer. Protein lysates were transferred to ice-cold 1.5-ml centrifuge tubes and clarified by centrifuging at 10,000 rpm for 2 min. Protein concentrations were determined using the Bradford Protein Assay (Bio-Rad®), and 1.0 mg/ml protein was used for the Rho GTPase activation assay as per the manufacturer's recommendations. A 1:50 dilution of the primary antibody and a 1:250 dilution of the HRP-conjugated secondary antibody were sufficient to produce a RhoA-specific signal.
After antibody and HRP reagent incubation, signals were detected on a Versamax microplate reader at 490 nm (Molecular Devices®). Data analysis was performed using Microsoft Excel®. Live cell imaging Cells were grown on collagen-coated coverslips (35 mm, Ibidi GmbH, Germany) for 48 h and positioned on a motorized stage equipped with a scanning confocal microscope (ZEISS Axio Observer; Carl Zeiss, Inc.) set at a ×100 plan Apo 1.4 NA objective and an Evolve 512 digital camera (Photometrics®), with a small transparent environmental chamber (Tokai Hit®) maintained with 5% (v/v) CO2 in air at 37 °C. The microscope was driven by ZEN LE software (Carl Zeiss, Inc.). Light inactivation For chromophore-assisted laser or light inactivation (CALI) experiments, TOV-112D cells co-expressing GFP-RhoA and Ran-KillerRed or GFP-RhoA-CCKVL and KillerRed empty vector were irradiated for 1 min with green light (×100 plan Apo 1.4 NA objective, 515–560 nm transmitted light at 18 W/cm²) to bleach KillerRed fluorescence. After bleaching, green fluorescence was recorded every second over a period of 5 min with a scanning confocal microscope (ZEISS Axio Observer; Carl Zeiss, Inc.) with a ×100 plan Apo 1.4 NA objective, driven by ZEN LE software (Carl Zeiss, Inc.). IncuCyte cell proliferation phase-contrast imaging assay For cell proliferation, 20,000 TOV-112D cells/well were seeded in 24-well plates. Cells were transfected with the following plasmids: KillerRed-mem empty, RhoA WT in KillerRed-mem, and Ran WT in KillerRed-mem, and incubated for 48 h. Plates were imaged by phase contrast using the IncuCyte™ Live Cell Imaging System (Essen BioScience®). Frames were captured at 2-h intervals for 7 days from two separate regions/well using a ×10 objective. Proliferation growth curves were constructed using IncuCyte™ Zoom software. Each experiment was performed in triplicate and repeated three times. The data represent TOV-112D cell proliferation from the fifth day post transfection. Cell quantification with the corresponding phenotype The scanning confocal microscope (ZEISS Axio Observer; Carl Zeiss, Inc) with a ×100 plan Apo 1.4 NA objective, driven by ZEN LE software (Carl Zeiss, Inc), was used to count cells with the corresponding phenotypes from three independent experiments ( n = 100 individual cells). Percentages were calculated using Microsoft Excel®. RT-PCR Total RNA from TOV-112D cells was isolated using the RNeasy Kit (Qiagen®). Total RNA concentration and purity were measured on a NanoDrop™ spectrophotometer. RNA was reverse-transcribed using the QuantiTect Reverse Transcription Kit (Qiagen®) according to the manufacturer's protocol. cDNA amplification was performed with SYBR Green PCR master mix (Applied Biosystems®) using the StepOnePlus Real-Time PCR system (Applied Biosystems®). Negative controls were included in all experiments, and actin served as the housekeeping gene. Primers were ordered from Integrated DNA Technologies, Inc: Ran, forward: GGTGGTACTGGAAAAACGACC, reverse: CCCAAGGTGGCTACATACTTCT; RhoA, forward: AGCCTGTGGAAAGACATGCTT, reverse: TCAAACACTGTGGGCACATAC; Cdc42, forward: CCATCGGAATATGTACCGACTG, reverse: CTCAGCGGTCGTAATCTGTCA; Rac1, forward: ATGTCCGTGCAAAGTGGTATC, reverse: CTCGGATCGCTTCGTCAAACA. Statistics All statistical analyses were performed using Microsoft Excel®. Graphed data represent the average values ± SEM from at least three independent experiments.
Two-tailed, paired Student’s t test was used to determine the statistical significance unless otherwise specified. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The authors declare that the data supporting the findings of this study are available within the paper and its Supplementary Information files. If needed, additional information is available from the corresponding author upon reasonable request.
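For readers who prefer scriptable analysis over Imaris and Excel, the two quantifications named in the Methods (the Pearson colocalization coefficient and the two-tailed paired Student's t test) reduce to a few lines of SciPy. The sketch below uses synthetic stand-in numbers, not data from the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# --- Colocalization: Pearson's correlation between channel intensities ---
# Synthetic stand-ins for the red (RhoA) and green (Ran) pixel intensities
# of one cell; the linear dependence mimics partial colocalization.
red = rng.uniform(0, 255, 10_000)
green = 0.6 * red + rng.normal(0, 30, red.size)
r, _ = stats.pearsonr(red, green)
print(f"Pearson colocalization coefficient: r = {r:.2f}")

# --- Two-tailed paired Student's t test across n = 3 experiments ---
ctrl = np.array([1.00, 1.00, 1.00])    # normalized control values (synthetic)
ran_kd = np.array([0.45, 0.52, 0.38])  # e.g. relative RhoA activity after Ran KD
t_stat, p_value = stats.ttest_rel(ctrl, ran_kd)  # paired, two-tailed by default
print(f"t = {t_stat:.2f}, P = {p_value:.3f}  (P < 0.05 -> significant)")
```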
Did you know that 90% of cancer patients die from distant metastasis? Metastasis occurs when cancer cells acquire the ability to move within the patient's body and invade healthy tissues. In a study published in Nature Communications, researchers from the University of Montreal Hospital Research Centre (CRCHUM) have shown the key role that a protein called Ran plays in the mobility of ovarian cancer cells. They demonstrated that these cells cannot migrate from cancerous sites without the help of Ran. Implicated in cancer development and survival, Ran is often referred to as a shuttle protein, mostly supporting transport between the cytoplasm and the nucleus of a cell. In ovarian cancer cells, the team of researchers, led by Dr. Anne-Marie Mes-Masson and Dr. Diane Provencher, showed Ran acts as a taxi to the cell membrane for another protein, RhoA, which is important in cell migration. "In normal cells, RhoA can make its way directly to the cell membrane but in ovarian cancer cells it cannot. It has to link to Ran first in order to reach the cell membrane. It really needs a ride," said Mes-Masson, a researcher at the CRCHUM, professor at Université de Montréal and member of the Institut du cancer de Montréal. "In our study, we showed that in cancer cells where we inhibit the action of Ran, RhoA gets broken down. Without RhoA, cancer cells then lose their ability to move, migrate and invade healthy tissues." Thanks to the vast expertise in biochemistry of the first author, Dr. Kossay Zaoui, the science team was able to explain at least in part why Ran is so important in a cancer cell. In many cancers, high expression of Ran is often associated with poor outcomes. "We have previously demonstrated that Ran is a good therapeutic target. Our study helps us understand when and in which cancer patients our approach might be most beneficial. As healthy cells do not need Ran to move around, we can target the cancer cells without touching the healthy cells. Based on our findings, it is probable that inhibiting Ran will also be a winning strategy in other cancers," said Dr. Provencher, a researcher at CRCHUM, Head of the Division of Gynecology Oncology, professor at Université de Montréal and member of the Institut du cancer de Montréal. Simultaneous protein labeling: Ran (pink) and tubulin (green) in ovarian cancer cells. The majority of Ran is found in the nucleus, but our study reveals that a portion of it is localized on the surface of cells, inducing movement and invasion in ovarian cancer via its partner RhoA. Credit: Euridice Carmona, CRCHUM The researchers have already begun to develop small molecules that can inhibit Ran and are testing them in the preclinical models they have developed to show that they can slow or eliminate cancer development. They hope one day that these new drugs will make their way into the clinic to be used to treat ovarian cancer patients. The Importance of Our Biobank For three decades, Drs. Provencher and Mes-Masson have collaborated to create the largest biobank of ovarian cancer specimens from women who have consented to participate in their research program. They managed to develop and characterize cell lines from tumour tissues, and these cell lines were essential to conduct this work. These cell lines are now used by ovarian cancer research groups worldwide. The patients' precious contributions to research are fuelling the type of new discoveries that both researchers hope will help cure this deadly disease.
According to the Canadian Cancer Society, 2,800 Canadian women were diagnosed with ovarian cancer in 2017 and 1,800 died from the disease. It is the fifth leading cause of cancer death among women in North America.
10.1038/s41467-019-10570-w
Medicine
Achilles heel of anaplastic large-cell lymphoma (ALCL) cells identified
Nicole Prutsch et al. Dependency on the TYK2/STAT1/MCL1 axis in anaplastic large cell lymphoma, Leukemia (2018). DOI: 10.1038/s41375-018-0239-1 Journal information: Leukemia
http://dx.doi.org/10.1038/s41375-018-0239-1
https://medicalxpress.com/news/2018-09-achilles-heel-anaplastic-large-cell-lymphoma.html
Abstract TYK2 is a member of the JAK family of tyrosine kinases that is involved in chromosomal translocation-induced fusion proteins found in anaplastic large cell lymphomas (ALCL) that lack rearrangements activating the anaplastic lymphoma kinase (ALK). Here we demonstrate that TYK2 is highly expressed in all cases of human ALCL, and that in a mouse model of NPM-ALK-induced lymphoma, genetic disruption of Tyk2 delays the onset of tumors and prolongs survival of the mice. Lymphomas in this model lacking Tyk2 have reduced STAT1 and STAT3 phosphorylation and reduced expression of Mcl1, a pro-survival member of the BCL2 family. These findings in mice are mirrored in human ALCL cell lines, in which TYK2 is activated by autocrine production of IL-10 and IL-22 and by interaction with specific receptors expressed by the cells. Activated TYK2 leads to STAT1 and STAT3 phosphorylation, activated expression of MCL1 and aberrant ALCL cell survival. Moreover, TYK2 inhibitors are able to induce apoptosis in ALCL cells, regardless of the presence or absence of an ALK fusion. Thus, TYK2 is a dependency that is required for ALCL cell survival through activation of MCL1 expression. TYK2 represents an attractive drug target due to its essential enzymatic domain, and TYK2-specific inhibitors show promise as novel targeted inhibitors for ALCL. Introduction TYK2 was the first Janus kinase described, and it was shown to collaborate with JAK1 to facilitate interferon-α/β (IFN) responsiveness [1, 2]. Recently, activation of TYK2 has been noted in a number of malignancies, including T-cell acute lymphoblastic leukemia (T-ALL), anaplastic large cell lymphoma (ALCL) and nerve sheath tumors [3, 4, 5, 6]. In T-ALL cell lines, activating somatic mutations have been detected in the TYK2 FERM domain (G36D, S47N) and in the kinase domain (E957D, R1072H) [3]. Unmutated TYK2 also represented a dependency in T-ALL cell lines and patient samples [3]. Moreover, germline TYK2 mutations potentially causing ALL have been described [7]. Recently, somatic TYK2 fusion proteins have also been detected in ALL [8], AML [9], and cutaneous [5] and systemic ALCLs that lack anaplastic lymphoma kinase (ALK) fusion genes [6]. Despite the involvement of TYK2 in fusion proteins and the presence of activating mutations in some cancers, with the exception of T-ALL [3, 10], little is known regarding TYK2's oncogenic functions and downstream effectors. To elucidate the role of TYK2 in tumorigenesis, we focused on ALCL as a well-defined lymphoma subtype [11]. ALCL is a CD30-positive, aggressive non-Hodgkin T-cell lymphoma with early onset that is characterized in approximately half of all patients (ALCL, ALK+) by fusion of the catalytic domain of ALK with the N-terminus of nucleophosmin 1 (NPM1) due to a t(2;5) chromosomal translocation [11]. Despite initial classification as a T-cell lymphoma arising in mature memory T cells, several recent publications point toward a transformation of early thymic progenitor cells in ALCL [12, 13]. ALCL, ALK+ patients can be treated effectively with polychemotherapy (e.g., CHOP) or ALK inhibitors. However, 25–30% of patients still relapse, leading to very aggressive disease [14, 15]. An additional targeted agent is provided by the recently introduced armed CD30 antibody brentuximab vedotin, which shows good responses but is often associated with polyneuropathy as a severe side effect [16].
ALCL patients without ALK translocations cannot be treated with ALK inhibitors and have a worse prognosis compared to ALCL, ALK+ patients, creating an urgent need for new and refined molecularly targeted therapeutic options for ALCL [15, 17, 18]. The WHO has classified ALCL, ALK− as a distinct disease with sub-entities defined by chromosomal rearrangements that disrupt the DUSP22 and TP63 tumor suppressors [18]. Several transplant as well as transgenic mouse models for ALCL, ALK+ have been created, with the CD4-NPM-ALK transgenic mouse being the best established [19, 20, 21]. Similar to ALK, TYK2 is a tyrosine kinase that can be readily inhibited by small molecules and therefore represents an attractive therapeutic target in ALCL. We show here that the TYK2 tyrosine kinase is expressed in human ALCLs irrespective of ALK status and is essential for tumor cell viability. Genetic studies in a transgenic NPM-ALK-driven lymphoma model also demonstrate that T cell-specific loss of Tyk2 delays the onset of tumors and prolongs the survival of mice. We furthermore show that TYK2 is activated by an autocrine loop involving IL-10 and IL-22 and that STAT1 and STAT3 are essential mediators of aberrant tumor cell survival through activation of the pro-survival protein MCL1. Our data underscore the potential therapeutic importance in ALCL of TYK2 inhibitors, which are currently in late preclinical stages of development. Materials and methods Cell culture ALCL cell lines were obtained from DSMZ, Braunschweig, Germany. For cytokine complementation experiments, recombinant human interleukin-10 (10 ng/ml, rhIL-10, Immunotools, Friesoythe, Germany) or rhIL-22 (20 ng/ml, Immunotools) was used. For detection of downstream targets, ALCL cells were incubated with TYK2 inhibitors or pan-JAK inhibitors (including 1 µM JAK inhibitor I, Calbiochem, San Diego, CA, USA) for 3 or 6 h and then incubated with IFN-α for 10 min before immunoblot analysis. Descriptions of quantitative RT-PCR, flow cytometry, cytokine arrays and immunohistochemistry, shRNA sources, CRISPR/Cas9 genome editing and murine lymphoma models can be found in the Supplementary Methods. Cloning of the mutant TYK2 construct and rescue experiment Retroviral constructs encoding the mutant TYK2_E957D cDNA as well as the WT TYK2 cDNA were obtained from Dr. Takaomi Sanda from CSI, Singapore. Production of retrovirus expressing TYK2_E957D and TYK2_WT was performed as previously described [3]. Cell growth and viability assays For cell counting in shRNA knockdown or CRISPR knockout experiments, cells were seeded into 12-well plates in triplicate on day 1 and counted on days 2, 3, 4, and 5. For drug treatment, cells were incubated with TYK2 inhibitors or pan-JAK inhibitors (Table S6) for 72 h, and cell proliferation was quantified using the XTT Cell Proliferation Assay Kit (ATCC, Manassas, Virginia, USA) according to the manufacturer's instructions. All values were normalized to the untreated control. Immunoblotting Cells were lysed in RIPA buffer containing phosphatase and protease inhibitors. Equivalent amounts of protein were diluted in sample buffer and separated by 10% SDS-PAGE. Proteins were transferred to nitrocellulose membranes (Millipore), subjected to immunoblot analysis and stained with antibodies as listed in Table S4. Western blot quantification was conducted using ImageJ version 2007.
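The Results below report Kaplan–Meier survival curves compared by a log-rank test. A minimal sketch of such an analysis using the lifelines package follows; the survival times and event flags are hypothetical placeholders, not the study's data.

import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical survival times (weeks) and event indicators
# (1 = death observed, 0 = censored); not the study's data.
weeks_control = np.array([14, 15, 16, 16, 17, 18, 20])
event_control = np.array([1, 1, 1, 1, 1, 1, 1])
weeks_tyk2ko = np.array([40, 48, 53, 55, 60, 70, 75])
event_tyk2ko = np.array([1, 1, 1, 1, 0, 1, 0])

# Fit a Kaplan-Meier estimator per genotype and report median survival.
kmf = KaplanMeierFitter()
kmf.fit(weeks_control, event_observed=event_control, label="CD4-NPM-ALK")
print("median survival (control):", kmf.median_survival_time_)
kmf.fit(weeks_tyk2ko, event_observed=event_tyk2ko, label="Tyk2 knockout")
print("median survival (Tyk2 ko):", kmf.median_survival_time_)

# Log-rank comparison of the two survival curves.
result = logrank_test(weeks_control, weeks_tyk2ko,
                      event_observed_A=event_control,
                      event_observed_B=event_tyk2ko)
print("log-rank p-value:", result.p_value)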
Results TYK2 ablation in CD4-NPM-ALK transgenic mice reduces the growth rate of lymphoma and significantly increases survival Chromosomal translocations produce fusion proteins that activate the TYK2 tyrosine kinase in human ALCLs that lack activation of ALK [3, 5, 6, 8, 9]. This led us to test the hypothesis that TYK2 plays a unique role in ALCL and functions as a dependency even in cases harboring NPM-ALK fusion genes. Thus, floxed Tyk2 mice were crossed to mice bearing Cre recombinase under the Lck promoter, resulting in mice with T-cell-specific Tyk2 deletion [22]. These mice were then crossed to an ALCL mouse model that expresses human NPM-ALK from the CD4 promoter [21]. Deletion of Tyk2 was confirmed by PCR of the Tyk2 gene locus, western blot of tumor tissue and real-time RT-PCR (Fig. 1a–c). NPM-ALK and pNPM-ALK expression levels were not affected (Fig. 1b). CD4-NPM-ALK mice with intact Tyk2 developed aggressive T-cell lymphomas from about 12 weeks post partum. Log-rank analysis of Kaplan–Meier survival curves indicated significantly longer survival of CD4-NPM-ALK LCKΔΔTyk2 mice as compared to CD4-NPM-ALK mice (median survival of 53.3 weeks for CD4-NPM-ALK LCKΔΔTyk2 mice versus 16.0 weeks for control CD4-NPM-ALK mice, P < 0.0001, Fig. 1d). To test the ability of tumor cells to grow in an in vitro setting, lymphoma cells from CD4-NPM-ALK and CD4-NPM-ALK LCKΔΔTyk2 mice were taken into culture. CD4-NPM-ALK LCKΔΔTyk2 lymphomas failed to grow in vitro, in contrast to tumor cells isolated from CD4-NPM-ALK mice (Fig. 1e). Tumors arising in CD4-NPM-ALK LCKΔΔTyk2 mice showed increased numbers of apoptotic cells (P = 0.034, Suppl. Figure 1A, B), but cell proliferation as assessed by Ki67 staining was not affected (Suppl. Figure 1A, B), suggesting a primary role for Tyk2 in promoting the survival of lymphoma cells. To assess signaling downstream of Tyk2, we examined Stat1 and Stat3 expression at the mRNA and protein levels and found significant reductions of Stat3 and pYStat3 in CD4-NPM-ALK LCKΔΔTyk2 mice compared to CD4-NPM-ALK mice. Stat1 and pYStat1 levels were also decreased in Tyk2 knockout lymphomas at both the mRNA and protein levels (Fig. 1f, Suppl. Figure 1C, D). To elucidate the mechanism of cell survival mediated by Tyk2, expression levels of Bcl2 family proteins were assessed, in particular Mcl1, which is pivotal for ALCL cell survival [23]. Interestingly, in lymphomas from CD4-NPM-ALK LCKΔΔTyk2 mice, Mcl1 expression was decreased at both the mRNA and protein levels as compared to CD4-NPM-ALK mice expressing Tyk2 (P < 0.0001; Fig. 1g). Loss of Tyk2 did not affect Bcl2 expression levels in these lymphoma cells. Fig. 1 Conditional Tyk2 knockout prolongs survival of CD4-NPM-ALK LCKΔΔTyk2 transgenic mice. a Tyk2 alleles assessed by PCR in DNA isolated from murine CD4-NPM-ALK or CD4-NPM-ALK LCKΔΔTyk2 lymphomas. b Western blot analysis of Tyk2, NPM-ALK, p-NPM-ALK, and β-Actin/β-Tubulin expression in mouse lymphomas. c Tyk2 mRNA expression levels in CD4-NPM-ALK or CD4-NPM-ALK LCKΔΔTyk2 lymphomas. Data are mean values ± s.d. of five mice. d Kaplan–Meier survival analysis of CD4-NPM-ALK and CD4-NPM-ALK LCKΔΔTyk2 mice. e In vitro growth rates of CD4-NPM-ALK or CD4-NPM-ALK LCKΔΔTyk2 lymphoma cells, showing Tyk2 dependency. Two cell lines per genotype are shown. f IHC of lymphoma tissues showing pYStat3 and pYStat1 expression in CD4-NPM-ALK mice and the lack of expression in CD4-NPM-ALK LCKΔΔTyk2 mice.
g mRNA expression of Mcl1 and Bcl2 in CD4-NPM-ALK lymphomas and the lack of Mcl1 expression in CD4-NPM-ALK LCKΔΔTyk2 lymphomas. Data are mean values ± s.d. of five mice. Western blot shows Mcl1 expression in murine CD4-NPM-ALK lymphomas and lack of Mcl1 expression in CD4-NPM-ALK LCKΔΔTyk2 lymphomas. Compare with Tyk2 expression depicted in b Full size image Inhibition of TYK2 by gene knockdown induces death of human ALCL cells To determine whether human ALCL cells depend on TYK2 for survival, we depleted TYK2 using both CRISPR-Cas9 and shRNA techniques. The TYK2 protein comprises four functional domains: the FERM (F for 4.1 protein, Ezrin, Radixin and Moesin) domain, the SH2 (Src Homology 2) domain, and the pseudokinase and kinase domains (Suppl. Figure 2A). Employing CRISPR/Cas9 technology, we generated disruptions in the coding sequence of the FERM domain (TYK2-CRISPR1) or the kinase domain (TYK2-CRISPR2) of the TYK2 gene. After TYK2-CRISPR1 disruption, single clones lacking TYK2 expression were validated by immunoblot analysis. Remarkably, loss of TYK2 through a STOP codon in the FERM domain (TYK2-CRISPR1) resulted in severe growth retardation, indicative of TYK2 dependency, in two different ALCL cell lines representing NPM-ALK-positive (SR786) and ALK-negative (Mac1, ALK−) lines (Fig. 2a). When Mac1, ALK− cells were injected subcutaneously into NSG mice, TYK2-positive cell clones developed tumors within 2 weeks while TYK2-negative cell clones did not (Suppl. Figure 2B). For the TYK2-CRISPR2 (kinase domain) knockout we employed a different strategy, in which the ALCL cell lines stably expressed Cas9 and successful TYK2-CRISPR2 transduction was monitored over three to five weeks via a reporter vector expressing GFP-tagged CRISPR2, compared to the same reporter vector containing a GFP-tagged non-targeting guide RNA (NTC) as a control. In all five cell lines tested, expression of GFP was lost over time in TYK2-CRISPR2-transduced but not in NTC-control-transduced cells, indicating reduced growth due to inactivation of TYK2 in the TYK2-CRISPR2-containing cells (Suppl. Figure 2C). Cell dependency on TYK2 was conclusively shown by rescue of cell survival after co-transduction of a mutated form of TYK2 (E957D, Suppl. Figure 2D) that was not recognized by CRISPR2. In a complementary approach, shRNAs targeting the kinase domain of TYK2 mediated downregulation of TYK2, as confirmed by western blot, and each led to growth reduction and apoptosis induction in both ALCL cell lines tested (Fig. 2b, Suppl. Figure 2E, F). Survival of the cells was rescued by co-expression of the hyperactive form of TYK2 (E957D) that lacked sequences recognized by shRNA TYK2#3 (Suppl. Figure 2G). Fig. 2 ALCL cells depend on TYK2 for survival. a CRISPR knockout of TYK2 using sgRNA targeting of the FERM domain (TYK2_CRISPR1) decreases cell viability in human ALCL cell lines. Means and standard errors of three experiments are shown (P < 0.0001). Western blot analysis of TYK2, JAK1, and β-ACTIN in the indicated clones verifies specific knockout. b Knockdown of TYK2 by lentivirus-transduced shRNAs decreases cell viability in ALCL cell lines. Data show the means and standard errors of three experiments. Expression of TYK2, JAK1, and β-ACTIN was assessed by western blot. c Expression of STAT1, pYSTAT1, STAT3, pYSTAT3, and β-ACTIN (control) in SR786 and Mac1 human ALCL cell lines 14 days after sgRNA targeting of TYK2 or GFP (control). Compare with TYK2 expression depicted in a.
Whole-cell extracts of ALCL cell lines were collected 8 days after shRNA transduction and subjected to immunoblot analysis with the indicated antibodies. Compare with TYK2 expression depicted in the right panel of b. The right panel of c shows western blot analysis of MCL1 and β-ACTIN in the SR786 cell line 7 days after sgRNA targeting of TYK2 or GFP (control). Compare with TYK2 expression depicted in a Full size image Janus kinases such as TYK2 phosphorylate STAT proteins on a critical tyrosine residue, and the STATs in turn mediate signal transduction and efficient transcription in the nucleus [24]. Hence, to assess the effect of TYK2 depletion on STAT expression levels and activity, immunoblot analysis was performed on ALCL cell lines with TYK2 depleted by either CRISPR/Cas9 or TYK2-specific shRNA (Fig. 2c). In each of the four human ALCL cell lines, a reduction in the levels of phosphorylated STAT1 and STAT3 was observed (Fig. 2c), consistent with the role of heterodimeric JAK proteins containing TYK2 in the phosphorylation of STAT1 and STAT3 in ALCL. To follow up on the effects of TYK2 on MCL1 expression levels in transgenic mice, we assessed MCL1 expression levels in SR786 ALK+ ALCL cells after TYK2 depletion. As in the mouse model, we found that expression levels of MCL1 were profoundly downregulated by TYK2 depletion, suggesting that TYK2's ability to promote lymphoma cell survival may be mediated through MCL1 (Fig. 2c). Small molecule inhibition of TYK2 induces cell death of ALCL cells Because ALCL cells are dependent on TYK2 for cell survival, we tested the activity of the recently published small molecule TYK2 inhibitors TYK2#1 [25] and Bayer-18 (Symansis). Initially, we established the 72-h IC50 values of the TYK2 inhibitors for four different ALCL cell lines. We found that TYK2#1 and Bayer-18 had IC50 values ranging from 0.5–1 µM to 2–3 µM for the different cell lines (Fig. 3a). We then assayed each of the inhibitors at their mean IC50 concentrations in ALCL cell lines against freshly isolated PBMCs. Treatment with TYK2#1 for 72 h reduced cell viability by a mean of 73.4 ± 2.0% in the ALCL, ALK− cell lines Mac1, Mac2a, and FE-PD and 64.5 ± 2.7% in the ALCL, ALK+ cell lines Karpas-299, SR786, and SUDHL-1, whereas PBMCs from healthy donors were only slightly affected or not affected at all (Fig. 3b, Suppl. Figure 3A). Bayer-18 also had minimal activity against PBMCs and had activity comparable to TYK2#1 in ALK− ALCL but was less active in ALK+ ALCL cell lines (ALK− 70.1 ± 2.1% versus ALK+ 20.6 ± 5.6% reduction in viability; Fig. 3b, Suppl. Figure 3A). This discrepancy between TYK2#1 and Bayer-18 may be explained by the only partial inhibition of pSTAT1 and pSTAT3 by Bayer-18 in the ALCL, ALK+ cell line (Fig. 3b). By contrast, the pan-JAK inhibitors Tofacitinib and Ruxolitinib were less active in ALCL (Tofacitinib 22.0 ± 2.4% and Ruxolitinib 28.8 ± 4.1% viability reduction, Fig. 3c). Neither TYK2#1 nor Bayer-18 had any effect on NPM-ALK or pNPM-ALK expression levels (Suppl. Figure 3B). Fig. 3 TYK2 and pan-JAK inhibitors reduce viability and pYSTAT1/3 expression in human ALCL cells. a The indicated human ALCL cell lines were cultured with graded concentrations of TYK2 inhibitors (TYK2#1 or Bayer-18) for 3 days. Cell viability values are means ± SEM given as a percentage of the untreated control. Values represent the mean of three experiments.
b The indicated cell lines were treated with the TYK2 inhibitors TYK2#1 (1 μM) or Bayer-18 (2.7 μM), or c the pan-JAK inhibitors Ruxolitinib (3 μM) or Tofacitinib (3 μM), for 72 h, and cell proliferation was assessed by an XTT assay. Western blot analysis with the indicated antibodies after 48 h of inhibitor treatment Full size image We assayed apoptosis levels by Annexin V staining after 48 h of exposure to the drugs and documented apoptosis induction in the ALCL cell lines Mac1, K299, and SR786 (TYK2#1 66.1 ± 5.9% and Bayer-18 50.8 ± 10.0% Annexin V staining; Suppl. Figure 3B). We assayed the downstream consequences of TYK2 inhibition by assessing the levels of TYK2, STAT1, STAT3, pYSTAT1, and pYSTAT3 using western blotting. Surprisingly, total TYK2 levels were reduced 48 h after treatment with either inhibitor. Drug-induced decreased expression of total TYK2 may be due to reduced protein stability in the absence of auto-phosphorylation, as previously described [26] (Fig. 3b). Unfortunately, as in T-ALL, endogenous phospho-TYK2 was not detectable in ALCL cells when using currently available reagents, which have limited sensitivity [3]. We found downregulation of pYSTAT1 and, to a lesser extent, of pYSTAT3 after inhibiting TYK2, whereas total protein levels of STAT1 and STAT3 were not affected. STAT1 and STAT3 are TYK2 targets promoting tumor growth in ALCL Because depletion or small-molecule-mediated inhibition of TYK2 strongly affected STAT1 signaling, we studied the effects of depletion of STAT1 on cell survival. Transduction with vectors encoding shRNAs targeting STAT1 showed a marked reduction of STAT1 by western blotting and profoundly reduced the cell growth rate (Fig. 4a). We then transduced five ALCL cell lines with a GFP-tagged CRISPR/Cas9 STAT1 deletion construct and showed that GFP-positive cells were depleted over time, but not the non-targeting control (NTC)-transduced cells, indicating that STAT1 was essential for ALCL cell survival (Suppl. Figure 4A). Indeed, shRNA-mediated depletion of STAT1 had a greater effect on cell death, as determined by positive staining for Annexin V, than depletion of STAT3 (Fig. 4b and Suppl. Figure 4B). To compare the TYK2-specific inhibitors (Bayer-18 or TYK2#1) with the pan-JAK inhibitors (JAK inhibitor I or Ruxolitinib), we treated ALCL cells for 3 h and then stimulated the cells for 10 min with interferon-alpha (IFN-α), which is known to induce TYK2 and JAK1 activity [1, 27]. Treatment with the pan-JAK inhibitor JAK inhibitor I led to abrogation of both pYSTAT1 and pYSTAT3 activation, but the TYK2-specific inhibitor inhibited only pYSTAT1 and not pYSTAT3 (Suppl. Figure 4C). To clarify the effect of TYK2 loss on the downstream effector STAT1, we performed a rescue experiment by expressing wild-type STAT1 in the K299_TYK2ko cell line. Strikingly, expressing wild-type STAT1 could partially restore the viability of the K299_TYK2ko cell line, whereas the STAT1 Y701F plasmid, which is incapable of being activated through phosphorylation, did not show any effect (Suppl. Figure 4D). Suppl. Figure 4E shows that TYK2 knockout results in decreased STAT1 and pYSTAT1 compared to the CRISPR control cell line, whereas cells rescued with wild-type STAT1 exhibit restored levels of STAT1 and pYSTAT1. Fig. 4 Depletion of STAT1 or STAT3 leads to reduced growth of human ALCL cells. a Knockdown of STAT1 by lentivirus-transduced shRNAs decreases cell viability in ALCL cell lines. Means ± SEM of three experiments are shown.
Cells with and without STAT1 knockdown were subjected to immunoblot analysis using antibodies for STAT3, pYSTAT3, STAT1, pYSTAT1, and β-ACTIN controls. b Knockdown of STAT3 by lentivirus-transduced shRNAs decreases cell viability in ALCL cell lines. Means ± SEM of three experiments are shown. Whole-cell extracts of cells with shRNA-mediated STAT3 knockdown were subjected to immunoblot analysis using antibodies for STAT3, pYSTAT3, STAT1, pYSTAT1, and β-ACTIN controls Full size image IL-10 and IL-22 are mediators of TYK2 activity and both are critical for ALCL cell survival JAK-STAT signaling is a key mediator of cytokine production, leading to the release of autocrine or paracrine factors that influence differentiation, immune modulation, and survival. Thus, in ALCL, signaling mediated through TYK2 may be responsible for the release of autocrine factors that stimulate cell growth. Indeed, TYK2_CRISPR1 knockout cells lacked the ability to grow at limiting dilution (Fig. 5a, Suppl. Figure 5A). These data suggest that the absence of, or a reduction in, essential autocrine survival factors produced by the TYK2 knockout cells impairs ALCL cell survival. In order to identify these factors, we performed a comprehensive screen covering 24 selected cytokines that have been previously described in the context of ALCL and lymphoma [3, 28, 29] (Fig. 5b). The most abundant cytokines detected in our screen were IL-22 and IL-10, which are also expressed in primary patient tumors as determined by analysis of published RNA-seq data (Fig. 5b, Suppl. Figure 5A) [6]. These two factors are especially intriguing because TYK2 is intimately involved in signal transduction downstream of the IL-22 and IL-10 receptors, acting as a heterodimeric complex with JAK1 [28]. The active role of TYK2 in driving the expression of IL-22 and IL-10 was further confirmed, as production of these cytokines was reduced in cells lacking expression of TYK2 (45%, P = 0.0113 and 63%, P = 0.0195, respectively), an effect that could be reversed in cells transduced to express the hyperactive E957D mutant of TYK2 (Fig. 5c). Interestingly, both cytokines share a common chain, IL-10RB, in their heterodimeric receptors: IL-10RA and IL-10RB chains for IL-10 versus IL-22R and IL-10RB for IL-22 [30]. Using shRNA constructs, expression of IL-10RA or IL-10RB was knocked down in ALCL cell lines, showing a reduction in cell growth in both cases (Fig. 5d, Suppl. Figure 5B, C). Taken together, these data point toward an autocrine mechanism mediated by TYK2 in which ALCL cells produce IL-10 and IL-22, which bind to their receptors on the same cells, activating an autocrine TYK2-mediated signal transduction pathway that results in pYSTAT1, with efficient translocation and gene regulation in the nucleus, where it is essential for survival of the cells and autonomous cell growth. Fig. 5 TYK2 activity induces expression of IL-10 and IL-22 in human ALCL. a Limiting dilution of the ALK-positive cell line Karpas-299 or the ALK-negative cell line Mac1, with and without CRISPR-Cas9 TYK2 knockout, in 96-well plates containing RPMI1640 and 10% FCS. TYK2 knockout (TYK2_ko) cells require greater plating cell numbers for cell growth, assessed after 2 weeks of incubation. b Heat map panel of cytokines detected in the supernatant of the ALCL cell lines Karpas-299 and Mac1 with and without TYK2. Cytokine levels were analyzed by the ProCarta Multiplex assay.
c IL-10 and IL-22 protein expression in the supernatants of the ALK-positive cell line Karpas-299 with and without TYK2 knockout, compared to cells expressing TYK2-E957D and to PBMCs. Supernatants were collected from cell cultures after 48 h and analyzed by the ProCarta Multiplex assay as above. d Knockdown of IL-10RA by lentivirus-transduced shRNAs decreases cell viability in ALCL cell lines. Cells were subjected to immunoblot analyses and stained with antibodies against IL10RA and β-ACTIN Full size image TYK2 is expressed in ALCL regardless of ALK status TYK2 mRNA levels were analyzed using RNA isolated from formalin-fixed, paraffin-embedded (FFPE) ALCL patient samples (7 ALCL, ALK+; 4 ALCL, ALK−; and 8 reactive lymph nodes (LN)), showing an upregulation across all ALCL samples (ALK−: 7.1 ± 2.5; ALK+: 7.7 ± 3.4; reactive LN: 1.0 ± 0.4) as compared to lymph nodes from healthy donors (P = 0.0289) (Fig. 6a). These data are in line with those observed from re-evaluated, published RNA sequencing data of 23 ALCL patients (18 ALCL, ALK−; 5 ALCL, ALK+) [6]. From these data, TYK2 expression by primary ALCLs was independent of the presence or absence of ALK fusions (n.s., Fig. 6a). The highest TYK2 expression level was found in a tumor with a previously reported NFkB-TYK2 fusion protein [6]. By contrast, a PABPC4-TYK2-positive ALCL expressed TYK2 at levels similar to tumors without TYK2 fusions (Fig. 6b). Furthermore, we measured TYK2 expression in six ALCL cell lines (ALK+: K299, SR786, SUDHL-1, SUP-M2; and ALK−: Mac1, Mac2a), the cutaneous T-cell lymphoma cell line MyLa [31] bearing the NPM1-TYK2 fusion [5], and PBMCs. Real-time RT-PCR data revealed a 3.3-fold higher (mean 2.6 ± 0.68 SD) TYK2 expression in ALCL cell lines as compared to PBMCs (Fig. 6c). These results were recapitulated by western blot analysis, whereby TYK2 expression was 3.0- to 7.7-fold higher in ALCL cell lines as compared to PBMCs (Fig. 6b). Immunohistochemical (IHC) staining for TYK2 in ALCL patient tissue was not possible due to the lack of specificity of commercially available antibodies for formalin-fixed tissue (Suppl. Figure 6A). However, we were able to detect enhanced TYK2 mRNA expression in FFPE sections of ALCL patients using RNA in situ hybridization (ISH) (Suppl. Figure 6B). Moreover, analysis of published RNA-seq data showed MCL1 to be expressed with a higher level of normalized counts compared to other pro-survival BCL2 family members, such as BCL2 or BCL2L1 (Fig. 6c). Hence, our data show that ALCL is a tumor that is dependent on TYK2 for cell survival, that TYK2 is activated through an autocrine loop involving IL-10 and IL-22, and that TYK2 promotes cell survival at least in part through activating the expression of the BCL2 family protein MCL1. Fig. 6 TYK2 expression is upregulated in ALCL. a TYK2 transcript levels were assessed by gene-specific RT-PCR using RNA isolated from FFPE ALCL tumor samples. Published RNA-seq data of 23 ALCL patients were analyzed for TYK2 expression, including two cases with NFkB-TYK2 or PABPC4-TYK2 fusions. b TYK2 expression was assessed by RT-PCR in the indicated cell lines using primers designed to recognize endogenous TYK2 only. Data are representative of the means and standard deviations of three experiments. TYK2 protein expression was analyzed in ALCL cell lines by western blot and quantified by densitometry, as shown by the numbers under the blot.
The 81 kDa NPM1-TYK2 fusion observed in the MyLa cell line is smaller than the endogenous 134 kDa TYK2 and serves as a positive control for the TYK2 antibody. c Published RNA-seq data of 23 ALCL patients were re-evaluated for BCL2, MCL1, BAX, and BCL2L1 transcript levels, and show high levels of MCL1 expression Full size image Discussion We show here that TYK2 is expressed at high levels in human ALCL cell lines and primary ALCL patient samples. T-cell-specific loss of TYK2 in a transgenic mouse model of NPM-ALK-driven lymphoma resulted in delayed tumor growth and significantly prolonged overall survival of the mice. shRNA-mediated TYK2 depletion as well as CRISPR/Cas9-mediated TYK2 disruption led to rapid induction of cell death in ALCL cells. Loss of TYK2 reduced pYSTAT1, pYSTAT3, and MCL1 expression. In keeping with these in vivo results in an experimental model, we found that STAT1 knockdown completely phenocopied TYK2 knockdown, whereas STAT3 knockdown only partially mirrored loss of TYK2. These results implicate a novel TYK2-STAT1 axis that is essential for tumor cell survival in ALCL. The TYK2–pYSTAT1 pathway positively regulates MCL1 expression in ALCL cells, contributing to this aberrant tumor cell survival. Knockdown of IL-10RA in ALCL cell lines resulted in growth arrest, implicating aberrantly expressed IL-10 and IL-22 in autocrine loops that provide a mechanism for aberrant TYK2 activation in tumor cells. Our results establish TYK2 as a key dependency in ALCL pathogenesis, one that is potentially druggable once clinical-grade TYK2 inhibitors become available. The role of TYK2 in lymphoma development and progression is not yet fully understood, although the presence of activating TYK2 fusion proteins in a small subset of ALCL patients indicates its importance [3, 4, 5, 6, 7]. Interestingly, re-evaluation of published RNA-seq data [6] revealed TYK2 expression in ALCL patients without TYK2 fusions at a similar level to patients bearing TYK2 fusions, and at 7–8-fold higher levels than in lymph nodes from healthy donors (Fig. 6a). The mechanism responsible for enhanced expression of TYK2 in ALCLs lacking TYK2 gene fusions remains to be elucidated. In this study, treatment of ALCL cell lines with TYK2 inhibitors, combined with shRNA and CRISPR-Cas9 gene disruption experiments, supported STAT1 as an important downstream mediator of activated TYK2 in ALCL. This is surprising, since STAT3 has widely been described as a tumor driver in ALCL [6, 32] and STAT1 has often been ascribed a tumor-suppressive function, inducing cell cycle arrest, apoptosis and suppression of metastasis [33, 34, 35]. However, this view was challenged recently by reports showing STAT1 involvement in radioresistance [36, 37] and a role as a promoter, not a suppressor, in breast and gastric cancer cells [38, 39, 40]. In agreement with previous studies, we find that STAT1 is robustly expressed and phosphorylated in ALCL cell lines (see Fig. 2c) [34], and that loss of STAT1 has no influence on STAT3 or pSTAT3 levels despite prominent growth reduction, indicating that STAT1 acts independently. Moreover, T-cell-specific TYK2 deletion in CD4-NPM-ALK transgenic mice led to reduced STAT1 phosphorylation and increased survival, whereas the analogous deletion of STAT3 in the same mouse model had no effect [32]. In view of our studies, it will be important in the future to also inactivate STAT1 in T-lineage cells of this mouse ALCL model, to clearly define the dependency on STAT1.
BCL2 expression is mostly absent in ALCL, ALK+ lymphomas, but MCL1 expression can be detected in the majority of these lymphomas [23, 41]. We show in this study that MCL1 expression is correlated with TYK2 expression, and TYK2 ablation in mice or human cell lines leads to markedly reduced levels of the pro-survival protein MCL1. This is in contrast to the situation in transformed thymocytes, in which we have shown that TYK2 signals through STAT1 to upregulate BCL2. Although T-ALL and ALCL both represent transformed lymphocytes within the spectrum of the T-cell lineage, signal transduction pathways clearly differ between developing thymocytes and post-antigen-stimulation memory T cells, the latter being a possible cell of origin of ALCL. It is interesting that the TYK2-STAT1 signalling axis has been aberrantly rewired in each of these hematopoietic malignancies to promote cell survival, while taking advantage of different pro-survival proteins as effectors to thwart cell death at these different stages of lymphoid development. TYK2 has previously been described in the context of IL-10, IL-12, IL-22, IL-23, and IFN type I and III signaling, as well as being linked to defective IL-12 signaling in Tyk2−/− mice [29, 30, 42, 43]. Autocrine IL-10 signaling has recently been shown to upregulate TYK2 and to activate STAT1 signaling in T-ALL [3]. Among peripheral T-cell lymphomas, ALCL has been associated with the highest level of IL-10 expression [44], and ALCL cells also express the IL-10 receptor, which provides an autocrine mechanism for the aberrant activation of TYK2 in ALCL cells. In this paper, we demonstrate high levels of expression of not only IL-10 but also IL-22 in ALCL cells, coincident with expression of IL10RA, IL22R1 and the common chain IL10RB. Interestingly, it has been shown that the aberrant expression of IL22R1 in ALCL cells is induced by NPM-ALK and mediates the pro-proliferative effect of IL-22 [45]. In ALCL patients, plasma levels of IL-22 are increased, and these levels become undetectable in patients who reach complete remission [46]. Thus, both IL-10 and IL-22 are expressed by ALCL cells and form autocrine loops to activate TYK2, which along with its heterodimeric partner JAK1 provides the signaling component of both the IL-10 receptor (IL10RA/IL10RB) and the IL-22 receptor (IL22R1/IL10RB) [47]. JAK1, JAK2, and JAK3, but not TYK2, have been studied in the context of ALCL. In about 15% of ALCL, ALK− patients, JAK1 (G1097) mutations and/or STAT3 (Y640) mutations have been described; these lead to increased oncogenic signalling and, in the latter case, to Ruxolitinib resistance [6]. JAK2 has been described to be highly phosphorylated in ALCL, ALK+ cell lines and to interact directly with NPM-ALK, with potential activation of STAT5 [48]. Earlier work in ALCL cell lines has documented constitutive JAK3 phosphorylation with STAT3 activation, such that apoptosis is induced when JAK3 is inhibited [49, 50, 51]. In this context, it is interesting to note that the TYK2 inhibitors, and also the pan-JAK inhibitors, in our study were somewhat more effective in ALK− compared to ALK+ ALCL cell lines (Fig. 3a, b). This finding is consistent with recent work showing higher responsiveness of pSTAT3-expressing ALCL, ALK− cells to JAK1-3 kinase inhibition [52]. The TYK2 inhibitors used in our study were both designed to be ATP-competitive kinase domain inhibitors, but additional inhibitor types are currently under development.
Since it has been shown that TYK2 abrogation in healthy tissue attenuates but does not eliminate the effect of cytokines, it would be expected that the side effect profile of TYK2-specific inhibitors would be milder than that of pan-JAK inhibitors [53, 54]. Currently, TYK2 inhibitors are being developed mostly for the treatment of autoimmune/inflammatory diseases. However, in light of the findings in this paper and other recent reports [3, 55], it appears that TYK2 inhibitors may eventually have roles in anti-cancer treatment. Change history 14 July 2020 An amendment to this paper has been published and can be accessed via a link at the top of the paper.
Anaplastic large-cell lymphomas (ALCL) are rare cancers of the white blood cells. New research from the international ERIA consortium, led by scientists in Vienna, has now shown that the same signaling pathway is essential to the growth of cancer cells in various forms of ALCL: TYK2 (tyrosine kinase 2, an important component of the immune system) prevents apoptotic cell death by increasing the production of Mcl1, a protein belonging to the BCL2 family. Due to its unique enzymatic composition, TYK2 is an interesting therapeutic target, making TYK2-specific inhibitors highly promising as new therapeutic agents in ALCL. A particularly fruitful area of personalised medicine is cancer treatment, where improved diagnostic methods are able to break cancers down into ever smaller subcategories, making it possible to apply individual treatment strategies. The molecular analysis of human tumour samples has therefore become a focus of cancer research, with the aim of identifying new therapeutic targets and validating them in tumour models, in order to improve the clinical management of cancer patients. However, this confronts clinicians with several challenges, including increasingly comprehensive diagnostics as well as the problem of adequately validating these data for smaller patient groups. This is all the more urgent in the case of rare cancers such as ALCL, where the number of patients is small. Nicole Prutsch and Olaf Merkel from the Medical University of Vienna and their international colleagues have now reported in the journal Leukemia that, rather than finding yet another subdivision of the ALCL subgroups, they have managed to identify a common actor in ALCL patients. TYK2 is not only expressed in all patients but produces the same anti-apoptotic reaction, which keeps the lymphoma cells alive and so helps the tumour to grow. "We were therefore able to regard the TYK2 signals as the Achilles heel of ALCL, since both types of ALCL that we investigated relied on its activity to maintain the essential signal to protect against cell death," explains Olaf Merkel, who is co-last author of this publication together with Lukas Kenner. Attenuating the TYK2 signal in cell culture resulted in rapid cell death, and in ALCL model mice in which TYK2 was genetically switched off, the researchers observed that the laboratory animals survived for longer. Lukas Kenner, from MedUni Vienna and the Ludwig Boltzmann Institute for Cancer Research and co-founder of the European Research Initiative on ALK-mediated diseases (ERIA), emphasises the potential therapeutic significance of TYK2 inhibitors in ALCL. "We look forward to TYK2 inhibitors, which are currently being developed for treating immunological diseases, becoming available, since we urgently need better treatments for rare lymphomas," he says.
10.1038/s41375-018-0239-1
Biology
A molecular machine's secret weapon exposed
Leemor Joshua-Tor, A shape-shifting nuclease unravels structured RNA, Nature Structural & Molecular Biology (2023). DOI: 10.1038/s41594-023-00923-x. www.nature.com/articles/s41594-023-00923-x Journal information: Nature Structural & Molecular Biology
https://dx.doi.org/10.1038/s41594-023-00923-x
https://phys.org/news/2023-02-molecular-machine-secret-weapon-exposed.html
Abstract RNA turnover pathways ensure appropriate gene expression levels by eliminating unwanted transcripts. Dis3-like 2 (Dis3L2) is a 3′–5′ exoribonuclease that plays a critical role in human development. Dis3L2 independently degrades structured substrates, including coding and noncoding 3′ uridylated RNAs. While the basis for Dis3L2’s substrate recognition has been well characterized, the mechanism of structured RNA degradation by this family of enzymes is unknown. We characterized the discrete steps of the degradation cycle by determining cryogenic electron microscopy structures representing snapshots along the RNA turnover pathway and measuring kinetic parameters for RNA processing. We discovered a dramatic conformational change that is triggered by double-stranded RNA (dsRNA), repositioning two cold shock domains by 70 Å. This movement exposes a trihelix linker region, which acts as a wedge to separate the two RNA strands. Furthermore, we show that the trihelix linker is critical for dsRNA, but not single-stranded RNA, degradation. These findings reveal the conformational plasticity of Dis3L2 and detail a mechanism of structured RNA degradation. Main RNA quality control and turnover are vital for cellular function, yet little is known about how nucleases deal with the diverse universe of structured RNAs. Dis3-like 2 (Dis3L2) is an RNase II/R family 3′–5′ hydrolytic exoribonuclease that plays an important role in development and differentiation 1 , 2 , cell proliferation 3 , 4 , 5 , 6 , calcium homeostasis 7 and apoptosis 8 , 9 by effectively removing or processing 3′ uridylated RNAs 1 , 10 , 11 , 12 , 13 . Dis3L2 targets are oligouridylated by the terminal uridylyl transferases (or TUTs) 14 , 15 , 16 . The specificity toward uridylated RNAs is conferred through a network of base-specific hydrogen bonds along the protein’s extensive RNA-binding surface, as demonstrated by the structure of Mus musculus Dis3L2 (MmDis3L2) in complex with a U 13 RNA 17 . Genetic loss of Dis3L2 causes Perlman syndrome, a congenital overgrowth disorder that is characterized by developmental delay, renal abnormalities, neonatal mortality and high rates of Wilms’ tumors 1 . The first reported physiological substrates of Dis3L2 were the uridylated precursors of let-7 microRNAs 10 , 13 , which play an important role in stem cell differentiation by silencing growth and proliferation genes such as HMGA2 , MYC and Ras 18 , 19 , 20 , 21 , 22 , 23 . Many other noncoding RNA targets have since been reported, including other microRNAs 24 , 25 , transfer RNA fragments 16 , small nuclear RNA 26 , the intermediate of 5.8S ribosomal RNA processing 7S B 27 , the long noncoding RNA RMRP 28 , and the 7SL component of the ribonucleoprotein signal recognition particle required for endoplasmic reticulum-targeted translation 7 . The latter is probably responsible for the Perlman syndrome phenotype, with aberrant uridylated 7SL leading to endoplasmic reticulum calcium leakage that perturbs embryonic stem cell differentiation, particularly in the renal lineage 7 . Unlike a number of structurally similar homologs, Dis3L2 can degrade structured RNAs independent of external helicase activity 1 , 10 , 12 , 29 . Little is known about how Dis3L2 or other capable RNase R/II family nucleases independently degrade structured RNA. 
We determined the structures of an RNase R/II family nuclease bound to a series of structured RNA substrates and analyzed the kinetic profiles of wild-type Homo sapiens Dis3L2 (HsDis3L2) and engineered mutants to reveal how this nuclease achieves highly efficient degradation of structured RNA. Results Initial binding of Dis3L2 to structured substrates To understand the presubstrate binding state, we used cryogenic electron microscopy (cryo-EM) to determine the structure of RNA-free HsDis3L2 to 3.4 Å resolution (construct Dis3L2 D391N , residues 1–858: carboxy (C)-terminal truncation of residues 859–885, and an engineered catalytic mutation substituting Asn for Asp at residue 391 in Dis3L2) (see Methods , Fig. 1a,b and Extended Data Fig. 1a–c ). RNA-free HsDis3L2 has a vase-like conformation in which three oligonucleotide/oligosaccharide-binding (OB) domains—two cold shock domains (CSDs) and an S1 domain—encircle a funnel-like tunnel that reaches into the Ribonuclease B (RNB) domain and leads to the active site (Fig. 1b and Extended Data Fig. 1d ). The OB domains provide a large positively charged surface, which probably acts as a landing pad for the negatively charged RNA (Fig. 1c ). The overall structure of RNA-free Dis3L2 is very similar to the structure of the mouse Dis3L2–ssRNA complex (MmDis3L2–U 13 ) (root mean square deviation (RMSD) = 1.2 Å, calculated over all Cα pairs) 17 . Thus, the apoenzyme is preorganized to bind single-stranded RNA (ssRNA). Fig. 1: RNA-free Dis3L2 is preorganized to bind RNA substrates. a , Domain compositions of Dis3L2 and the homologous proteins Dis3 and RNase R (green, N-terminal PIN domain; pink, CSD1; orange, CSD2; blue, RNB; purple, S1 domain). b , Side (left) and top or apical (right) views of RNA-free Dis3L2 D391N with domain labels. c , Charge distribution of the Dis3L2 surface from a side view (left) and a view of the apical face (right), as calculated using PyMol APBS at an ionic strength of 150 mM ( Methods ), where k B is the Boltzmann constant, T is the temperature in kelvin and e c is the unit of charge. Full size image To probe the initial binding of Dis3L2 to structured substrates, we designed a short hairpin RNA mimicking the base of the pre-let-7g stem, with a UUCG tetraloop for stability and a 3′ GC(U) 14 (16-nucleotide) overhang as the uridylated tail (hairpinA–GCU 14 ; Fig. 2a ). The resulting 3.1 Å cryo-EM structure of the Dis3L2 D391N –hairpinA–GCU 14 complex revealed that Dis3L2 maintains the same vase conformation as is observed in the RNA-free form (Fig. 2b and Extended Data Fig. 2 ). However, the double-helical stem of the RNA was not resolved, suggesting that double-stranded RNA (dsRNA) is not stably engaged by the nuclease upon initial substrate association. Nonetheless, the quality of the density allowed assignment of 15 of the 16 nucleotides of the single-stranded 3′ overhang. The RNA follows the same path as is seen in the MmDis3L2–U 13 structure (RMSD = 0.8 Å over Cα atom pairs) and also forms numerous base-specific hydrogen bonds with the protein (Fig. 2c–f ). As in the MmDis3L2–U 13 structure, seven nucleotides at the 3′ end are buried in the RNB tunnel (Fig. 2f ), which can only accommodate ssRNA. Fig. 2: Structured RNA is not engaged with Dis3L2 upon initial binding to the 3′ oligo-U tail. a , RNA fold of hairpinA–GCU 14 . b , Cryo-EM structure of Dis3L2 D391N in complex with hairpinA–GCU 14 .
c , Alignment showing the active site of MmDis3L2 in complex with U 13 (gray) (PDB: 4PMW ) and HsDis3L2 in complex with hairpinA–GCU 14 . d , The hydrogen bond network from C20–U25 traverses the apical face of the protein and involves both CSDs and the S1 domains. e , Following U25, the RNA enters into the narrow portion of the channel. f , The hydrogen bond network continues within the RNB all the way to the active site. For C20–U32, these include base-specific hydrogen bond interactions. Full size image RNA double helix engagement by Dis3L2 To examine the structural changes occurring upon substrate processing, we shortened the 3′ overhang to 12 uridines and further modified the stem to increase stability (hairpinC–U 12 ; Fig. 3a ). We obtained a 3.1 Å structure of wild-type HsDis3L2 with hairpinC–U 12 in which the double-helical stem of the RNA hairpin is clearly visible and nestled between the two CSDs and the S1 domain (Fig. 3b,c and Extended Data Fig. 3 ). The overall conformation of Dis3L2 does not change compared with the hairpinA–GCU 14 or RNA-free Dis3L2 (RMSD = 0.56 and 0.61 Å, calculated over all Cα pairs, respectively). The basal junction of the hairpin interacts with the S1 and CSD1 domains, while the apical loop interacts with CSD2 (Fig. 3c–e ). At this point, when the 3′ overhang is 12 nucleotides long, the 5′ end at the double strand–single strand junction moves toward a loop in CSD1 (N76–H81) (Fig. 3d ). While the true start of the duplex (C2–G21) lies slightly closer to the S1 domain, the U1 5′ overhang forms a wobble base pair with U23 near the N76–H81 loop in CSD1. Fig. 3: The cryo-EM structure of HsDis3L2 in complex with hairpinC–U 12 shows engagement of the RNA duplex. a , RNA fold of hairpinC–U 12 . b , Wild-type Dis3L2 in complex with hairpinC–U 12 . c , View of b at 90°. d , Basal junction of the double-helical stem and overhang. e , Overlay of the map and structure showing the position of the double-helical stem of hairpinC–U 12 . Full size image Drastic conformational rearrangement before dsRNA unwinding Next, we designed a substrate with an even shorter, seven-nucleotide, 3′ overhang (hairpinD–U 7 ), since this is the minimal ssRNA length needed to reach the active site from the opening to the tunnel (Fig. 4a ). Cryo-EM analysis of wild-type HsDis3L2 in complex with hairpinD–U 7 resulted in a 2.8 Å structure (Fig. 4b,c and Extended Data Fig. 4a–d ). Strikingly, it is immediately evident that the conformation of this complex is markedly different from the vase conformation observed thus far (Fig. 4d–h ). The two CSDs moved ~70 Å clear to the other side of the vase rim via a hinge in the linker region between CSD2 and the RNB (Fig. 4e and Supplementary Video 1 ). This resulted in a new conformation reminiscent of a prong when viewed from the side (Fig. 4c ). This large rearrangement is accompanied by smaller conformational changes in the S1 and RNB domains. The S1 domain moves such that it angles toward the double helix where it forms new interactions with nucleotides C15 and G16 in the backbone of the double helix, while a loop in the RNB domain moves by 10 Å in response to the new positioning of the CSDs (Fig. 4g,h and Extended Data Fig. 4g ). We confirmed that the prong conformation is not an inactive trapped state caused by the high (75%) GC content in hairpinD–U 7 by testing the activity of Dis3L2 on this substrate as well as analyzing cryo-EM data of Dis3L2 in complex with hairpinE–U 7 , which had a GC content of 50% (Extended Data Fig. 4e,f ). 
Although we were unable to obtain a high-resolution reconstruction, the maps clearly show that Dis3L2 is in the prong conformation. Fig. 4: Once the structured RNA gets closer to the enzyme, the CSDs reposition dramatically by 70 Å. a , RNA fold of hairpinD–U 7 with a shorter, seven-nucleotide, 3′ overhang. b , Structure of wild-type Dis3L2 in complex with hairpinD–U 7 . c , A 90° view of b . d , Final 3D map of human Dis3L2 in complex with hairpinD–U 7 , superimposed with the structure of RNA-free Dis3L2 fit in the map. e , View from the top, showing the change in position of the S1 and CSD domains in the vase and prong conformation. f , Alignment of RNA-free Dis3L2 (gray; vase) and Dis3L2–hairpinD–U 7 (colored domains and RNA; prong). The CSD domains are positioned behind S1 in the prong in this view. g , h , Alignment of hairpinA-GCU 14 and hairpinD-U 7 Dis3L2 structures showing the change in the positioning of the S1 domain ( g ) and a hairpin in the RNB (residues 555–572) ( h ). Full size image The two CSDs move as a block with their relative orientations unchanged (Extended Data Fig. 4h,i ). An overlay of the RNA-free and hairpinD–U 7 Dis3L2 structures shows that the 5′ strand of the dsRNA hairpin would clash with CSD1 if the CSDs would remain in their original position (Fig. 4f ). Thus, it appears that upon shortening of the ssRNA overhang, the structured portion of the RNA substrate pokes the enzyme and triggers this large rearrangement. The consequence of this movement is that it allows the structured portion of the RNA to come into contact with the RNB while also shortening the length of the narrow tunnel to the active site by two nucleotides (Fig. 5 and Extended Data Fig. 5a ). Moreover, the RNA double-helical stem is now positioned on top of the junction between a bundle of three RNB helices and a linker connecting them to the rest of the RNB (Fig. 5b,c and Extended Data Fig. 5 ). This junction would then act as a wedge to separate the two strands of RNA, allowing the 3′ strand to enter into the narrow tunnel while the 5′ strand peels away. Six out of seven residues in the single-stranded overhang are in the same position as in the hairpinA–GCU 14 and hairpinC–U 12 structures, with five fully buried in the tunnel of the RNB (Fig. 5c,d ). There is no change in the final approach to the active site. However, the seventh base from the 3′ end, which is also the first single-stranded base in hairpinD–U 7 (nucleotide U17), is no longer pointing towards the S1 domain and N663 as it is in the vase conformation (nucleotide U28 in hairpinA–GCU 14 ) (Fig. 5a ). Instead, it flips to stack underneath G16 of the double-stranded stem and forms a pseudo base pair with R616, which emanates from the start of three α helices in the RNB (Fig. 5b,c ). Fig. 5: The base of the double-helical stem is positioned above an RNB trihelix linker. a , b , Comparison of RNA conformations near the trihelix linker in the hairpinA–GCU 14 ( a ) and hairpinD–U 7 structures ( b ). The seventh base from the 3′ end (U28 in hairpinA–GCU 14 and U17 in hairpin–U 7 ; yellow arrow) swings from interacting with the side chain of N663 (hairpinA–GCU 14 ) to pointing towards R616 (hairpinD–U 7 ), which stacks under C1. The trihelix linker forms the final barrier to the dsRNA before the tunnel to the active site. c , Overlay of the cryo-EM map of the HsDis3L2–hairpinD–U 7 structure at the hairpin basal junction. d , Nucleotides U19–U23 are buried in the tunnel of the RNB. 
Full size image Upon close examination of our cryo-EM data, we noticed that heterogeneous refinement yielded not only the high-resolution structure of the prong, but also a smaller three-dimensional (3D) class representing the vase (Extended Data Fig. 6a ). Using a standardized analysis, we examined cryo-EM data of Dis3L2 with a series of substrates with identical stem loops but of varying single-stranded overhang lengths and looked at their particle distributions between 3D classes after heterogeneous refinement ( Methods and Extended Data Fig. 6b–d ). This analysis revealed the point at which the drastic change between the vase and prong conformation occurs. The vase conformation is the only one observed in the RNA-free form and with long single-stranded 3′ overhangs (Fig. 6 and Extended Data Fig. 6c ). The prong conformation is first observed when the overhang is eight nucleotides long and is the only conformation observed when the overhang length is shortened to five nucleotides (Fig. 6 and Extended Data Fig. 6c ). This illustrates the shape-shifting nature of the enzyme to enable the degradation of structured RNA substrates, and suggests that this dramatic conformational change is triggered by the RNA when the overhang is roughly eight nucleotides long. A similar distribution is seen for independent datasets (Extended Data Fig. 6c ). Fig. 6: Dis3L2 undergoes a conformational change at eight-nucleotide overhang lengths. Distribution of particles after heterogeneous refinement for select datasets. The x axis shows the percentage of particles in the vase (pink) or prong (blue) conformation. The y axis shows individual datasets of RNA-free or hairpin RNA-bound Dis3L2, with numbers denoting the length of the 3′ overhang in nucleotides (nt). The deeper color indicates higher-quality 3D reconstructions, whereas gray indicates particles that did not contribute to a meaningful reconstruction. Source data Full size image Dis3L2 degrades structured substrates with high processivity To quantitatively understand how the structural features described above impact the function of Dis3L2, we carried out presteady-state kinetic assays and pulse–chase experiments to measure processivity ( P ) and elementary rate constants for RNA binding (association ( k on ) and dissociation ( k off )) and degradation (forward step ( k f )) for wild-type and mutant forms of Dis3L2 at single-nucleotide resolution (Extended Data Fig. 7a–g ). Our global kinetic analysis also provided a measure for the contribution of nonproductive binding (that is, substrate association that is not conducive to RNA degradation) (Extended Data Fig. 7a,e ). Using a 5′ 32 P-radiolabeled 34-nucleotide hairpin RNA with a seven-base pair stem and a 16-nucleotide overhang (hairpinA–GCU 14 ) as a substrate (Fig. 7a ), we found that wild-type Dis3L2 is distributive for the first step ( P = 0.23 ± 0.003), requiring approximately three binding events before cleaving the first nucleotide (Fig. 7b,c ). This may serve as an important checkpoint before initiation of processive degradation in the second phase, where multiple nucleotides are cleaved before a dissociation event (Fig. 7c ). This pattern was also observed in substrates where the terminal base pairs were switched from GU and AU to GC and CG, respectively (hairpinB–GCU 14 ), or when the overhang was composed purely of Us (hairpinA–U 16 ) (Extended Data Fig. 7h–j ). 
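The vase-versus-prong quantification behind Fig. 6 above reduces, computationally, to tallying the particles that heterogeneous refinement assigns to each 3D class in every dataset. A minimal sketch of that bookkeeping follows; the particle counts are hypothetical placeholders, not values from the study.

# Hypothetical particle counts per dataset after heterogeneous refinement;
# keys give the 3' overhang length in nucleotides ("free" = RNA-free).
# Counts are illustrative placeholders, not values from the study.
counts = {
    "free": {"vase": 180000, "prong": 0},
    "12": {"vase": 150000, "prong": 0},
    "8": {"vase": 90000, "prong": 40000},
    "7": {"vase": 30000, "prong": 110000},
    "5": {"vase": 0, "prong": 120000},
}

for dataset, classes in counts.items():
    total = sum(classes.values())
    vase_pct = 100 * classes["vase"] / total
    prong_pct = 100 * classes["prong"] / total
    print(f"{dataset:>4}: vase {vase_pct:5.1f}% | prong {prong_pct:5.1f}%")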
In the case of hairpinB–GCU 14 , an interesting decrease in processivity was seen at an overhang length of five nucleotides, which may reflect some stalling before dsRNA unwinding (five to four nucleotides). Fig. 7: Kinetic profile of structured RNA degradation by wild-type HsDis3L2 and mutants at single-nucleotide resolution. a , Schematic of hairpinA–GCU 14 . For simplicity, the kinetic data are numbered from 16 (3′ end) to 0 (single strand–double strand junction) to denote the nucleotide position. b , Two representative gels from presteady-state nuclease titration assays with 1 nM 5′ P 32 -radiolabeled hairpinA–GCU 14 and 25 nM HsDis3L2 and HsDis3, respectively. The overall lengths of the species and the single-stranded overhang lengths are indicated on the left and right of the panels, respectively. Each experiment was repeated independently at multiple concentrations of each enzyme with n = 2. c , Processivity ( P ) of wild-type Dis3L2 versus Dis3. d , Dissociation rate constants ( k off ) and forward rate constants ( k f ) of Dis3L2. e , Association rate constants ( k on ) of wild-type human Dis3L2. k on data point x = −8 was removed due to a large uncertainty value. The x axis shows the number of nucleotides from the start of the double-stranded stem. f , Domain composition of wild-type (WT) human Dis3L2 and the ΔCSD and Δ123H deletion mutants. g , Representative gels from pulse–chase reactions of wild-type human Dis3L2, ΔCSD and Δ123H at a 50 nM concentration with 1 nM radiolabeled hairpinA–GCU 14 . Cold chase was added to the reaction at the 3-min timepoint to a final concentration of 5,000 nM. Measurements were taken prechase at 3 min (pink dot) and postchase at 4, 5, 7.5 and 10 min (blue gradient dots; light to dark, respectively). Each experiment was repeated independently at two concentrations of each enzyme with n = 3. h , Processivity ( P ) of wild-type Dis3L2 versus ΔCSD. i , Processivity ( P ) of wild-type Dis3L2 versus Δ123H. The error bars in plots d and e represent s.e.m. from the global fit of data from the enzyme titrations (nine Dis3L2 concentrations with n = 5) and pulse–chase experiments (two Dis3L2 concentrations with n = 4). The error bars for the processivity plots in c , h and i show propagated errors calculated from the s.e.m. of k f and k off derived from the same global fit of the data. Source data Full size image During the course of the reaction, as the RNA is progressively shortened from the 3′ end, the k off decreases (Fig. 7d ). We observe a steeper decline in the k off starting around 11 nucleotides, which could reflect additional stabilizing interactions that become available when the phosphate backbone of the double-helical portion of the substrate is brought into contact with the enzyme (as was observed in the hairpinC–U 12 and hairpinD–U 7 structures). The k on also decreases with RNA length, probably due to the loss of substrate interaction points (fewer 3′ U binding sites) necessary for stable association (Fig. 7e ). Interestingly, the k f varies with the stage of substrate degradation (Fig. 7d ). After an initial slow step, k f increases as the enzyme degrades through the single-stranded overhang. When the overhang is shortened to 11 nucleotides, k f peaks and then begins to decrease as the enzyme encounters the dsRNA. This suggests that catalysis and/or translocation slow down as the dsRNA is engaged and later unwound, as k f reflects the slower of these two processes. 
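A common formulation of single-step processivity, and the one assumed in this sketch, is P = k_f / (k_f + k_off): the probability that the enzyme takes the next forward (cleavage) step rather than dissociating. The sketch also propagates the s.e.m. of k_f and k_off into an uncertainty on P, as the Fig. 7 legend describes; the rate constants below are hypothetical placeholders, not values from the study.

import numpy as np

def processivity(kf, koff, kf_sem, koff_sem):
    """P = kf / (kf + koff) with first-order error propagation
    from the s.e.m. of kf and koff."""
    p = kf / (kf + koff)
    dp_dkf = koff / (kf + koff) ** 2    # partial derivative wrt kf
    dp_dkoff = -kf / (kf + koff) ** 2   # partial derivative wrt koff
    p_sem = np.sqrt((dp_dkf * kf_sem) ** 2 + (dp_dkoff * koff_sem) ** 2)
    return p, p_sem

# Hypothetical rate constants (per second); not values from the study.
examples = [
    ("first cleavage (distributive)", 0.30, 1.00, 0.01, 0.05),
    ("processive phase", 2.00, 0.05, 0.10, 0.01),
]
for label, kf, koff, kf_sem, koff_sem in examples:
    p, p_sem = processivity(kf, koff, kf_sem, koff_sem)
    print(f"{label}: P = {p:.2f} +/- {p_sem:.3f}")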
Overall, k off has the dominant effect on processivity across all intermediate species (Fig. 7c ). When the substrate RNA initially associates with Dis3L2, it can do so productively (allowing degradation to commence from the 3′ end) or nonproductively. Examples of the latter include: the terminal nucleotide not binding fully into the active site; misorientation of the RNA; or a nonproductive/inactive conformation of Dis3L2. We observe roughly tenfold tighter nonproductive compared with productive binding for the very first step (Extended Data Fig. 7e ). To confirm that the nonproductively bound species are not due to noncleavable RNA, we performed long-time-course experiments and showed that all species are ultimately degraded to the five- to four-nucleotide end product (Extended Data Fig. 7b,c ). This suggests that along with mediating association with substrates targeted for degradation, Dis3L2’s RNA-binding regions may play a role in non-nucleolytic RNA-binding functions. To assess whether these observations could be applicable to other substrates, we carried out additional processivity analyses of Dis3L2 degradation on two hairpins (hairpinF–U 16 and hairpinG–U 16 ) that mimic the 3′ end of the 7SL RNA (a natural substrate of Dis3L2) and one hairpin with a longer stem (hairpinI–GCU 14 ) (Extended Data Fig. 8 ). In all three cases, we observe a distributive first step followed by a processive phase, in line with our global observation of the processivity on hairpinA–GCU 14 . Although some minor differences in the kinetic profile of these substrates are observed, the overall pattern remains. However, the magnitude of the first distributive step and the rate at which Dis3L2 reaches high processivity differ between substrates. The 7SL mimic hairpinG–U 16 has the lowest first-step processivity ( P = 0.16 ± 0.038) and takes the longest to reach the very high processivity seen with hairpinA–GCU 14 in the second phase, possibly due to the bulkiness of this substrate. The 7SL mimic hairpinF–U 16 ( P 16 = 0.21 ± 0.026) and hairpinI–GCU 14 ( P 16 = 0.28 ± 0.080), which both harbor a single hairpin, have a kinetic profile more similar to hairpinA–GCU 14 . Nevertheless, the overall features for all these substrates are similar. We compared the kinetic profile of HsDis3L2 with that of HsDis3, an exosome-associated nuclease of this family that, in contrast with Dis3L2, cannot independently degrade structured substrates (Fig. 7b,c and Extended Data Fig. 9 ) 12 . HsDis3 appears to bind the oligo-U-tailed substrate much more tightly and enters directly into processive degradation without an initial distributive step (Extended Data Fig. 9b,c ). It maintains high processivity up until the single-stranded overhang reaches three to two nucleotides in length, at which point there is a drastic decrease in processivity as a result of a large increase in the dissociation rate ( k off ) (Fig. 7b and Extended Data Fig. 9b,e ). This shows that, unlike HsDis3L2, HsDis3 is not able to maintain sufficient association with the substrate once it encounters the structured portion of the substrate. CSDs play multiple roles in RNA processing Since the CSDs appeared to be the initial recognition sites for the RNA but are then triggered to move to the other side of the protein during RNA processing, we tested whether they contribute predominantly to initial substrate association or to ssRNA degradation.
Removal of the CSDs (ΔCSD; deletion of residues 1–365) led to lower processivity for the very first step, largely due to a lower forward rate constant, indicating that the CSDs play a role in augmenting the rate of catalysis for the first nucleotide cleavage (Fig. 7f–h and Extended Data Fig. 10a–c ). Furthermore, our analysis has shown that the CSDs provide roughly half of the nonproductive binding affinity (nonproductive K ½ = 4.2 ± 0.41 nM (wild type) versus 8.5 ± 0.73 nM (ΔCSD)) and removing them improves the productive binding fivefold (productive K ½ = 508.3 ± 1.27 nM (wild type) versus 97.7 ± 0.35 nM (ΔCSD)). During the following ssRNA degradation steps, ΔCSD has a similar processivity to wild-type Dis3L2 (Fig. 7h ). However, there is a substantial decrease in the processivity as ΔCSD approaches the structured part of the RNA, showing a marked reduction at the three- and two-nucleotide single-stranded overhang position, as well as at the −1 and −2 nucleotide positions, which now fall within the RNA stem. This is a result of a notable increase in the dissociation rate ( k off ) (Fig. 7h and Extended Data Fig. 10b ). Thus, the CSDs contribute to both initiation of RNA degradation and maintenance of substrate association during the initial unwinding steps. The RNB trihelix and linker are necessary for resolving dsRNA The Dis3L2–hairpinD–U 7 complex structure shows that the trihelix linker provides the final barrier before the narrow tunnel to the active site, suggesting a role in dsRNA unwinding. Deletion of the trihelix and linker (residues P612–M669: Δ123H) has a striking effect on substrate degradation and a buildup of intermediate species is observed at lengths close to the start of the double strand of hairpinA–GCU 14 (Fig. 7f,g,i ). Δ123H never reaches the processive phase, although there is a slight increase in the processivity during initial degradation of the ssRNA overhang. When the substrate shortens to ten nucleotides in the overhang, the dissociation rate ( k off ) increases significantly and the forward rate ( k f ) plateaus, leading to a dramatic drop in processivity and a buildup of species with three- and two-nucleotide overhangs (Extended Data Fig. 10d–f ). However, no such buildup was observed in the case of a single-stranded U 34 substrate, or with Dis3L2 mutants in which only one of the three helices was deleted (Extended Data Fig. 10g–k ). This shows that the trihelix linker module as a whole is crucial for dsRNA, but not ssRNA, degradation. Discussion Dis3L2 has emerged as a key nuclease responsible for the specific targeting and degradation of cytoplasmic uridylated RNAs, many of which are highly structured. Given that many exoribonucleases employ the help of helicases to degrade structured substrates, Dis3L2’s ability to independently degrade dsRNAs with high processivity is mechanistically interesting. However, little was known about how Dis3L2 or other capable RNase R/II family nucleases achieve independent degradation of dsRNA. Here, we discovered a large conformational change that is triggered by the dsRNA and exposes a trihelix linker module that is crucial for the degradation of dsRNA but not ssRNA. We observed engagement of the dsRNA by Dis3L2’s OB domains and uncovered an important contribution of the CSDs to initial nucleotide cleavage and duplex unwinding. We also identified the contribution of nonproductive binding. Below we discuss the implications of our findings for Dis3L2’s mechanism of action and the function of other RNase R/II nucleases. 
Role of the CSDs in the degradation of structured RNA substrates Analysis of a Dis3L2 mutant in which the CSDs had been removed, ΔCSD, showed an impact on both the first catalytic step and the initial unwinding phase (Extended Data Fig. 10a,b ). Moreover, before duplex unwinding, the dissociation rate constant ( k off ) increased (Extended Data Fig. 10b ). Because our structural data show that the CSDs switch to the prong conformation at overhang lengths of roughly eight nucleotides (Fig. 6 ), the binding defect observed during the unwinding phase for the ΔCSD variant is unexpected. There are two models that might explain these observations. The CSDs could be contributing to binding indirectly, by stabilizing the repositioning of the S1 domain to directly interact with the RNA double helix in the prong conformation. Alternatively, the CSDs may partially swing back to bind the dsRNA directly in a modified vase conformation (the CSDs would clash with the 5′ strand in a full vase conformation). Since cryo-EM analysis showed that a hairpinD–U 5 substrate led to the prong conformation alone, the former model seems more likely. However, active turnover conditions may allow for more dynamic back-and-forth movement of the CSDs during degradation. Nonproductive binding and non-nuclease roles of Dis3L2 An example of nonproductive binding was observed in the crystal structure of MmDis3L2 (ref. 17 ). While the RNA substrate provided was only 13 nucleotides in length, electron density for 14 nucleotides was observed. This was due to two different positions of the RNA: with the 3′ end either right in the active site (in a productive configuration) or withdrawn by one nucleotide, leaving the active site open. The latter position represents a nonproductive state. Nonproductive binding has been measured in other nucleases such as RRP6 (ref. 30 ). The large contribution from nonproductive binding might indicate that Dis3L2 functions in other roles that do not require nuclease activity. A recent study of hepatocellular carcinoma appears to have identified one such case 3 . Dis3L2 was found to be highly expressed in hepatocellular carcinoma tissues and promoted alternative splicing of the Rac1 gene through a nuclease-independent mechanism. Dis3L2 was shown to bind the Rac1 pre-messenger RNA via the S1 domain and recruit heterogeneous nuclear ribonucleoprotein (hnRNP)–U through its CSDs. This enabled the production of Rac1b, an isoform that promotes transformation and tumorigenesis 3 . Evolutionary analysis of Dis3L2 has revealed that the protein has lost nuclease activity at least four times during fungal evolution, while the CSDs have remained conserved, thereby suggesting a role for the protein outside of RNA degradation 31 . The Dis3L2 homolog in Saccharomyces cerevisiae , Ssd1, is an example of this, losing both canonical RNase II/R catalytic residues and acquiring a loop insertion that blocks the tunnel to the active site. Ssd1 has been reported to act as a translational repressor of certain messenger RNAs involved in cell growth and cytokinesis, and deletion of Ssd1 was found to have pleiotropic effects on stress tolerance 31 , 32 .
A comprehensive model for structured RNA degradation Combining our cryo-EM and kinetic data, we propose the following model: RNA degradation by Dis3L2 proceeds via a minimum of six sequential steps: (1) substrate association and quality control; (2) initial nucleotide cleavage; (3) 3′ single strand degradation; (4) double strand engagement; (5) dramatic domain realignment; and (6) concurrent double strand unwinding and degradation (Fig. 8 ). During the first four stages, Dis3L2 is in the vase conformation, with the S1 domain and CSDs positioned to form a large, positively charged surface for the oligo-U tail of the RNA. While the S1 and RNB domains provide crucial binding interactions, the CSDs enable effective initiation of degradation by contributing to the first catalytic step. This initial step is slow and acts as a substrate checkpoint. Once cleared, the enzyme enters the highly processive phase. When the overhang is shortened to 11 or 12 nucleotides, the RNA duplex engages with the enzyme, stabilized by contacts with the S1 domain and CSDs. At this point, the forward rate constant begins to decrease as the base of the dsRNA hairpin gets closer to the tunnel in the RNB domain. When the single-stranded 3′ overhang reaches nine or eight nucleotides, the 5′ strand of the RNA double helix runs into CSD1 and triggers a large movement of the two CSDs to the other side of the enzyme (see also Supplementary Video 1 ). In the resulting prong conformation, the S1 domain angles toward the tunnel and engages the backbone of the RNA double helix, which now sits over the RNB trihelix linker. The trihelix linker module acts as a wedge between the two RNA strands to separate them and enable the 3′ strand to enter into the narrow part of the now shortened tunnel. Strand unwinding probably initiates when the overhang reaches roughly five nucleotides. Alignment of the structure of Dis3L2 in complex with hairpinD–U 7 with known structures of RNase R/II family nucleases suggests that most would have to undergo a similar conformational change to allow the double-stranded portion of the RNA access to the trihelix linker wedge. Biochemical studies of Escherichia coli RNase R have also demonstrated the importance of the trihelix in dsRNA degradation 33 , suggesting that the mechanism proposed here could be conserved in other members of the RNase R/II family of nucleases. Collectively, this work unveils a molecular mechanism for efficient, regulatory degradation of structured RNAs by a vital nuclease. Fig. 8: Model of structured RNA processing by Dis3L2. RNA-free Dis3L2 is preorganized in a vase conformation to bind RNA substrates (yellow), with a seven-nucleotide-deep tunnel leading to the nuclease active site. When the RNA overhang is shortened to ~12 nucleotides, additional contacts are made to the dsRNA. Further shortening of the overhang triggers a large rearrangement of the two CSD domains (pink and orange) to the prong conformation and allows the base of the dsRNA to access a module in the RNB domain (blue) that acts as a wedge to separate the two RNA strands and allows entry of one of the strands into the narrow tunnel leading to the active site. In this way, the enzyme ensures continued RNA degradation during RNA duplex unwinding. Methods Protein preparation Full-length human Dis3L2, mutants with domain deletions, point mutations, and HsDis3 were cloned as amino (N)-terminal Strep-Sumo-TEV fusion proteins in a pFL vector of the MultiBac baculovirus expression system 34 .
Benchling was used for sequence analysis and primer design. Expression and purification followed a similar protocol to that detailed by Faehnle et al. 17 . All constructs were expressed in SF9 cells grown in HyClone CCM3 Media (Thermo Fisher Scientific) at 27 °C for 60 h. Cells were then pelleted and resuspended in wash buffer (50 mM Tris (pH 8), 100 mM NaCl and 5 mM dithiothreitol (DTT)) and a protease inhibitor cocktail was added before snap freezing with liquid N 2 for storage at −80 °C. After thawing, cells were lysed by increasing NaCl to 500 mM, followed by one round of sonication. Polyethyleneimine (0.1%) was added to the lysate and cell debris were cleared by 45 min of ultracentrifugation at 35,000 r.p.m. and 4 °C. The solution was then incubated with Strep-Tactin Superflow resin (IBA BioTAGnology) for 30 min while on a rolling shaker. The slurry was applied to a gravity column and washed with 20 column volumes of wash buffer before eluting the protein with 2 mM desthiobiotin in wash buffer. The Strep-Sumo-TEV tag was cleaved using TEV protease overnight at 4 °C. Cleavage efficiency and sample purity were assessed by sodium dodecyl sulfate polyacrylamide gel electrophoresis. The protein was then diluted to a final salt concentration of 50 mM in 25 mM HEPES (pH 7.5) and 5 mM DTT and applied to a HiTrap Heparin HP affinity purification column (GE Life Sciences) equilibrated in 50 mM NaCl, 25 mM HEPES (pH 7.5) and 5 mM DTT. The bound protein was eluted by applying a linearly increasing salt gradient (0.05–1 M NaCl). Pooled fractions of protein were then concentrated and loaded onto a 10/300 Superdex 200 Increase gel filtration column (GE Life Sciences) equilibrated in 20 mM HEPES (pH 7.5), 150 mM NaCl and 5 mM DTT. Protein purity was assessed by the quality of the chromatogram and by sodium dodecyl sulfate polyacrylamide gel electrophoresis. The concentrated sample was frozen in liquid N 2 and stored at −80 °C. Cryo-EM sample and grid preparation RNA oligos were purchased from Dharmacon. RNA secondary structure predictions were done using the Vienna RNAfold web server, and diagrams were made using Forna (Figs. 2a , 3a , 4a and 7a and Extended Data Figs. 4a and 8a ) 35 , 36 , 37 . RNA hairpins were annealed by dilution into 20 mM HEPES (pH 7.5), 150 mM NaCl and 5 mM DTT, heating to 95 °C for 3 min and stepwise cooling (95, 50, 30 and 4 °C). Complexes of human Dis3L2 and various RNAs were prepared by mixing equimolar ratios of Dis3L2 and RNA, incubating for 15 min and loading onto a 10/300 Superdex 200 Increase gel filtration column (GE Life Sciences) equilibrated in the same buffer (for the specific controls indicated in Extended Data Fig. 6c , 100 µM ethylenediaminetetraacetic acid (EDTA) was added to the buffer). Complex formation was evaluated by monitoring a peak shift and the ratio of absorbance at 260 and 280 nm. Fractions of the complex were then pooled and concentrated to roughly 0.5 mg ml −1 for Quantifoil carbon-coated Cu grids or 0.3 mg ml −1 for Au foil grids (Quantifoil). Next, 4 µl of sample was applied to glow-discharged grids and a Vitrobot plunger (Thermo Fisher Scientific) was used to freeze the grids in liquid ethane (95% humidity; 20 °C; blot force 4; blot time 2.5 s). Cryo-EM data acquisition and image processing Data were collected on a 300 kV Titan Krios electron microscope at either 160,000× (0.67 Å pixel size) or 130,000× (0.64 Å pixel size) on a Gatan K2 or K3 detector equipped with an energy filter.
A similar pipeline was used for all datasets (see below). Representative micrographs are provided in Supplementary Figs. 2 – 14 . Contrast transfer function estimation, motion correction and particle picking were done concurrently with data collection using WarpEM (version 1.0.8) 38 . Good particles (as selected by WarpEM) were imported into CryoSPARC, where 2D classification was performed. A subselection of particles was made by taking the best 2D classes: those with protein-like features, the highest resolution (below 4 Å for datasets that led to a high-resolution structure) and the largest particle numbers (classes with more than 5,000 particles). This particle selection was then used in multiclass ab initio reconstruction. The classes were evaluated and then used as starting references for heterogeneous refinement, this time using all of the good particles from Warp’s picking process. The best heterogeneous refinement classes and their particle subsets were then used for homogeneous refinement and in some cases nonuniform refinement in CryoSPARC (Structura Biotechnology; versions 3.0.0 and 3.1.0) 39 , 40 . The Dis3L2–hairpinA–GCU 14 dataset was also processed in Relion using its 3D classification, refinement, contrast transfer function refinement and particle polishing 41 , 42 , 43 . Two representative workflows (processing of the Dis3L2 D391N –hairpinA–GCU 14 and Dis3L2–hairpinD–U 7 datasets) are shown in Supplementary Fig. 1 . To assess the distribution of particles between different Dis3L2 conformations, we used the following standardized protocol for data processing: (1) particles from the datasets were picked using WarpEM’s neural network-based picker; (2) good particles were then classified in CryoSPARC’s 2D classification; (3) the best 2D classes (as described above) were selected for ab initio reconstruction using five classes (four classes were used for the datasets hairpinD–U 8 #2, hairpinD–U 9 #2 and hairpinD–U 7 #2); and (4) the resulting five ab initio models were used as starting references for heterogeneous refinement using all of the good particles found by the WarpEM picker (Extended Data Fig. 5a ). This allowed us to compare the proportion of particles in the full dataset that contributed to a particular Dis3L2 conformation. To ensure that the class distributions were not a result of RNA degradation, control datasets with EDTA were also analyzed and showed the same overall distribution (Extended Data Fig. 5b ). To ensure that the 2D selection was not introducing bias into the distribution, we also processed the hairpinC–U 12 dataset without 2D classification. In other words, all particles were included in the ab initio and subsequent heterogeneous refinement (Extended Data Fig. 6d,e ). Further independent repeat datasets were collected and processed for Dis3L2 complexes with hairpinD–U 5 , –U 7 , –U 8 and –U 9 (Extended Data Fig. 6c ). Atomic model building and refinement Atomic model building and refinement were done in Coot and Phenix (version 1.18-3855-000) 44 , 45 , 46 . Since the mouse and human Dis3L2 proteins are highly similar in sequence, the mouse Dis3L2 structure (Protein Data Bank (PDB) accession code 4PMW ) was used as a starting reference for model building 17 . Once the reference structure was fit into the cryo-EM map, Real-Space Refine was used, with morphing, simulated annealing and rigid body fit in the first rounds 45 .
After manual building and correction of geometric outliers and clashes using Coot, further rounds of refinement were done using secondary structure restraints, as well as global minimization, refinement of atomic displacement parameters ( B factors) and local grid search. Refinements of complexes with RNA contained further base pair and base stacking restraints in the double-stranded regions. RNA–protein interactions were found with PDBePISA and examined manually. Final model validation metrics are provided in Table 1 . Electrostatics were calculated using PyMol 2.2.3 (Schrödinger) at an ionic strength of 150 mM. All other molecular graphics were performed with UCSF ChimeraX version 0.92, developed by the Resource for Biocomputing, Visualization, and Informatics at the University of California, San Francisco, with support from National Institutes of Health R01-GM129325 and the Office of Cyber Infrastructure and Computational Biology, National Institute of Allergy and Infectious Diseases 47 , 48 . Table 1 Cryo-EM data collection, refinement and validation statistics Presteady-state and quasisteady-state nuclease reactions Nuclease reactions were performed in a temperature-controlled heat block at 20 °C in a total volume of 40 µl. Reaction mixtures containing 20 mM HEPES (pH 7.0), 50 mM NaCl, 5% glycerol, 100 µM MgCl 2 , 1 mM DTT and Dis3L2 were preincubated for 5 min. Presteady-state reactions were started by the addition of 5′ radiolabeled RNA substrate to a final concentration of 1 nM. The concentrations of Dis3L2 were in far excess of the RNA and ranged from 5 to 1,000 nM, as indicated. Measurements were taken at the time points 7 s, 15 s, 30 s, 1 min, 2 min, 3 min, 5 min, 10 min and 15 min, except for long-time-course experiments for which the times are indicated (Extended Data Fig. 7b,c ). Reactions were quenched by the addition of an equal volume of stop buffer (80% formamide, 0.1% bromophenol blue, 0.1% xylene cyanole, 2 mM EDTA and 1.5 M urea). Samples were heated to 95 °C and analyzed on sequencing gels composed of 20% acrylamide and 7 M urea. Gels were exposed to phosphor screens overnight and scanned with a Typhoon FLA 7000 imager (GE Healthcare Life Sciences). Bands were quantified using SAFA footprinting software and the values were normalized for each lane 49 . For a typical reaction with a 34-nucleotide substrate and ten time points, we quantified all species larger than the five-nucleotide end product and obtained approximately 300 data points for each Dis3L2 concentration. Pulse–chase nuclease reactions Pulse–chase reactions were performed under conditions identical to those for presteady-state reactions. Reactions were initiated by the addition of enzyme and allowed to proceed for a defined period of time ( t 1 ). At t 1 , a large excess of cold scavenger RNA (5,000-fold) was added to a final concentration of 5 µM. After incubation for the indicated time ( t 2 ), aliquots were removed and quenched in stop buffer. Samples were analyzed on sequencing gels and processed as for the above-described presteady-state reactions. Calculation of kinetic parameters Kinetic parameters were obtained using a global fit of the data from presteady-state titrations and pulse–chase experiments. Global data fitting was performed using the Kinetic Explorer software (version 8.0; KinTek Global) 50 , 51 . Initial parameters for the global fit were: observed rate constants ( k obs ), processivity values ( P ) and K ½ and k obs max values for each reaction species.
Observed rate constants ( k obs ) were calculated from presteady-state experiments by fitting each experiment separately using the global data-fitting software GFIT 52 to a model that calculates rate constants for a series of irreversible, pseudo-first-order reactions. Initial parameters for GFIT were obtained by fitting the disappearance of a 34-nucleotide substrate to a first-order exponential, y = a 1 × exp(− b 1 × t ) + c , where a 1 is the amplitude, b 1 is the observed rate constant ( k obs ) and c is the offset. Processivity values for individual degradation steps were determined from the distribution of substrate species before and after scavenger addition 30 (Extended Data Fig. 7g ), where processivity ( P ) was defined as P = k f /( k f + k off ). The equations to calculate processivity values from distributions of species were fit using a customized script in the Mathematica software package (Wolfram) 30 . To derive the K ½ and k obs max values, we fit k obs versus Dis3L2 concentration data to a binding isotherm function defined as k obs = ( k obs max × [Dis3L2])/( K ½ + [Dis3L2]). K ½ is the functional equilibrium dissociation constant and k obs max is the maximal observed rate constant at enzyme saturation (Extended Data Fig. 7d ). The data were then evaluated by plotting in GraphPad Prism version 9.1.2 (GraphPad Software). These initial parameters were used as guides in setting up a range of starting values for the elementary rate constants in a global fit to the minimal kinetic model, as shown in Extended Data Fig. 7a . The K ½ values were used to constrain the ratio of dissociation and association rate constants for productive binding by linking the two values as initial parameters. The k obs max values were used to set boundaries on the forward rate constant, k f . Finally, the experimentally determined processivity values ( P ) were used as initial constraints on the ratio of k f and k off . The global data fit was done in an iterative manner by alternating combinations of fixed and floating variables while tracking the overall χ 2 value. The goodness of the fit, R 2 = 0.94, was calculated by plotting the experimental datasets versus the corresponding simulated data from the kinetic model (Extended Data Fig. 7f ). As an additional measure of the overall quality of fit, we performed FitSpace analysis 51 to determine the lower and upper boundaries of each kinetic parameter (Supplementary Tables 1 – 3 ). For a typical substrate, roughly 2,500 individual data points from enzyme titrations and 750 data points from pulse–chase experiments were used to calculate the 120 kinetic parameters that describe degradation of a 34-nucleotide substrate down to five nucleotides. Errors for elementary rate constants represent standard errors of the mean from the global data fitting. Errors for compound rate constants, such as processivity and K ½ , were calculated via the error propagation formulas shown below.

$$\sigma_{K_{1/2}} = K_{1/2}\sqrt{\frac{\sigma_{k_{\mathrm{off}}}^{2}}{k_{\mathrm{off}}^{2}} + \frac{\sigma_{k_{\mathrm{on}}}^{2}}{k_{\mathrm{on}}^{2}}}$$

$$\sigma_{P} = P(1 - P)\sqrt{\frac{\sigma_{k_{\mathrm{off}}}^{2}}{k_{\mathrm{off}}^{2}} + \frac{\sigma_{k_{\mathrm{f}}}^{2}}{k_{\mathrm{f}}^{2}}}$$

Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
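To make the isotherm-fitting and error-propagation steps above concrete, here is a minimal Python sketch (our illustration; the paper used GFIT, KinTek Explorer and Mathematica). The titration values below are hypothetical placeholders, and the function names are ours.

```python
# Sketch of two steps from the Methods: fitting k_obs versus [Dis3L2] to the
# binding isotherm k_obs = k_obs_max*[E]/(K1/2 + [E]), and propagating errors
# into the compound parameters K1/2 and P using the formulas given above.
import numpy as np
from scipy.optimize import curve_fit

def isotherm(conc_nM, k_obs_max, k_half):
    return k_obs_max * conc_nM / (k_half + conc_nM)

# Hypothetical enzyme titration (placeholder numbers, not data from the paper)
conc = np.array([5, 10, 25, 50, 100, 250, 500, 1000.0])            # nM Dis3L2
k_obs = np.array([0.08, 0.15, 0.29, 0.42, 0.55, 0.66, 0.70, 0.73])  # min^-1

popt, pcov = curve_fit(isotherm, conc, k_obs, p0=[1.0, 50.0])
print(f"k_obs_max = {popt[0]:.2f} min^-1, K1/2 = {popt[1]:.0f} nM")

def sigma_k_half(k_on, s_on, k_off, s_off):
    """sigma_K1/2 = K1/2 * sqrt((s_off/k_off)^2 + (s_on/k_on)^2), K1/2 = k_off/k_on."""
    k_half = k_off / k_on
    return k_half * np.sqrt((s_off / k_off) ** 2 + (s_on / k_on) ** 2)

def sigma_p(k_f, s_f, k_off, s_off):
    """sigma_P = P*(1-P) * sqrt((s_off/k_off)^2 + (s_f/k_f)^2), P = k_f/(k_f+k_off)."""
    p = k_f / (k_f + k_off)
    return p * (1 - p) * np.sqrt((s_off / k_off) ** 2 + (s_f / k_f) ** 2)
```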
Data availability Structure coordinates and cryo-EM data have been deposited in the PDB and Electron Microscopy Data Bank (EMDB), respectively. The structures can be found under the following accession numbers: PDB 8E27 and EMDB-27827 (RNA-free HsDis3L2); PDB 8E28 and EMDB-27828 (HsDis3L2 in complex with hairpinA–GCU 14 ); PDB 8E29 and EMDB-27829 (HsDis3L2 in complex with hairpinC–U 12 ); and PDB 8E2A and EMDB-27830 (HsDis3L2 in complex with hairpinD–U 7 ). The cryo-EM map of the low-resolution HsDis3L2 complex with hairpinE–U 7 has been deposited in the EMDB under accession code EMDB-27831. The structure of mouse Dis3L2 (PDB 4PMW) was used as a reference and for comparisons. Source data are provided with this paper.
RNAs are having a moment. The foundation of COVID-19 vaccines, they've made their way from biochemistry textbooks into popular magazines and everyday discussions. Entire companies have been launched that are dedicated to RNA research. These tiny molecules are traditionally known for helping cells make proteins, but they can do much more. They come in many shapes and sizes, from short and simple hairpin loops to long and seemingly tangled arrangements. RNAs can help activate or deactivate genes, change the shape of chromosomes, and even destroy other RNA molecules. Unfortunately, when RNA malfunctions, it can result in cancer and developmental disorders. It takes a lot to keep RNAs in check. Our cells have molecular "machines" that eliminate RNAs at the right time and place. Most come equipped with a "motor" to generate the energy needed to untangle RNA molecules. But one machine in particular, named Dis3L2, is an exception. The enzyme can unwind and destroy RNA molecules on its own. This ability has puzzled scientists for years. Now, Cold Spring Harbor Laboratory (CSHL) biochemists have pieced together what's happening. It turns out Dis3L2 changes shape to unsheathe an RNA-splitting wedge. Using state-of-the-art molecular imaging technology, CSHL Professor and HHMI Investigator Leemor Joshua-Tor and her team captured Dis3L2 at work. They fed the molecular machine hairpin snippets of RNA and imaged it getting "eaten" at various stages. After the machine had chewed up the tip of the RNA, it swung open a big arm of its body to peel apart the hairpin and finish the job. "It's dramatic," Joshua-Tor says. "We know things change conformation. They buckle. But opening something out like that and exposing a region in this way—we didn't quite see something like this before." [Image caption: Katarina Meze, the former graduate student in the Joshua-Tor lab who led this study, standing next to the lab's cryo-EM imaging machine. The machine allows scientists to freeze molecules in place to study their structure and geometry. Credit: Joshua-Tor lab/CSHL] Joshua-Tor's team then began tinkering with the Dis3L2 machine, searching for the gears and parts enabling it to unwind and destroy RNA. The researchers narrowed it down to a protruding wedge left unsheathed after the machine shifted shapes. If the researchers removed the wedge, Dis3L2 could no longer untangle the RNA hairpin, putting the machine out of commission. The findings reveal a surprising new way that RNA-controlling machines in our cells execute their tasks. Rather than solid structures, these molecular workhorses need to be considered malleable and versatile. This new outlook may help scientists develop better treatments for diseases and disorders caused by RNA gone haywire. "We have to start thinking about these things as much more dynamic entities," Joshua-Tor says, "and take that into account when we are designing therapeutics." The findings are published in the journal Nature Structural & Molecular Biology.
10.1038/s41594-023-00923-x
Biology
Lead ammunition polluting Argentina
Marcela Uhart et al. Lead pollution from hunting ammunition in Argentina and current state of lead shot replacement efforts, Ambio (2019). DOI: 10.1007/s13280-019-01178-x Journal information: AMBIO
http://dx.doi.org/10.1007/s13280-019-01178-x
https://phys.org/news/2019-04-ammunition-polluting-argentina.html
Abstract Waterfowl hunting in Argentina is a profitable industry that attracts hunters from all over the world. Most hunting occurs as high-end hunting tourism, through which registered outfitters service predominantly foreign clients on private lands. Lead pollution from hunting ammunition is increasingly recognized as a significant local problem, impacting wildlife, aquatic and terrestrial habitats, and extending to vulnerable human rural communities. Regulatory frameworks that restrict lead shot use are a budding success story but remain challenged by their constrained geographic range and limited compliance rooted in unavailable nontoxic ammunition. Changes in hunting practices in Argentina are long overdue. Introduction Lead pollution from hunting ammunition is a global environmental health problem for which there is a simple and scientifically validated solution, but also an overwhelming resistance to change (Mateo et al. 2014 ; Arnemo et al. 2016 ; Hampton et al. 2018 ). Factors associated with the controversy surrounding lead shot replacement have been addressed elsewhere and remain a standing, unresolved issue of significant health and conservation impact (Friend et al. 2009 ; Cromie et al. 2015 ; Kanstrup 2015 ). Despite worldwide evidence of wildlife and human lead poisoning over nearly a century, rarely have nations acted until local data were garnered and local toxicity demonstrated (Avery and Watson 2009 ; Mateo 2009 ). Hunting pollution was a foreseen yet poorly addressed issue in Argentina until a decade ago when concerns over its magnitude triggered long-delayed research. In this paper, we provide a brief overview of hunting and associated lead pollution in Argentina, describe recent progress in restrictions to toxic ammunition use, highlight remaining obstacles, and recommend actions to overcome these challenges. Hunting in Argentina Sport hunting is significant in Argentina, yet there is a lack of publicly available data to fully assess its extent. An unpublished study conducted by Caselli et al. in 2011 collated information for 16 provinces from government websites ( n = 11) and/or from official responses to email surveys ( n = 5). The remaining seven provinces of the 23 in the country did not respond to the survey and had no accessible information on their websites. At that time, only seven (30%) provinces displayed hunting regulations on their websites, and 3 (13%) had maps showing areas where hunting was and was not allowed. Seven (30%) provinces kept records of annual hunting licenses sold. In terms of government control of hunting, overlap in regulatory mandates between offices charged with natural resource management and hunting control existed in seven provinces (30.4%). In nine provinces (39.1%), government responsibilities were based in different offices. In 2011, small game hunting was allowed for a total of 53 native and exotic species and distributed in 16 provinces, with seasons ranging from 1 to 3 months in 6 (37.5%), 4–9 in 8 (50%) and 10–12 in 2 (12.5%). Large game hunting was permitted for 19 native and exotic species, in 12 provinces: during 1–3 months in 1 province (8.3%), 4–9 in 6 (50%), 10–12 in 3 (25%), and 2 (16.6%) provinces had variable seasons per species. Ten provinces allowed waterfowl hunting with daily bag limits for different species ranging from 5 ducks in 2 provinces, to 10 in 3, 12 in 2, and 15 in the remaining 3 provinces.
Waterfowl hunting in Argentina Waterfowl hunting in Argentina is a profitable industry that attracts hunters from all over the world (Zaccagnini 2002 ). It has grown considerably since the 1990s (Zaccagnini and Venturino 1992 ; Zaccagnini 2002 ). Most hunting occurs as high-end hunting tourism, through which registered outfitters service predominantly foreign clients on private lands. Footnote 1 While ten provinces allow waterfowl hunting, the largest numbers of hunters converge on the wetlands of Santa Fe, Corrientes and Entre Ríos provinces. These sites harbor a wide diversity of waterfowl including species protected by the Convention on Migratory Species (CMS) such as flamingos, ducks, swans and plovers, and overlap with several Important Bird Areas (IBA) and Ramsar sites (Benzaquén et al. 2017 ). Although hunting quotas are deemed conservative, there is a paucity of information on waterfowl population status and trends, enforcement is weak and information on registered hunters is often unavailable. Lead toxicity from spent ammunition in waterfowl The massive use of lead ammunition in Argentinean wetlands is relatively recent, compared to Europe and North America. Based on government data, hunting added at least 56 tons of ammunition lead to wetlands in Santa Fe province alone between 2007 and 2009. Non-quantified amounts, but presumably similar levels, are also deposited in other waterfowl hunting hotspots on an annual basis. Since 2007, we have undertaken collaborative studies between local universities (Universidad Nacional del Centro de la Provincia de Buenos Aires - UNICEN, Universidad Nacional del Litoral, and Universidad Nacional del Sur) and various nongovernmental organizations in the provinces of Santa Fe, Corrientes and Buenos Aires. Between 2007 and 2013 we examined 455 hunter-killed ducks for signs of lead exposure. Our study included only authorized species collected during the hunting season by registered hunters. Specifically, we studied the whistling duck ( Dendrocygna bicolor ), white-faced tree duck ( D. viduata ), black-bellied whistling-duck ( D. autumnalis ), rosy-billed pochard ( Netta peposaca ) and Brazilian duck ( Amazonetta brasiliensis ). We found prevalences of lead pellet ingestion that varied between 7.6 and 50%, and lead accumulated in the bones of 100% of these birds due to long-term exposure (Ferreyra et al. 2009 , 2014 ; Natalini et al. 2014 ). We also documented toxic levels of lead in liver (60% prevalence) and blood of ducks (28% of 96 live ducks), which were associated with poor body condition and blood abnormalities (Ferreyra et al. 2014 , 2015 ). Lead pollution from spent ammunition in wetland environments During the same period, we also documented accumulation of lead in natural wetlands and rice fields where waterfowl hunting is practiced. We found spent shot densities in the top 15 cm of wetland sediment as high as 141 pellets/m 2 , a value that resembles the most contaminated areas in Europe (Pain 1990 ; Mateo et al. 1997 ; Romano et al. 2016 ). Likewise, densities in rice fields reached 38 pellets/m 2 (Romano et al. 2016 ). Lead dissolved in sediment amounted to 79 ppm in the former, and 14 ppm in the latter (Romano et al. 2016 ). Regrettably, there is no regulation on acceptable contaminant levels for areas under conservation in Argentina.
In the case of agricultural soils (only applicable to rice fields in this case), the tolerance level for lead is 375 mg/kg (equivalent to 375 ppm) (Regulatory Decree 831/93, Law on hazardous waste 24051). However, this category does not include productive systems with periodic flooding such as rice crops in which surface freshwater can transport the toxic elements. Although we did not measure lead in rice-field water, in natural wetlands, we found lead levels between 0.005 and 0.008 ppm. These exceeded acceptable levels for the protection of aquatic life in superficial freshwater in Argentina (0.001 ppm; Hazardous Waste Law 24051), and were near the limits for livestock and irrigation (0.002 and 0.004 ppm, respectively). Finally, we found maximum lead concentrations of 10.1 ppm in several plant species that are regularly consumed by fauna and domestic livestock in wetlands where hunting occurs. Moreover, in a small sample of preharvest rice crops, we found lead mainly in the roots (up to 22 ppm), 5.6 ppm in the stems, and levels decreasing toward the grain where the average values were < 0.6 ppm (maximum acceptable value in edible plants 2 ppm, Argentine Food Codex, Article 1546, 17.9.85) (Romano et al. 2016 ). The tip of the iceberg Our work provides evidence of wetland pollution associated with high acute and chronic exposure to lead from spent ammunition in waterfowl in Argentina. Moreover, the levels found match reports from other parts of the world with severe contamination problems and worrisome impacts on their avifauna (Mateo 2009 ). We, however, acknowledge that our studies represent only a small fraction of a significantly greater problem, with consequences and impacts at the ecosystem level that extend far beyond waterfowl, wetlands and shotgun ammunition. In other areas of the country, for example, exposure to lead from bullets has been documented in the near-threatened Andean condor ( Vultur gryphus ) (Birdlife International 2017 ). Lambertucci et al. ( 2011 ) found lead levels as high as 21.1 ppm in condor feathers from northern Patagonia. A recent study by Wiemeyer et al. ( 2017 ) found blood lead levels ranging from 0.2 to 1400 ppm in a set of 76 free-ranging condors from across Argentina submitted for rehabilitation. Additionally, through X-ray examination they identified 15 of 62 (24.2%) condors with ammunition fragments in their bodies (Wiemeyer et al. 2017 ). Whereas secondary lead poisoning has been documented in numerous predator and scavenger bird species, particularly raptors, there are few studies involving other taxonomic groups (Tranel and Kimmel 2009 ). In an exploratory study in Argentina, Rago et al. ( 2012 ) noted lead in blood (0.005–0.066 ppm) in 12/16 yellow anacondas ( Eunectes notaeus ) from Corrientes province, a core waterfowl hunting area, versus no lead in 30 anacondas from a hunting-free zone (Formosa province). Moreover, they observed significantly better health parameters (i.e., body mass, blood cells, parasitism and plasma chemistries) in anacondas from Formosa. They hypothesized that lead levels and poor health in the Corrientes anacondas were associated with dietary intake of contaminated waterfowl. Yellow anacondas are considered vulnerable (Giraudo et al. 2012 ), and while the effects of lead exposure in this species are unknown, other reptiles have shown reproductive failure, anorexia, weight loss, poor growth, lethargy, and death when fed ammunition-contaminated prey (Camus et al. 1998 ; Lance et al. 2006 ).
Unassessed pollution from dove hunting When dove hunting is considered, lead entering the environment rises to astounding (but undocumented) levels in Argentina. Dove shooting is currently authorized in 11 provinces, two of which allow year-round hunting with no bag limits for the species considered agricultural pests, namely the eared dove ( Zenaida auriculata ) and rock dove ( Columba livia ). There is some variation for Picazuro pigeon ( Patagioenas picazuro ) and spot-winged pigeon ( Patagioenas maculosa ) for which quotas range between 50 and 500 per hunter per day over 6-month periods. Lodges commonly promise 1,000–2,000 cartridges per hunter per day. With approximately 10 000 hunters visiting the “dove shooting capital” Córdoba province annually, Footnote 2 a conservative estimate is that 210 to 480 tons of lead are added to the environment per year (based on use of 21 or 24 g cartridges: 10 000 hunters × 1,000–2,000 cartridges × 21–24 g of shot ≈ 210–480 tons). Except for Santa Fe, which restricts daily quotas to 50 eared doves when lead ammunition is used (since 2016 Footnote 3 ), no such regulations on lead-ammunition use exist elsewhere for this practice. Despite the magnitude of this industry, very few studies have documented lead pollution from dove hunting to date. Notwithstanding, there is increasing evidence that it is substantial (Rubio et al. 2014 ) and that it implies a significant risk for animal and human health (Wannaz et al. 2012 ; Salazar et al. 2012 ). Impacts on public health Many lodges in Argentina regularly donate hunted game (ducks, doves) to the rural poor in the vicinity of their hunting grounds. Some lodges even advertise this as a community service. Footnote 4 In 2010, Mónica Parvellotti, a teacher from San Javier, Santa Fe province, publicized her concerns about the frequency with which her students gathered dead ducks left on the shooting fields and took them home for supper. Footnote 5 Acting on these cues, a preliminary study led by Drs. Caselli (UNICEN) and Loyácono (Hospital de Clínicas, Universidad de Buenos Aires) in 2015 described lead exposure in children 1–12 years in our core study area in Santa Fe. Sixty-two percent of children who regularly ate hunted game (38/61) tested positive for lead, with average blood lead levels > 0.09 ppm (maximum 0.28 ppm) (Caselli et al. unpubl.). Moreover, lead was also found in all baby teeth donated by 6–11-year-old school children (n = 38) between 2015 and 2017. Levels ranged from 0.06 ± 0.01 to 1.87 ± 0.37 µg/g Footnote 6 (Caselli et al. unpubl.). Of 88 surveyed families in the blood lead study, equal proportions reported feeding on self-hunted versus hunter-donated ducks. Sixty-six percent of households indicated they removed pellets and the shot trace in game meat before cooking. Pellet recovery at the time of eating averaged 3 pellets per dish, with a maximum of 10 (Caselli et al. unpubl.). Toward lead-ammunition replacement A wave of positive outcomes followed awareness of the severity of ammunition lead pollution in several Argentine provinces stemming from our research. For example, the government of Santa Fe province contributed funding for environmental studies, and the Santa Fe hunters’ association Footnote 7 facilitated waterfowl samples and covered some diagnostic expenses. Also, during this time, several participatory workshops convened all major stakeholders to discuss the problem of lead toxicity and develop a roadmap for transition to nontoxic ammunition.
Finally, in November 2011, our team hosted the first national workshop on lead-ammunition toxicity and a nontoxic ammunition shooting clinic, co-convened by the Federal Wildlife Agency of Argentina and the Environment Secretariat of Santa Fe Province, and sponsored by the Argentina Firearms Registry RENAR, the Argentina Hunters Association, and several local universities. More than 80 people attended the 2-day event, including representatives from provincial governments (wildlife and environment agencies), hunters, hunting associations, hunting outfitters, and ammunition manufacturers. Instrumental to this effort were the two invited instructors from Denmark: Niels Kanstrup, a wildlife manager and head of the Danish Hunting Association; and Lars T. Andersen, a shooting instructor and ballistics specialist. As has been the case elsewhere in the world, having hunters speak to hunters proved key to engaging attendees and countering myths about non-lead-ammunition performance (Friend et al. 2009 ). Consensus on the need to transition to nontoxic ammunition was reached, yet a critical obstacle was identified: local availability, at a reasonable price and volume, of non-lead ammunition. Regulatory and transition actions in Argentina In 2011, the provinces of Santa Fe and Córdoba established regulations to limit lead-ammunition use. A pioneer in this matter, Santa Fe enacted gradual restrictions on lead shot use in wetlands, completing the total ban by 2016. Footnote 8 Moreover, current legislation in that province encourages progressive substitution of lead ammunition in all forms, for all species and habitats. Footnote 9 Unlike Santa Fe, which incorporated lead bans into its hunting legislation, Córdoba prohibited the use of lead shot in wetlands through Hazardous Waste regulations. Footnote 10 Although this restriction remains in force, it does not appear annually in the small game hunting regulations which set quotas and species. Buenos Aires province initiated an exploratory substitution process in 2013 Footnote 11 , but no progress has been made since. In 2014, the federal government adhered to a global voluntary resolution to eliminate lead ammunition by 2020 (Convention on Migratory Species - UNEP 2014). In 2016, the Federal Council for the Environment (Consejo Federal de Medio Ambiente COFEMA) commended the path taken by Santa Fe province and declared lead-ammunition replacement a national environmental priority. Footnote 12 In the last 2 years, government agencies have been negotiating with local ammunition makers and retailers to facilitate non-lead-ammunition availability through importation and/or local manufacture. Progress has been slow, but some alternatives will likely reach the local market during 2019 (Roberti, pers. comm. Footnote 13 ). Unresolved challenges Lack of availability of nontoxic alternatives in the country remains the biggest limitation to lead-ammunition replacement in Argentina. Not only does this void leave a huge environmental and health problem unresolved, but it hijacks hard-earned advances to date and directly conflicts with compliance with the existing regulations (see Santa Fe and Córdoba, previous section). It also creates unnecessary antagonism with the hunting sector, which feels threatened by the lack of options should it comply with current restrictions on lead use. It likewise hampers regulatory efforts in a scenario of already weakened enforcement due to insufficient state resources.
The delay in securing local alternatives to lead is a clear loss of opportunity at a time when there is broad and hard-earned consensus on the imminent need for change. As has been noted in Europe (Kanstrup and Thomas 2019 ), availability of non-lead ammunition in Argentina is also constrained by poor demand and ill-enforced regulation. An added factor in this country is financial instability, which inhibits investment by undermining market predictability. Nonetheless, a commercial activity catering overwhelmingly to affluent foreign hunters that benefit from local currency weakness should be capable of overcoming cost-related obstacles and ensuring some degree of demand stability that would make local manufacture a viable option. To some extent, the hunting constituency's sluggishness and nonchalance contribute to the delay in effectively replacing lead shot in Argentina. Complementary bottom-up approach In our study sites, communities are acutely aware of daily shootings from hunting, but much less aware of the negative impacts that current unsustainable practices are having on their natural surroundings and their own health. Hence, over time we have supplemented our advocacy efforts toward policy makers and hunters with efforts to build a knowledgeable community-based constituency. The expectation is that these empowered and conservation-minded communities will then push the lead-ammunition agenda forward from a genuine and locally embedded concern about their immediate environs. Our community interventions take place under the Community-based Territory Conservation Program (CTCP), Footnote 14 conceived and implemented by Dr. A. Caselli and a multidisciplinary core team from the Schools of Veterinary Medicine and Exact Sciences, UNICEN. The program’s focus is on wetlands at risk from anthropogenic actions, including but not limited to unsustainable hunting. Wetlands are used as open classrooms to develop ecological literacy, thus positively reinforcing community ownership and enabling explicit participatory and community-driven interventions to halt pollution and biodiversity loss (Caselli et al. 2018 ). Conclusions and recommendations Waterfowl hunting in Argentina is a profitable industry that attracts hunters from all over the world. Most hunting occurs as high-end hunting tourism, through which registered outfitters service predominantly foreign clients on private lands. Lead pollution from hunting ammunition is increasingly recognized as a significant local problem, impacting wildlife, aquatic and terrestrial habitats, and extending to vulnerable human rural communities. Regulatory frameworks that restrict lead shot use are a budding success story but remain challenged by their constrained geographic range and limited compliance rooted in unavailable nontoxic ammunition. It is therefore of the highest priority to: 1. Grant state policy status to halting lead toxicity from spent ammunition. Recent inclusion of this issue in a national law for biodiversity protection soon to be submitted to Congress is hopeful progress. 2. Enable and expedite importation of nontoxic ammunition or bulk steel pellets to facilitate cartridge manufacture in Argentina. Alternatives to lead ammunition are urgently needed in the local market to validate existing regulatory efforts. 3. Encourage and entice local manufacture of nontoxic ammunition. Once available at reasonable cost, regulations could be expanded beyond current habitat and provincial boundaries. 4.
Act on the recommendation of the Federal Council for the Environment (COFEMA) to prioritize replacement of lead ammunition nationwide. Convene provincial governments to define courses of action and the terms for transition in each jurisdiction. 5. Through provincial administrations, launch awareness campaigns targeting sport hunting associations and hunters. Make local efforts to ban lead shot and existing regulations well known to foreign hunters visiting Argentina. Locally, provide evidence that counters unfounded myths on non-lead shot performance. 6. Integrate the public health sector in efforts to ban lead ammunition. Increase awareness of ill-effects to human health through dietary intake and the need to avoid exposure through all pathways, particularly in children. 7. Expand, deepen, and communicate research on lead pollution at all scales through a One Health approach, extending to human health, environment, wildlife, and agriculture sectors. 8. Reinforce interdisciplinary and participatory forums where local characteristics of lead-ammunition pollution and substitution are explored and adequately addressed to facilitate behavior change. 9. Empower local communities to act on behalf of their environmental concerns and address threats like lead pollution and biodiversity loss in their immediate surroundings, driving policy change from the bottom up. 10. Inform the general public. Stress availability of solutions and remediation via lead-ammunition bans and timely transition to sustainable and nontoxic options. In the specific context of Argentina, where hunting is minimally for subsistence and largely for sport, lead pollution stands out as a uniquely accessible and remediable environmental problem. This is one issue for which there is a known solution with proven chances of success if constituents understand the risks at stake and are willing to contribute individually to the collective wellbeing by switching to nontoxic options. Moreover, the recommended shift is neither opposed to progress and economic gain, nor does it disagree with the needs and livelihoods of the main constituents, since hunting itself is not contested, but rather urged to adapt to current societal, bioethical, and sustainability standards (Kanstrup et al. 2018 ). Changes in hunting practices in Argentina are long overdue. Notes: Resolution 123/16; Inductively Coupled Plasma Mass Spectroscopy (ICP-MS), Comisión Nacional de Energía Atómica; Cámara de empresas de turismo cinegético y pesca deportiva de la provincia de Santa Fe; Resolution 123/16; Resolution 10/19; Resolution No. 1115/2011; Resolution No. 63/13; Resolution 7/2016; Servicios y Aventuras, Tucumán, Argentina. March 20th, 2019.
Pollution from lead ammunition causes environmental health problems in Argentina, and progress is underway to find viable replacements for lead shot, according to an overview of lead pollution from hunting in the country. Argentina's pioneering awareness and attention to this problem may help others address this global health issue that threatens humans, animals and landscapes. The report, compiled by the University of California, Davis' One Health Institute and Universidad Nacional del Centro de la Provincia de Buenos Aires in Argentina, was published April 12 in the journal Ambio. "Lead pollution is one of the very few environmental problems for which there is a simple solution: Switch from lead to nontoxic ammunition," said lead author Marcela Uhart, a wildlife veterinarian with the UC Davis One Health Institute and director of the Latin America Program within UC Davis' Karen C. Drayer Wildlife Health Center. "We're not saying 'Don't hunt.' We're not asking anyone to change their livelihood or lifestyle. It just needs to be done sustainably, without introducing poison to the land, water, animals and people who live here." High-end hunting Hunting is a lucrative industry in Argentina, where registered outfitters cater to mostly foreign clients seeking high-end hunting tourism experiences on private lands, particularly for doves and ducks. Uhart said hunters and outfitters in the country recognize the problem, and some have been working closely with the researchers and government over the past decade to find viable solutions. Many have said they would switch to nontoxic shot if it became more easily available and affordable. A student in Argentina counts birds at a wetland. Waterfowl, wetlands and children near hunting sites in the country have been found to be exposed to lead from ammunition. Credit: Linda C. Alvarez Discussions underway in Argentina indicate that nontoxic steel shot could be manufactured locally as soon as this spring or summer. This would be a very important step forward, as it could help drive down the cost of nontoxic ammunition and encourage the shift away from lead shot. Lead in the land, animals and people Roughly 10,000 hunters visit the "dove shooting capital" of Córdoba province each year, adding between 210 and 480 tons of lead to the environment. Conservative estimates suggest at least 56 tons of lead from ammunition were added to wetlands in Santa Fe province between 2007 and 2009, one of Argentina's major sites for waterfowl hunting. Previous studies, referenced in the report, found accumulations of lead from spent shot in: The bones and bodies of hunter-killed ducksWetlands and rice fields where waterfowl hunting is practicedPlants regularly eaten by wildlife and domestic livestock in these areasThe blood and baby teeth of Argentinian children who ate hunted game Lead, a known toxicant that affects humans and animals, accumulates in the body over time, causing severe systemic disorders. Children are particularly vulnerable and can suffer permanent and severe health effects, particularly in their central nervous system. In November 2011, the authors hosted the first national workshop on nontoxic shot in Argentina, which was attended by hunters and ammunition manufacturers. Here, hunters at a shooting range demonstrate ballistics of nontoxic shot during the workshop, which was held to help dispel myths about the efficacy of nontoxic ammunition. Credit: M. 
Levels of lead referred to in the paper for wildlife and wetlands match reports from other parts of the world with severe contamination problems, and represent just a fraction of the problem. Exposure to lead from bullets has been documented in the near-threatened Andean condor in Argentina, as well. The impacts of lead ammunition pollution on the California condor, demonstrated in part by UC Davis and its partners, were a major reason that state issued a full lead ammunition ban, which goes into effect this July.

Science to policy

In an example of science leading to policy change, Argentina has taken major steps over the past decade toward addressing this issue. This includes working with the authors to engage key stakeholders at multiple meetings, shooting clinics and through conservation-education community outreach efforts. Some provinces banned lead shot used for waterfowl hunting as soon as the authors shared evidence of the problem. "We want to commend Argentina for being at the frontier of addressing this while recognizing there is a lot more to be done," Uhart said. "This is a global environmental problem that is serious but avoidable. It's a real One Health example, impacting everyone—humans, the environment and wildlife. But we can change it right now, and there is proof it has worked." The paper provides 10 recommendations for policymakers in Argentina to prioritize. They include:

- Grant state policy status to halting lead toxicity from spent ammunition.
- Encourage and entice local manufacturing and availability of nontoxic ammunition.
- Educate foreign hunters visiting Argentina on local efforts to ban lead shot and existing regulations.
- Increase awareness of the ill effects on human health through dietary intake and the need to avoid exposure through all pathways, particularly in children.
10.1007/s13280-019-01178-x
Medicine
Imbalance between serotonin and dopamine in social anxiety disorder
Olof R. Hjorth et al. Expression and co-expression of serotonin and dopamine transporters in social anxiety disorder: a multitracer positron emission tomography study, Molecular Psychiatry (2019). DOI: 10.1038/s41380-019-0618-7 Journal information: Molecular Psychiatry
http://dx.doi.org/10.1038/s41380-019-0618-7
https://medicalxpress.com/news/2020-01-imbalance-serotonin-dopamine-social-anxiety.html
Abstract Serotonin and dopamine are putatively involved in the etiology and treatment of anxiety disorders, but positron emission tomography (PET) studies probing the two neurotransmitters in the same individuals are lacking. The aim of this multitracer PET study was to evaluate the regional expression and co-expression of the transporter proteins for serotonin (SERT) and dopamine (DAT) in patients with social anxiety disorder (SAD). Voxel-wise binding potentials (BP ND ) for SERT and DAT were determined in 27 patients with SAD and 43 age- and sex-matched healthy controls, using the radioligands [ 11 C]DASB (3-amino-4-(2-dimethylaminomethylphenylsulfanyl)-benzonitrile) and [ 11 C]PE2I (N-(3-iodopro-2E-enyl)-2beta-carbomethoxy-3beta-(4′-methylphenyl)nortropane). Results showed that, within transmitter systems, SAD patients exhibited higher SERT binding in the nucleus accumbens while DAT availability in the amygdala, hippocampus, and putamen correlated positively with symptom severity. At a more lenient statistical threshold, SERT and DAT BP ND were also higher in other striatal and limbic regions in patients, and correlated with symptom severity, whereas no brain region showed higher binding in healthy controls. Moreover, SERT/DAT co-expression was significantly higher in SAD patients in the amygdala, nucleus accumbens, caudate, putamen, and posterior ventral thalamus, while lower co-expression was noted in the dorsomedial thalamus. Follow-up logistic regression analysis confirmed that SAD diagnosis was significantly predicted by the statistical interaction between SERT and DAT availability, in the amygdala, putamen, and dorsomedial thalamus. Thus, SAD was associated with mainly increased expression and co-expression of the transporters for serotonin and dopamine in fear and reward-related brain regions. Resultant monoamine dysregulation may underlie SAD symptomatology and constitute a target for treatment. Introduction Social anxiety disorder (SAD) is a highly common psychiatric condition associated with anxious and avoidant behavior in any situation where the individual is subject to scrutiny or becomes the center of attention. This is often a lifelong problem affecting the personal as well as the professional domain [ 1 ]. The biological basis of this disorder is still largely unknown although functional neuroimaging studies of SAD have reported aberrant activation and functional connectivity of the amygdala, and other nodes of the brain’s fear network, in response to socially threatening stimuli [ 2 ]. Serotonin has long been implicated in the regulation of mood and anxiety [ 3 , 4 ] and because this neurotransmitter is a major target for pharmaceuticals that are effective for SAD [ 5 ] it may be of particular etiological relevance. In earlier nuclear imaging research, patients with SAD exhibited reduced serotonin-1A receptor binding in limbic and paralimbic regions including the amygdala and dorsal raphe nuclei [ 6 ]. Moreover, a PET study from our group reported increased presynaptic serotonin synthesis in the amygdala, raphe nuclei, striatum, hippocampus, and anterior cingulate cortex (ACC) [ 7 ] and these results were essentially replicated in a separate cohort of patients and controls [ 8 ]. Interestingly, amygdala serotonin synthesis capacity correlated with social anxiety symptom severity [ 7 ] and was reduced, concomitantly with stress-related amygdala activation, after successful pharmacological treatment [ 9 ]. 
There are also two previous nuclear imaging studies on the serotonin transporter (SERT), both noting higher SERT binding potential (BP) in SAD patients in the thalamus [ 7 , 10 ] and additionally in the raphe nuclei region, striatum, and insula [ 7 ]. The latter findings were demonstrated by use of PET and [ 11 C]DASB (3-amino-4-(2-dimethylaminomethylphenylsulfanyl)-benzonitrile), a highly selective ligand to the SERT [ 11 ]. Based on these results, we previously suggested that SAD entails an overactive presynaptic serotonergic system [ 7 , 8 ]. While mesocortical [ 12 ] and mesolimbic [ 13 ] dopaminergic neurons are also sensitive to aversive stimuli, the dopamine system has a crucial role in driving prosocial behavior, reward processing, positive affect, and approach motivation [ 14 , 15 , 16 , 17 ]. Because it has been reported that SAD is associated with diminished pleasure from social activity and social-motivational dysfunction, an etiologic role for dopamine has been suggested in this disorder [ 18 ]. However, only a few nuclear imaging studies have examined putative dopamine abnormalities in SAD [ 19 ]. Altered striatal [ 20 , 21 , 22 ] and extra-striatal [ 23 ] dopamine D2 binding have been evaluated but findings have been mixed. Results from studies targeting the dopamine transporter (DAT) are also inconclusive, noting either increased [ 10 ] or decreased [ 24 ] transporter availability in SAD as well as no difference between SAD patients and healthy controls [ 22 ]. Interestingly, Warwick et al. reported increased striatal DAT binding after the treatment of SAD with the SSRI escitalopram [ 25 ] suggesting serotonergic influences on dopamine signaling. It should be noted that all previous nuclear imaging studies targeting the DAT in SAD have used SPECT with ligands that are not specific for DATs. In comparison, PET images have higher resolution than SPECT and radioligands that bind highly selectively to DAT, like [ 11 C]PE2I (N-(3-iodoprop-2E-enyl)-2b-carbomethoxy-3b-(4-methyl-phenyl)nortropane), can now be used to improve data quality [ 26 ]. Biopsychological theories of personality have proposed that approach-avoidance conflicts in social situations, a prominent feature of SAD symptomatology, reflect the balance between serotonin and dopamine signaling in neural pathways underlying fear and reward [ 27 , 28 , 29 , 30 ]. Consistent with these theoretical models, pharmacological and anatomical studies support that the serotonin and dopamine systems have reciprocal functional influences on each other [ 31 , 32 , 33 , 34 ]. At the anatomical level, serotonergic cell bodies in the raphae nuclei project to the striatum where their axon terminals are in close proximity to dopamine cells [ 35 ] and, in rats, there is evidence of a direct serotonergic inhibitory input from the median raphe nucleus to the dopaminergic substantia nigra neurons [ 36 ]. However, it is not known if serotonin-dopamine interactions are involved in the pathophysiology of anxiety disorders like SAD, and nuclear imaging studies directly addressing this topic are therefore needed. Transporter functions may be particularly relevant targets for such studies [ 37 ]. For example, SERT availability predicts amygdala reactivity in healthy volunteers [ 38 ] and polymorphisms in the genes encoding SERT and DAT, influence amygdala responsiveness in patients with SAD [ 39 , 40 , 41 ]. 
The principal aim of the present multitracer PET study was to examine the intra-regional co-expression of serotonin and DATs in patients with SAD, as compared with healthy controls, by estimating the statistical interaction effect of SERT/DAT binding in fear- and reward-relevant brain regions. We used PET with [ 11 C]DASB and [ 11 C]PE2I as radiotracers to determine if there is a different brain SERT/DAT-balance in the two groups. In addition, differences between SAD patients and controls were evaluated within the two monoamine transporters separately. We here sought to replicate our earlier results on increased SERT availability in SAD [ 7 ] in a new and larger sample of patients and controls. Due to the contradictory results of earlier SPECT studies on DAT, we were also interested in putative SAD-related aberrations in DAT binding when using a more sensitive method, i.e., [ 11 C]PE2I PET. Materials and methods Participants Twenty-nine patients with SAD and forty-three healthy controls (HC) underwent [ 11 C]DASB and [ 11 C]PE2I PET imaging. Two patients with SAD were excluded from all analyses due to magnetic resonance imaging (MRI) contraindications and withdrawing from the study before completed MRI, respectively, leaving 27 patients (17 men, 10 women; mean ± SD age, 31.10 ± 10.32 years), and 43 HC (23 men, 20 women; 32.81 ± 11.56 years) in the analyses. In addition, one male HC was excluded from all SERT-related analyses because of no DASB signal in large areas of the selected regions-of-interest (ROIs). The mean duration of illness was estimated at 19.6 years. All participants were right-handed except three participants in the control group. To our knowledge, there is no evidence that handedness affects SERT/DAT distribution. The groups did not differ from each other in age ( t = 0.655, P = .51) or sex distribution ( χ 2 = 0.608, P = 0.60). Participants were recruited through advertisements in newspapers, public billboards, and the internet. Exclusion criteria were age <18 or >65 years, earlier PET-scan, contraindications for MRI, pregnancy, menopause, substance abuse or dependency, any ongoing severe somatic disease or serious psychiatric disorder (e.g., major depressive disorder, suicidality or psychosis), any ongoing treatment for psychiatric disorders or treatment that was terminated <3 months ago. All participants were screened for participation using an extensive online form and those who did not meet the exclusion criteria were administered an excerpt from the Structured Clinical Diagnostic Interview for the DSM-IV [ 42 ] and the full Mini-International Neuropsychiatric Interview [ 43 ] via telephone to ensure that all patients in the SAD group fulfilled the DSM-IV criteria for SAD as primary diagnosis, and that none in the control group had a psychiatric diagnosis. Social anxiety symptom severity was measured with the self-report version of the Liebowitz Social Anxiety Scale (LSAS-SR) [ 44 ], with higher scores indicating greater severity (range 0–144). SAD patients (mean ± SD: 84.96 ± 20.37) and HC (7.93 ± 7.46) differed significantly on this scale ( t (66) = 22.13, P < 0.001). All participants provided written informed consent and the study was approved by the Regional Ethical Review Board in Uppsala as well as the Uppsala University Radiation Safety Committee. 
Imaging procedure Positron emission tomography A Siemens ECAT EXACT HR+ (Siemens/CTI) was used to acquire the PET images with 63 contiguous planes of data and slice thickness of 2.46 mm, resulting in a total axial field of view of 155 mm. Participants fasted for at least 3 h and refrained from alcohol, nicotine, and caffeine for at least 12 h before the scan. Participants were positioned supine in the scanner with their head gently fixated, and a venous catheter for tracer injections was inserted in the participants' arm. A 10-min transmission scan for attenuation correction was performed using three retractable germanium ( 68 Ge) rotating line sources. The participants were injected with on average 334.43 ± 22.75 MBq of the [ 11 C]PE2I tracer through an intravenous bolus, and 22 frames of data were acquired over 80 min (4 × 60 s, 2 × 120 s, 4 × 180 s, 12 × 300 s). Following a 45–60 min waiting period to allow for sufficient radioactive decay, acquisition commenced for [ 11 C]DASB using an identical injection procedure and an average activity of 329.93 ± 29.70 MBq. In total, 22 frames of data were acquired over 60 min (1 × 60 s, 4 × 30 s, 3 × 60 s, 4 × 120 s, 2 × 180 s, 8 × 300 s). Magnetic resonance imaging The participants' PET images were co-registered to their individual T1-weighted MR image to make ROI analyses possible. Thus, participants underwent an anatomical T1-weighted MR scan (echo time = 50 ms; repetition time = 500 ms; field of view = 240 × 240 mm 2 ; voxel size = 0.8 × 1.0 × 2.0 mm 3 ; 170 contiguous slices) on a Philips Achieva 3.0 T whole-body MR scanner (Philips Medical Systems, Best, The Netherlands) with an 8-channel head coil. Five SAD and twenty-four HC participants were scanned with a 32-channel head coil due to a scanner upgrade. Data preprocessing With regard to PET data, ordered subset expectation maximization with six iterations and eight subsets and a 4 mm Hanning post filter with appropriate corrections were used to reconstruct dynamic images. The dynamic PET images were realigned to adjust for inter-frame movement using VOIager software 4.0.7 (GE Healthcare, Uppsala, Sweden). Voxel-wise parametric images of non-displaceable BP (BP ND ) were calculated for both radioligands with the cerebellum as reference region. Reference Logan [ 45 ] was used for [ 11 C]DASB (time interval 30–60 min), where BP ND was estimated as the distribution volume ratio (DVR) minus one (BP ND = DVR − 1). Receptor parametric mapping [ 46 ], a basis function implementation of the simplified reference tissue model, was used for [ 11 C]PE2I [ 47 ]. Cerebellar gray matter was selected as reference region for both radioligands because it has no or negligible levels of SERT and DAT. BP ND images were automatically outlined on each participant's anatomical T1-weighted image using the PVElab software [ 48 ]. The [ 11 C]DASB BP ND and [ 11 C]PE2I BP ND images were co-registered to the anatomical T1-weighted MR image using Statistical Parametric Mapping 8 (SPM8; Wellcome Department of Cognitive Neurology, University College London) implemented in Matlab (The MathWorks Inc., Natick, MA, USA). The T1-image was then segmented and normalized to the Montreal Neurological Institute standard space and the transformation parameters applied to the [ 11 C]DASB and [ 11 C]PE2I BP ND images, resulting in parametric images with 2 mm isotropic voxels. Images were then smoothed using a 12 mm Gaussian kernel.
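The Reference Logan step lends itself to a compact illustration. The sketch below (Python/NumPy, with function and variable names of our own choosing) implements the non-invasive Logan plot in its simplest form, regressing over the 30–60 min window with the cerebellar reference curve; it omits the k2′ term of the full reference Logan model, so it is an approximation of the idea rather than a substitute for the authors' VOIager/PVElab pipeline.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def logan_reference_bpnd(t_mid, c_target, c_ref, t_star=30.0):
    """Estimate BP_ND with a simplified non-invasive Logan plot.

    t_mid    : frame mid-times in minutes
    c_target : target-region (or voxel) time-activity curve
    c_ref    : reference-region (cerebellar grey) time-activity curve
    t_star   : start of the linear fitting window (30-60 min here)

    Returns BP_ND = DVR - 1, where DVR is the slope of the Logan plot.
    The k2' term of the full model is omitted for brevity.
    """
    int_t = cumulative_trapezoid(c_target, t_mid, initial=0.0)
    int_r = cumulative_trapezoid(c_ref, t_mid, initial=0.0)
    keep = t_mid >= t_star
    x = int_r[keep] / c_target[keep]   # abscissa of the Logan plot
    y = int_t[keep] / c_target[keep]   # ordinate of the Logan plot
    dvr, _ = np.polyfit(x, y, 1)       # slope = distribution volume ratio
    return dvr - 1.0
```

Applied voxel by voxel, this yields the parametric BP ND images described above.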
Statistical analysis ROIs were selected based on expected radioligand uptake and earlier neuroimaging research in SAD and anxiety disorders [ 6 , 7 , 10 , 22 , 23 , 24 , 49 , 50 ]. The a priori defined ROIs for both tracers were the amygdala, hippocampus, caudate nucleus, putamen, nucleus accumbens (NAcc), pallidum, and thalamus. The [ 11 C]PE2I uptake is largely limited to these regions. For [ 11 C]DASB the ACC, insula cortex, and raphe nuclei were added as additional ROIs. All anatomical regions except NAcc and raphe nuclei were defined using the automated anatomical labeling (AAL) library from the Wake Forest University Pickatlas [ 51 ]. The AAL library does not contain any definitions of NAcc or raphe nuclei, and therefore the Hammersmith atlas [ 52 ] and PVElab were used for these regions, respectively. NAcc was included to enable the evaluation of differential effects in the ventral versus dorsal striatum. Patient and control group BP ND distributions were tested for normality and heterogeneity and were deemed suitable for parametric analyses. To examine group differences (SAD vs. HC) in BP, two-sample t tests were performed for SERT and DAT BP ND separately in SPM8 with age, sex, and scanner version as covariates. The analyses were corrected for familywise error (FWE) within the ROIs using random field theory, and the statistical threshold was set at P FWE < 0.05. Group differences in regional co-expression were assessed by comparing SERT-DAT partial Pearson's product-moment correlations ( R ) between groups, with age, sex, and MR-scanner version partialled out. Correlations between SERT and DAT BP ND were calculated at the voxel level for the SAD group and HC group separately, as a measure of regional co-expression, using the same methodology as in a previous multitracer PET study [ 49 ]. Correlation coefficients were then Fisher transformed to Z -values, which were subsequently used in voxel-wise group comparisons with the statistical threshold set to P < 0.05 [ 49 ]. Analyses were performed in Matlab R2018a. To evaluate the relation between symptom severity and BP within SAD patients, a multiple regression was performed in SPM8 with LSAS-SR score as a predictor for SERT and DAT BP ND separately, with age and sex as covariates and the statistical threshold set at P FWE < 0.05. To examine the effect of SERT-DAT regional co-expression on symptom severity within the patient group in our a priori defined ROIs, BPs for SERT, DAT, and their statistical interaction were entered as predictors into voxel-wise regression models with symptom severity (LSAS-SR) as outcome variable in Matlab, with the threshold set to P < 0.05. Results Serotonin transporter availability Mean BP ND values for each ROI are listed in Supplementary Table 1 . SAD patients as compared with HC had significantly higher SERT availability ([ 11 C]DASB BP ND ) in the left NAcc (Table 1 , Fig. 1c ) and, at the uncorrected level, in all of the investigated ROIs except the raphe nuclei and pallidum (Supplementary Table 2 , Supplementary Fig. 1 ). SERT availability did not correlate with symptom severity (LSAS-SR) within patients at the a priori P -level, while there were several positive correlations within our ROIs at P < 0.05 uncorrected (Supplementary Table 2 ). Table 1 Serotonin (SERT; [ 11 C]DASB) and dopamine (DAT; [ 11 C]PE2I) transporter binding potential (BP ND ) differences in patients with social anxiety disorder (SAD) as compared with healthy controls (HC). Relations between transporter binding and symptom severity (Liebowitz social anxiety scale, LSAS, self-report) within SAD patients are also listed.
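Although the study ran this analysis voxel-wise in Matlab, the co-expression comparison is conceptually simple: residualize both transporters' binding against the nuisance covariates, correlate the residuals, Fisher-transform, and compare groups. A minimal Python sketch follows; the function names are ours, and the degrees-of-freedom adjustment assumes k partialled-out covariates.

```python
import numpy as np

def partial_corr(x, y, covars):
    """Pearson correlation between x and y after regressing out covariates.

    x, y   : (n_subjects,) arrays of SERT and DAT BP_ND at one voxel
    covars : (n_subjects, k) array, e.g. age, sex, scanner version
    """
    design = np.column_stack([np.ones(len(x)), covars])
    # residualize both variables against the covariates
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

def compare_coexpression(r_sad, n_sad, r_hc, n_hc, k=3):
    """Fisher z-transform two partial correlations and compare groups.

    Degrees of freedom are reduced by the k partialled-out covariates.
    Returns the z statistic for the group difference.
    """
    z_sad, z_hc = np.arctanh(r_sad), np.arctanh(r_hc)
    se = np.sqrt(1.0 / (n_sad - k - 3) + 1.0 / (n_hc - k - 3))
    return (z_sad - z_hc) / se
```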
Fig. 1: Altered expression of serotonin and dopamine transporters in social anxiety disorder. The left panel shows mean serotonin transporter (SERT) binding potential (BP ND ) in ( a ) social anxiety disorder (SAD) patients and ( b ) healthy controls. c A cluster with significantly enhanced SERT binding potential (BP ND ) was found in the SAD group in the nucleus accumbens but ( d ) no clusters where symptom severity was significantly related to SERT BP ND were detected at corrected P -levels. The right panel shows corresponding mean dopamine transporter (DAT) BP ND in ( e ) SAD patients and ( f ) healthy controls. g No clusters where SAD differed significantly from controls on DAT BP ND were detected at corrected P -levels but ( h ) clusters with a significant positive correlation between symptom severity, as measured with the Liebowitz social anxiety scale (LSAS-SR), and DAT BP ND were found in the amygdala, hippocampus, pallidum, and putamen. Coordinates are in Montreal Neurological Institute space. The colorbar indicates binding potentials for the two top rows. Parametric images are overlaid on a standard MRI image. Dopamine transporter availability There were no significant between-group differences in DAT binding levels ([ 11 C]PE2I BP ND ) in any of the specified ROIs at the a priori statistical threshold. At the more liberal P < 0.05 uncorrected level, there was higher DAT availability in the patient group in the left amygdala, and bilateral hippocampus and striatum (Supplementary Table 2 , Supplementary Fig. 1 ). In the SAD group, symptom severity (LSAS-SR) correlated positively with DAT availability in the right amygdala, left hippocampus and in a cluster in the right putamen extending into pallidum (see Table 1 and Fig. 1h ). A trend in the same direction was evident in several of the investigated ROIs including the left amygdala ( P FWE = 0.081), left putamen ( P FWE = 0.055), left pallidum ( P FWE = 0.065), and bilateral NAcc ( P FWE = 0.098)—see Supplementary Fig. 1 . Regional co-expression of serotonin and dopamine transporters SERT-DAT regional co-expression was almost exclusively positive within both groups, and no significant negative co-expressions were found in any region (Fig. 2a, b ). Group comparisons showed significantly higher SERT-DAT co-expression, as reflected by higher positive transporter correlations, in the SAD group relative to HC in the left amygdala, and the right-sided caudate, putamen, NAcc and posterior ventral thalamus—see Table 2 and Fig. 2c . A lower co-expression in the dorsomedial thalamus was found in the SAD group (<HC). A follow-up logistic regression analysis of these clusters showed that SAD diagnosis was significantly predicted by the interaction between transporters in the amygdala ( P = 0.032, Z = −2.15), putamen ( P = 0.036, Z = −2.09), and dorsomedial thalamus ( P = 0.013, Z = 2.49). In addition, inclusion of transporter interaction terms in the model increased the model fit, compared with main effects of transporters, as reflected by McFadden R 2 values (interaction/main effects: amygdala = 0.25/0.18; putamen = 0.24/0.19; thalamus = 0.25/0.15). Within the patient group, SERT × DAT interactions did not predict social anxiety symptom severity (LSAS-SR).
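The follow-up test just described amounts to a logistic regression of diagnosis on the two binding potentials and their product, with model fit summarized by McFadden's pseudo-R2. A hedged sketch using statsmodels (function name ours) shows the comparison of main-effects and interaction models:

```python
import numpy as np
import statsmodels.api as sm

def fit_interaction_model(sert, dat, y):
    """Logistic regression of diagnosis (y: 1 = SAD, 0 = HC) on SERT,
    DAT and their interaction; returns McFadden pseudo-R^2 for the
    main-effects model and the interaction model."""
    main = sm.add_constant(np.column_stack([sert, dat]))
    inter = sm.add_constant(np.column_stack([sert, dat, sert * dat]))
    fit_main = sm.Logit(y, main).fit(disp=0)
    fit_inter = sm.Logit(y, inter).fit(disp=0)
    # statsmodels exposes McFadden's pseudo-R^2 as .prsquared
    return fit_main.prsquared, fit_inter.prsquared
```

Here sert and dat would be cluster-averaged BP ND values; a higher pseudo-R2 for the interaction model mirrors the reported amygdala, putamen and thalamus results.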
Fig. 2: Altered co-expression of serotonin and dopamine transporters in social anxiety disorder. Regional co-expression of serotonin (SERT; [ 11 C]DASB BP ND ) and dopamine (DAT; [ 11 C]PE2I BP ND ) transporters, indexed by positive voxel-wise Pearson's product-moment correlation coefficients, for ( a ) patients with social anxiety disorder (SAD), and ( b ) healthy controls. c Clusters of significantly higher regional co-expression in the SAD group in the amygdala, caudate, putamen, nucleus accumbens, and thalamus as compared with the healthy controls. The colorbar indicates Pearson's correlation coefficients for the two top rows. Parametric images are overlaid on a standard MRI image. Table 2 Regional co-expression of serotonin and dopamine transporters, as assessed with voxel-wise correlations of binding potentials (BP ND ) using [ 11 C]DASB and [ 11 C]PE2I PET, in patients with social anxiety disorder (SAD) as compared with healthy controls (HC). Discussion We used multitracer PET, with validated and highly specific radioligands, to evaluate differences in regional expression and co-expression of serotonin and dopamine transporters between patients with SAD and healthy controls. There were indices of upregulated transporter expression within each transmitter system, and significantly higher SERT/DAT co-expression, in SAD patients in brain regions involved in fear and reward processing. Significant statistical interactions between the transporters, indicating increased SERT/DAT co-expression in SAD relative to HC, were prominent in the amygdala, a structure heavily implicated in fear and anxiety [ 53 , 54 ] and strongly influenced by both serotonin [ 55 ] and dopamine [ 56 ]. This was also noted in the putamen, which is involved in reinforcement learning [ 57 ] and may be structurally enlarged in SAD [ 58 ]. The thalamus was the only brain region exhibiting a mixed-direction pattern: a higher positive SERT-DAT correlation in patients than controls was noted in the posterior ventral region, while the reverse was true in the dorsomedial thalamus, where the transporter interaction term also significantly predicted SAD diagnosis. In addition, higher positive SERT-DAT correlations in SAD (>HC) were noted in the right caudate nucleus in the dorsal striatum as well as the NAcc in the ventral striatum. The NAcc has been found to be important both for aversion and reward behaviors [ 59 ] and, together with the amygdala, it is considered to be part of a ventral system of emotion regulation [ 60 , 61 ]. While our results on transporters are interesting in the light of pharmacologic and anatomical studies demonstrating functional reciprocity between serotonin and dopamine [ 31 , 32 , 33 , 34 , 62 , 63 ], the functionality of the altered SERT/DAT co-expression in SAD is not known. Transporter interactions may, however, impact the equilibrium between neural activations in fear and reward networks of the brain, with a resultant proneness for approach-avoidance conflicts at the behavioral level. Local coinciding increases of dopamine and SERT availability could thus play a causal or modulatory role in SAD; however, we cannot determine whether altered monoamine functions are a cause or a consequence of SAD. Looking at the membrane transporter proteins separately, the increased SERT availability in patients with SAD is in line with previous reports from our [ 7 ] and other groups [ 10 ].
We recently reported increased striatal SERT availability in patients with SAD, and we here extend these findings by specifically targeting and reporting higher SERT availability in the NAcc, in the patient group. The NAcc has a high density of serotonergic neurons in its shell [ 64 ], and it is an important region for hedonic reward processing [ 17 , 61 ]. Communication between the amygdala and NAcc has also been found to modulate reward-seeking behavior in rodents, where amygdaloid innervation of the NAcc reinforced additional reward-seeking mediated by dopamine D1-type receptor signaling [ 65 ]. At a lenient statistical threshold, SERT availability was increased in the SAD group in all ROIs except the pallidum and raphae, along with positive correlations with symptom severity in the amygdala, hippocampus, putamen, pallidum, and caudate nucleus (Supplementary Table 2 ). Because these results did not survive correction for multiple comparisons, they should be interpreted with caution. Consistently, however, other [ 11 C]DASB PET studies have also noted positive associations between anxiety-related temperamental traits and higher SERT binding in the amygdalo-hippocampal region in rhesus monkeys [ 66 ] and the thalamus in humans [ 67 ]. The current data additionally support that DAT availability moderates SAD symptomatology as we noted significant positive correlations between symptom severity in SAD patients and DAT binding in the amygdala, hippocampus, putamen, and pallidum. However, a group difference (SAD > HC) in DAT availability was only observed in the amygdala, hippocampus, and striatum at the uncorrected p -level. Similarly, in a SPECT study on Parkinsonian patients, Moriyama et al. [ 68 ] noted no group differences in DAT BP values between SAD and non-SAD participants although SAD symptom severity correlated positively with increased DAT density in the bilateral putamen and left caudate. On the other hand, van der Wee et al. [ 10 ] noted higher striatal DAT binding in SAD patients than controls, but no correlation with behavioral measures. There are also SPECT studies of SAD reporting either null findings [ 22 ] or findings pointing in the opposite direction, i.e., lowered DAT density in patients [ 24 ]. However, in the present trial, using a larger sample and a more sensitive PET methodology with high transporter specificity, there were no brain regions demonstrating higher DAT or SERT binding in controls (>SAD) even at liberal statistical levels. Previous neuroimaging data on increased serotonin synthesis capacity in the amygdala and limbic areas [ 7 , 8 ], downregulation of serotonin-1A autoreceptors [ 6 ], and increased SERT availability [ 7 , 10 ] in SAD could reflect an elevated presynaptic serotonergic activity in this disorder, consistent with animal and human data showing anxiogenic effects of serotonin in the amygdala [ 38 , 69 , 70 ]. Our data, demonstrating increased striatal SERT availability together with increased DAT expression correlating with social anxiety, could indicate a generally overactive presynaptic monoaminergic system, possibly due to an increased number of monoamine nerve terminals, in SAD. However, in contrast to serotonin, there are reasons to suspect dopamine hypoactivity in SAD. PET studies of Parkinson’s disease have demonstrated that greater DAT levels are associated with lower dopamine turnover and lower synaptic dopamine concentrations that cannot be explained solely by dopaminergic terminal loss [ 71 ]. 
Studies of DAT knockout mice consistently show that decreased DAT levels correspond to increased dopamine turnover and increased synaptic dopamine levels [ 72 ]. DAT knockout mice also appear less anxious in the elevated plus maze and other anxiety-relevant paradigms [ 73 ]. Hence, it is plausible that upregulation of the DAT leads to the increased clearance of dopamine from the synaptic space, lowered dopamine concentration and lowered dopamine turnover, possibly contributing to reduced motivational drive for, and pleasure from, social interactions. Indirect evidence for this notion has also been provided by fMRI activation studies noting reduced activation of the NAcc during social reward anticipation in individuals with SAD relative to healthy controls [ 18 , 74 ]. It should be noted that the previous nuclear imaging studies of striatal DAT in SAD are inconclusive [ 10 , 22 , 24 ] but, to our knowledge, our study is the first one addressing this topic using PET. Additional multitracer PET studies, e.g., using measures of dopamine synthesis capacity and release, are needed to clarify if SAD is associated with presynaptic dopaminergic over- or under-activity. A limitation of the current study is that, while the sample size is comparatively large for a PET study, lack of power should be taken into consideration. This is one possible reason why a steady pattern of group differences along with correlations between transporter availability and symptom severity, was not always observed. Another reason could be heterogeneity within the SAD sample as there may be considerable individual differences in personality traits, for example in neuroticism facets like impulsivity, or extraversion facets like excitement seeking. Monoamine transporter parameters could be more strongly related to some of these traits than others [ 75 ]. A related caveat is that we had no scales for behavioral approach specifically in our study, and because of the uptake profile of the [ 11 C]PE2I radiotracer, we could not study DAT availability in all relevant nodes of the brain’s reward pathway such as the orbitofrontal and ventromedial prefrontal cortices. Also, the present PET data on transporter levels do not provide mechanistic insights into how serotonergic and dopaminergic signaling is altered during situational demands faced by socially anxious individuals. This question could be addressed with fMRI activation studies together with PET. A feasible way for further insight into SERT-DAT interactions could also be to study concomitant changes in both systems after effective treatment. Conclusion We demonstrate increased expression and co-expression of the transporters for serotonin and dopamine in SAD, relative to healthy controls, in fear- and reward-relevant brain regions. Presynaptic serotonergic and dopaminergic activity may be important biological factors underlying excessive social anxiety, putatively affecting aversive as well as appetitive motivation. These findings may cast further light on the pathogenesis of SAD and prove useful for the development of future anxiolytic treatments targeting the interaction between the serotonin and dopamine systems.
The balance between the neurotransmitters serotonin and dopamine may affect whether a person develops social anxiety disorder. Previous research has mainly focused on either the serotonin or the dopamine system individually. Now researchers at Uppsala University have demonstrated the existence of a previously unknown link between the two. The results are published in Molecular Psychiatry. "We see that there is a different balance between serotonin and dopamine transport in people with social anxiety disorder compared with control subjects. The interaction between serotonin and dopamine transport explained more of the difference between the groups than each transporter individually. This suggests one should not focus exclusively on one signal substance at a time; the balance between different systems may be more important," says Olof Hjorth, Ph.D. student at the Department of Psychology at Uppsala University, Sweden. Social anxiety can be a highly debilitating psychiatric disorder with negative impacts on the individual's relationships and working life. This study shows that affected people may have an imbalance between the serotonin and dopamine transporters in the amygdala and other brain areas that are important for fear, motivation and social behavior. The functioning of the brain's signal substances is affected by the amount of reuptake by the transmitter cell, which is controlled by specific transporter proteins. "Previously, we have found an increased production and altered reuptake of serotonin in sufferers of social anxiety disorder, a finding we now, in part, replicate," says Hjorth. He adds, "We can now show that dopamine reuptake is also directly related to the severity of the social anxiety symptoms that the individual is experiencing." The method used in the study is called positron emission tomography (PET), in which radioactive agents, injected into the blood stream, decay and release a signal that allows the scientists to determine the density of available transporter proteins in different areas of the brain. The researchers hope that the current findings can lead to a better understanding of the causes of social anxiety and ultimately to new, more effective treatments. "Many of the patients we meet have symptoms that affect all parts of their everyday life, and many of them have suffered for most of their lives, so understanding the cause and finding effective treatments are our highest priority," says Hjorth.
10.1038/s41380-019-0618-7
Computer
Microscopists push neural networks to the limit to sharpen fuzzy images
Jiji Chen et al, Three-dimensional residual channel attention networks denoise and sharpen fluorescence microscopy image volumes, Nature Methods (2021). DOI: 10.1038/s41592-021-01155-x Journal information: Nature Methods
http://dx.doi.org/10.1038/s41592-021-01155-x
https://techxplore.com/news/2021-06-microscopists-neural-networks-limit-sharpen.html
Abstract We demonstrate residual channel attention networks (RCAN) for the restoration and enhancement of volumetric time-lapse (four-dimensional) fluorescence microscopy data. First we modify RCAN to handle image volumes, showing that our network enables denoising competitive with three other state-of-the-art neural networks. We use RCAN to restore noisy four-dimensional super-resolution data, enabling image capture of over tens of thousands of images (thousands of volumes) without apparent photobleaching. Second, using simulations we show that RCAN enables resolution enhancement equivalent to, or better than, other networks. Third, we exploit RCAN for denoising and resolution improvement in confocal microscopy, enabling ~2.5-fold lateral resolution enhancement using stimulated emission depletion microscopy ground truth. Fourth, we develop methods to improve spatial resolution in structured illumination microscopy using expansion microscopy data as ground truth, achieving improvements of ~1.9-fold laterally and ~3.6-fold axially. Finally, we characterize the limits of denoising and resolution enhancement, suggesting practical benchmarks for evaluation and further enhancement of network performance. Main All fluorescence microscopes suffer drawbacks and tradeoffs because they partition a finite signal budget in space and time. These limitations manifest when comparing different microscope types (for example, three-dimensional (3D) structured illumination microscopy 1 (SIM) offers better spatial resolution than high-numerical-aperture light-sheet microscopy 2 but worse photobleaching); different implementations of the same microscope type (for example, traditional implementations of SIM offer better spatial resolution than instant SIM (iSIM) 3 but worse depth penetration and lower speed 4 ); and, within the same microscope, longer exposures and bigger pixels increase signal-to-noise ratio (SNR) at the expense of speed and resolution 5 . Performance tradeoffs are especially severe 6 when considering live-cell super-resolution microscopy applications, in which the desired spatiotemporal resolution must be balanced against sample health 7 . Deep learning 8 , which harnesses neural networks for data-driven statistical inference, has emerged as a promising method for alleviation of the drawbacks in fluorescence microscopy. Content-aware image restoration (CARE 9 ) networks use the popular U-net 10 neural network architecture in conjunction with synthetic, semisynthetic and physically acquired training data to improve resolution, resolution isotropy and SNR in fluorescence images. U-nets have also been incorporated into generative adversarial networks (GAN 11 ) that enable cross-modality super-resolution microscopy, transforming confocal images into stimulated emission depletion (STED) images 12 or transforming a series of wide-field or sparse localization microscopy images into high-resolution (HR) localization microscopy images 13 . Other recent examples include denoising confocal 14 or SIM 15 data and deconvolving light-sheet data 16 . Here we investigate the use of an alternative network architecture, RCAN 17 , for use in super-resolution microscopy applications. RCAN has been shown to preferentially learn high-spatial-frequency detail within natural scene images, but this capability has not been exploited for image restoration in fluorescence microscopy applications, nor on longitudinally acquired image volumes. 
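To make the RCAN vocabulary of "channel attention" and residual blocks concrete before the results, here is a minimal 3D residual channel attention block in PyTorch. This is an illustrative sketch, not the authors' released implementation: the channel count and reduction factor are placeholders, and the full network stacks many such blocks inside residual groups with long skip connections.

```python
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    """Squeeze-and-excitation style channel attention for volumes."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)          # global average pooling
        self.gate = nn.Sequential(
            nn.Conv3d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, 1),
            nn.Sigmoid(),                            # per-channel weights
        )

    def forward(self, x):
        return x * self.gate(self.pool(x))           # rescale feature channels

class RCAB3D(nn.Module):
    """Residual channel attention block with a short skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1),
            ChannelAttention3D(channels),
        )

    def forward(self, x):
        return x + self.body(x)                      # identity bypass of low frequencies

vol = torch.randn(1, 32, 16, 64, 64)                 # N, C, D, H, W
out = RCAB3D(32)(vol)                                # same shape as the input
```

The identity bypass is what lets low-frequency content pass through untouched while the attention-gated body concentrates on high-frequency detail.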
First we modify RCAN for 3D applications, showing that it matches or exceeds the performance of previous networks in denoising fluorescence microscopy data. We apply this capability for super-resolution imaging over thousands of image volumes (tens of thousands of images). Second, we characterize RCAN and other networks in terms of their ability to extend resolution, finding that RCAN provides better resolution enhancement than alternatives, especially along the axial dimension. Finally, we demonstrate four- to fivefold volumetric resolution improvement in multiple fixed- and live-cell samples when using STED and expansion-microscopy 18 ground truth to train RCAN models. Results RCAN enables super-resolution imaging over thousands of volumes The original RCAN was proposed specifically for resolution enhancement 17 . A key challenge in this task is the need to bypass abundant low spatial frequencies in the input image in favor of HR prediction. The RCAN architecture achieves this by employing multiple skip connections between network layers to bypass low-resolution (LR) content, as well as a ‘channel attention’ mechanism 19 that adaptively rescales each channel-wise feature by modeling interdependencies across feature channels. We modified the original RCAN architecture to handle image volumes rather than images, which also improves network efficiency so that our modified 3D RCAN model fits within graphics processing unit (GPU) memory (Fig. 1a , Methods and Supplementary Note 1 ). Fig. 1: Residual channel attention networks denoise super-resolution data. a , The RCAN architecture used throughout this work. Matched low and high-SNR image volumes are used to train our RCAN, a residual-in-residual structure consisting of several residual groups (dark blue, red outline) with long skip connections. Each residual group contains additional RCAB (light blue, blue outline) with short skip connections, convolution, ReLu, sigmoid and pooling operations. Long and short skip connections, as well as shortcuts within the residual blocks, allow abundant bypassing of low-frequency information through such identity-based skip connections, facilitating the learning of high-frequency information. A channel attention mechanism within the RCAB further aids the representational ability of the network in learning HR information. b , Left: noisy raw iSIM data acquired with low-intensity illumination, low-noise deconvolved GT data acquired with high-intensity illumination, RCAN, CARE, SRResNet and ESRGAN output. Lateral (upper) and axial (lower) cross-sections are shown. Samples are fixed U2OS cells expressing mEmerald-Tomm20 imaged via iSIM. Right: comparison of network output using 3D SSIM and PSNR. Means and standard deviations are reported, obtained from n = 10 volumes (Supplementary Figs. 1– 3 ). c , RCAN performance at different input SNR levels, simulated by the addition of Gaussian and Poisson noise to raw input. Noisy raw input data at SNR 2.1 (top row) and 5.1 (bottom row) were used to generate predictions, which were then compared to ground truth. SNR values are calculated as the mean of values within the yellow rectangular regions. Higher-magnification views of mitochondria (marked in yellow rectangular regions) are shown at lower right (Supplementary Fig. 6 ). d , FWHM values (mean ± s.d.) from ten microtubule filaments for deconvolved, high-SNR GT, noisy iSIM input (Raw) and network output (RCAN). e , RCAN denoising enables the collection of thousands of iSIM volumes without photobleaching. 
Mitochondria in live U2OS cells were labeled with pShooter pEF-Myc-mito-GFP and imaged with high- (360 W cm –2 ) and low- (4.2 W cm –2 ) intensity illumination. Top row: selected examples at high illumination power, illustrating severe photobleaching. Middle row: selected examples from a different cell imaged at low illumination power, illustrating low SNR (Raw). Bottom row: RCAN output given low SNR input. Numbers in top row indicate volume no. The graph quantifies the normalized signal in each case; 'jumps' in Raw and RCAN signal correspond to manual refocusing during acquisition. Maximum-intensity projections are shown (Supplementary Videos 1 and 2 and Supplementary Figs. 7 and 8 ). f , Dual-color imaging of mitochondria (green, pShooter pEF-Myc-mito-GFP) and lysosomes (mApple-Lamp1) in live U2OS cells. RCAN output illustrating mitochondrial fission (orange arrowheads), mitochondrial fusion (white arrowheads) and mitochondria–lysosome contacts. Single lateral planes are shown (Supplementary Video 3 ). g , Graph showing quantification of fission, fusion and contact events quantified from 16 cells. Scale bars, 5 μm ( a , b , d – f ) and 1 μm for higher-magnification views ( c ). AU, arbitrary units. To investigate RCAN denoising performance on fluorescence data, we began by acquiring matched pairs of low- and high-SNR iSIM volumes of fixed U2OS cells transfected with mEmerald-Tomm20 ( Methods and Supplementary Tables 1 and 2 ), labeling the outer mitochondrial membrane (Fig. 1b ). We programmed our acousto-optic tunable filter to switch rapidly between low- (4.2 W cm –2 ) and high- (457 W cm –2 ) intensity illumination, acquiring 40 low-SNR raw volumes and matching high-SNR data, which we deconvolved to yield high-SNR ground truth. We then used 30 of these volumes for training and held out ten for testing of network performance. Using the same training and test data, we compared four networks—RCAN, CARE, SRResNet 20 and enhanced super-resolution generative adversarial networks (ESRGAN) 21 (Supplementary Tables 3 and 4 ). SRResNet and ESRGAN are both class-leading deep residual networks used in image super-resolution, with ESRGAN winning the 2018 Perceptual Image Restoration and Manipulation challenge on perceptual image super-resolution 22 . For the mEmerald-Tomm20 label, RCAN, CARE, ESRGAN and SRResNet predictions all provided clear improvements in visual appearance, 3D SSIM and peak SNR (PSNR) metrics relative to the raw input (Fig. 1b ), also outperforming direct deconvolution on the noisy input data (Supplementary Fig. 1 ). The RCAN output provided better visual output, PSNR and SSIM values than the other networks (Fig. 1b ), prompting us to investigate whether this performance held for other organelles. We thus conducted similar experiments for fixed U2OS cells with labeled actin, endoplasmic reticulum, Golgi, lysosomes and microtubules (Supplementary Fig. 2 ), acquiring 15–23 volumes of training data and training independent networks for each organelle. In all cases RCAN output was visually on a par with, and quantitatively better than, the other networks (Supplementary Fig. 3 and Supplementary Table 5 ). Training RCAN with more residual blocks contributed to this performance (Supplementary Fig. 4 ), albeit at the cost of longer training times (Supplementary Table 3 ). An essential consideration when using any deep learning method is understanding when network performance deteriorates.
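For reference, 3D SSIM and PSNR scores like those used to rank the networks above can be computed with scikit-image; the sketch below assumes both volumes are scored over the ground-truth dynamic range (the authors' exact normalization may differ).

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def score_volume(pred, gt):
    """3D SSIM and PSNR between a restored volume and ground truth.

    Both volumes are cast to float and share one dynamic range so the
    metrics are comparable across networks. Each volume dimension must
    be at least 7 voxels for the default SSIM window.
    """
    pred = pred.astype(np.float32)
    gt = gt.astype(np.float32)
    rng = float(gt.max() - gt.min())
    ssim = structural_similarity(gt, pred, data_range=rng)  # volumetric SSIM
    psnr = peak_signal_noise_ratio(gt, pred, data_range=rng)
    return ssim, psnr
```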
Independent training of an ensemble of networks and computing measures of network disagreement can provide insight into this issue 9 , 16 , yet such measures were not generally predictive of disagreement between ground truth and RCAN output (Supplementary Fig. 5 ). Instead, we found that per-pixel SNR in the raw input ( Methods and Supplementary Fig. 5 ) seemed to better correlate with network performance, with extremely noisy input generating a poor prediction as intuitively expected. For example, for the labels mEmerald-Tomm20 and ERmoxGFP, we observed this when input SNR dropped below ~3 (Fig. 1c ). We observed similar effects when using synthetic spherical phantoms in the presence of large noise levels (Supplementary Fig. 6 ). We also examined linearity and spatial resolution in the denoised RCAN predictions. We verified that RCAN output reflected spatial variations in fluorescence intensity evident in the input data, demonstrating that linearity is preserved (Supplementary Fig. 7 ). To estimate spatial resolution we examined the apparent full width at half maximum (FWHM) of ten labeled microtubule filaments in noisy raw input, high-SNR deconvolved ground truth and the RCAN prediction (Fig. 1d ). While lateral resolution was not recovered to the extent evident in the ground truth (170 ± 13 nm, mean ± s.d.), predictions offered noticeable resolution improvement compared to the input data (194 ± 9 nm RCAN versus 353 ± 58 nm input). Next, we tested the performance of RCAN on live cells for extended four-dimensional (4D) imaging applications. At high SNR, relatively few volumes can be obtained with iSIM due to substantial volumetric bleaching. For example, when volumetrically imaging pShooter pEF-Myc-mito-GFP (labeling the mitochondrial matrix) in live U2OS cells every 5.6 s at high intensity (360 W cm –2 ; Fig. 1e and Supplementary Video 1 ), only seven volumes could be acquired before fluorescence dropped to half its initial value. Lowering the illumination intensity to 4.2 W cm –2 so that photobleaching is negligible circumvents this problem, but the resulting low SNR usually renders the data unusable (Fig. 1e ). To determine whether deep learning could help to address this tradeoff between SNR and imaging duration, we accumulated 36 matched low- (4.2 W cm –2 ) and high-intensity (457 W cm –2 ) volumes on fixed cells and trained an RCAN model, which we then tested on our low-SNR live data. This approach enabled super-resolution imaging over an extended duration, allowing capture of 2,600 image volumes (~50,000 images, 2.2 W cm –2 ) acquired every 5.6 s over 4 h with no detectable photobleaching and an apparent increase in fluorescence signal over the course of the recording (Fig. 1e and Supplementary Video 2 ). The restored image quality was sufficiently high that individual mitochondria could be manually segmented, a task difficult or impossible on the raw input data (Supplementary Fig. 8 ). To our knowledge, light-sheet microscopy is the only technique capable of generating 4D data of similar quality and duration, but the sub-200-nm spatial resolution of our method is better than that of high-numerical-aperture light-sheet microscopy 23 . In another application, a dual-color example, we applied the same strategy to imaging pShooter pEF-Myc-mito-GFP in conjunction with mApple-LAMP1-labeled lysosomes. In this case, we obtained ~300 super-resolution volumes recorded every 5.1 s in a representative cell (Supplementary Video 3 ), allowing inspection (Fig. 
1f ) of mitochondrial fission and fusion near lysosomal contacts. By manual quantification of these events from 16 cells, we found that fission occurred ~2.5-fold as often as fusion (Fig. 1g ). Estimation of resolution enhancement offered by deep learning In addition to denoising fluorescence images, deep learning can also be used for resolution enhancement 9 , 12 , 13 . We were curious about the extent to which RCAN (and other networks) could retrieve resolution degraded by the optical system used, since this capability has not been systematically investigated. We were particularly interested in understanding when network performance breaks down—that is, how much blurring is too much. To empirically assess the relative performance of different networks, we simulated ground truth noiseless spherical phantoms and subjected these to increasing amounts of blur (Fig. 2 and Supplementary Videos 4 – 6 ). We trained RCAN, CARE, SRResNet and ESRGAN networks with the same 23 matched volumes of ground truth and blurred data and then challenged each network with seven volumes of previously unseen test data (Fig. 2a–c and Supplementary Videos 4 – 6 ). Fig. 2: RCAN resolution enhancement assayed with simulated spherical phantoms. a , Noiseless images of simulated spherical phantoms were created (High resolution) and blurred (Low resolution), generating matched volumes for RCAN training. Blurred volumes unseen by the trained network were then tested to evaluate deblurring performance. b , Examples of the performance of RCAN, CARE, SRResNet and ESRGAN on increasingly blurred data (blurred with a kernel 2×, 3× and 4× larger than the iSIM PSF used for ground truth (GT) data). Axial (top row) and lateral (bottom row) cross-sections are shown. Networks are compared on the same test object, a subresolution sphere that approximates the iSIM PSF after blurring (GT, shown in leftmost column). Scale bar, 40 pixels. c , Additional examples of input data after progressively more severe blur (RAW, left column, with blurring kernels 2×, 3× and 4× the size of the iSIM PSF indicated in successive rows). Ground truth and different network outputs (right column) are also shown. Scale bar, 100 pixels; lateral (XY, top images) and axial slices (XZ, bottom images) along the dotted horizontal line are shown. Dotted rectangles and red arrows highlight features for comparison across the different networks (Supplementary Videos 4 – 6 ). d , SSIM (top) and PSNR (bottom) for data shown in c . Means and standard deviations from eight measurements are shown (Supplementary Table 6 ). The RCAN network generated plausible reconstructions even with blurring fourfold greater (in all spatial dimensions) than the iSIM point spread function (PSF), largely preserving the size of the smallest particles (Fig. 2b,c ). However, RCAN performance degraded with increasingly blurry input, with SSIM and PSNR decaying from 0.99 to 0.92 and 51 dB to 32 dB, respectively, for two- to fourfold blur, and with other networks also showing worse performance at increasing blur (Fig. 2d and Supplementary Table 6 ). Compared to other networks, RCAN contained a similar number of total network parameters (Supplementary Table 7 ) yet its predictions offered better visual output and superior SSIM and PSNR (Fig. 2b–d and Supplementary Table 6 ). The performance advantage of RCAN was most noticeable at fourfold input blur, where other networks obviously failed to detect particles (Fig. 2b,c and Supplementary Video 6 ).
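The phantom experiment is easy to mock up: draw random spheres into an empty volume and blur with kernels two-, three- and fourfold larger than a nominal PSF. The sketch below uses an anisotropic Gaussian as a stand-in for the iSIM PSF; sphere counts, radii and sigma values are illustrative choices, not the paper's exact parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def spherical_phantom(shape=(64, 128, 128), n_spheres=40, max_r=4):
    """Random-sphere ground truth volume for matched training pairs."""
    vol = np.zeros(shape, dtype=np.float32)
    zz, yy, xx = np.indices(shape)
    for _ in range(n_spheres):
        c = [rng.integers(max_r, s - max_r) for s in shape]  # sphere center
        r = rng.integers(1, max_r + 1)                       # sphere radius
        mask = ((zz - c[0])**2 + (yy - c[1])**2 + (xx - c[2])**2) <= r**2
        vol[mask] = rng.uniform(0.5, 1.0)                    # random intensity
    return vol

gt = spherical_phantom()
psf_sigma = np.array([2.0, 1.0, 1.0])     # z, y, x widths in voxels; illustrative
for k in (2, 3, 4):                       # progressively harder deblurring tasks
    blurred = gaussian_filter(gt, sigma=k * psf_sigma)
```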
Using RCAN for confocal-to-STED resolution enhancement in fixed and live cells Since the noiseless spherical phantoms suggested that RCAN provides class-leading performance for resolution enhancement, we sought to benchmark RCAN performance using noisy experimental data. As a first test, we studied the ability to 'transform' confocal volumes into those with STED-like spatial resolution (Fig. 3 ), which is attractive because confocal imaging provides gentler, higher-SNR imaging than STED microscopy but worse spatial resolution. Such 'cross-modality' super-resolution has been demonstrated before with GANs, but only with two-dimensional (2D) images obtained from fixed cells 12 . Fig. 3: Confocal-to-STED microscopy restoration with RCAN. a – c , Example confocal input (left), RCAN prediction (middle) and ground truth STED (right) images for fixed MEF cells with microtubules stained with ATTO 647-secondary antibodies against anti-α-tubulin primary antibodies ( a ), NPCs stained with Alexa Fluor 594-secondary antibodies against anti-NPC primary antibodies ( b ) and nuclei stained with SiR-DNA ( c ). Higher-magnification views of dotted rectangular regions are shown below a , b ; axial reslices along yellow dotted lines marked in the lateral images are shown for a – c . b , Blue arrows highlight areas of discrepancy between RCAN output and ground truth data. c , Red arrows are included to highlight areas predicted well by RCAN but barely visible in the raw data (Supplementary Figs. 10 and 11 ). Phases of the cell cycle are also indicated in Supplementary Fig. 12 . d , Average image resolution in microtubule (left) and NPC (right) images obtained from decorrelation analysis. Means (also shown above each column) and standard deviations (from n = 18 image planes) are shown for raw confocal input, ground truth STED and RCAN output. e , Live MEF cells stained with SiR-DNA were imaged in resonant confocal mode (top), and the RCAN model trained on fixed datasets similar to those shown in c was applied to yield predictions (bottom). Single planes from volumetric time series are shown (Supplementary Videos 7 and 8 ). f , Higher-magnification view from series in e 2,615 s after the start of imaging, corresponding to nuclei marked 1 and 2 in e . Red arrows highlight areas lacking SiR-DNA signal that are more easily defined in RCAN prediction versus confocal data. Scale bars, 5 μm. Live-cell experiments ( e , f ) were repeated independently for nine different MEF cells, with similar results. We collected training data (22–26 volumes, Supplementary Table 2 ) on fixed, fluorescently labeled mouse embryonic fibroblast (MEF) cells using a commercial Leica SP8 3X STED microscope (Fig. 3a–c ). This system was particularly convenient because STED images could be acquired immediately after confocal images on the same instrument. We imaged fixed MEFs, immunostained with (1) ATTO 647-secondary antibodies against anti-α-tubulin primary antibodies for marking microtubules (Fig. 3a ) and (2) Alexa Fluor 594-secondary antibodies against antinuclear pore complex (NPC) primary antibodies, marking nuclear pores (Fig. 3b ). Next, we trained RCAN models and applied them to unseen data using a modified decorrelation analysis 24 ( Methods and Supplementary Fig. 9 ) to estimate average spatial resolution.
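Decorrelation analysis (ref. 24) estimates resolution from the peak of a decorrelation curve computed between an image's normalized Fourier transform and low-pass-filtered copies of it; a faithful implementation is beyond a short sketch. As a crude, plainly simpler stand-in that conveys the idea of reading resolution off frequency content, the snippet below computes a radially averaged power spectrum for a single image plane, whose decay toward the noise floor gives a rough cutoff frequency.

```python
import numpy as np

def radial_power_spectrum(img):
    """Radially averaged power spectrum of a 2D image plane.

    The radius (spatial frequency) at which the averaged power falls to
    the noise floor serves only as a rough resolution proxy, not as a
    replacement for decorrelation analysis.
    """
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    cy, cx = np.array(power.shape) // 2
    yy, xx = np.indices(power.shape)
    r = np.hypot(yy - cy, xx - cx).astype(int)       # integer radius per pixel
    radial = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return radial / np.maximum(counts, 1)            # mean power per radius
```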
Confocal spatial resolution was 273 ± 9 nm ( n = 18 images used for these measurements) in the microtubule dataset and 313 ± 14 nm in the pore dataset, with STED microscopy providing ~twofold improvement in resolution (129 ± 6 nm for microtubules, 144 ± 9 nm for pores) and the RCAN prediction providing similar gains (121 ± 4 nm for microtubules, 123 ± 14 nm for nuclear pores; Fig. 3d ) that could not be matched by deconvolution of confocal data (Supplementary Fig. 10 ). We suspect that the slight improvement in spatial resolution in RCAN output relative to STED ground truth is because RCAN both denoised the data and improved resolution, resulting in higher SNR than the STED ground truth. Close examination of the RCAN prediction for nuclear pores revealed slight differences in pore placement relative to the STED microscopy ground truth. We suspect that this result is due to slight differences in image registration between confocal and STED data (Supplementary Fig. 11 ), perhaps due to sample drift between acquisitions or slight instrument misalignment. Application of an affine registration between confocal and STED training data improved agreement between them, enhancing network output (Supplementary Fig. 11 ). However, small deviations in nuclear pore placement between ground truth STED and RCAN predictions were still evident. We also examined a third label, SiR-DNA, a DNA stain well suited for labeling live and fixed cells in both confocal and STED microscopy 25 . Collection of matched confocal and STED volumes on fixed nuclei in a variety of mitotic stages enabled us to train a robust RCAN model that produced predictions on different nuclear morphologies (Fig. 3c and Supplementary Fig. 12 ) that were sharper and less noisy than the confocal input. Improvement relative to confocal data was particularly striking in the axial dimension (Fig. 3c ). We also trained CARE, SRResNet and ESRGAN models on the same input data provided to RCAN, finding that RCAN performance matched or exceeded that of the other networks, both visually and quantitatively, for each of the three structures investigated (Supplementary Fig. 13 and Supplementary Table 8 ). Given the quality of the RCAN reconstructions on nuclei, we wondered whether the same RCAN model could be adapted for live samples. Point-scanning confocal imaging can produce time-lapse volumetric recordings of living cells at SNR much higher than STED microscopy, given that more signal is collected per pixel. Nevertheless, even confocal recordings are quite noisy when acquired at high speed. To demonstrate that our RCAN model trained on fixed cells could simultaneously denoise and improve resolution in live cells, we acquired noisy resonant confocal recordings of dividing cells labeled with SiR-DNA (Fig. 3e ). Our illumination conditions were sufficiently gentle and rapid that we could acquire tens of imaging volumes without obvious bleaching or motion blur (Supplementary Video 7 ). Although the raw resonant confocal data poorly defined nuclei and chromosomes, these structures were clearly resolved in the RCAN predictions (Fig. 3e and Supplementary Video 7 ). RCAN also better captured chromosome decondensation and the return to interphase DNA structure (Fig. 3f ; see also additional interphase cell comparisons in Supplementary Video 8 ). 
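As a sketch of the affine confocal-to-STED registration step mentioned above (Supplementary Fig. 11), one possible implementation uses SimpleITK. The choice of library, the file names and the optimizer settings are purely our assumptions; the paper does not state which registration tool was used.

```python
import SimpleITK as sitk

# Fixed = STED ground truth; moving = matched confocal volume (assumed file names).
fixed = sitk.ReadImage("sted_volume.tif", sitk.sitkFloat32)
moving = sitk.ReadImage("confocal_volume.tif", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsCorrelation()
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInitialTransform(sitk.CenteredTransformInitializer(
    fixed, moving, sitk.AffineTransform(3),
    sitk.CenteredTransformInitializerFilter.GEOMETRY))
reg.SetInterpolator(sitk.sitkLinear)

transform = reg.Execute(fixed, moving)
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(aligned, "confocal_aligned.tif")
```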
Use of expansion microscopy to improve iSIM resolution in fixed and live cells Our success in using fixed STED training data to improve the spatial resolution of confocal microscopy made us wonder whether a similar strategy could be used to improve spatial resolution in iSIM. Since our iSIM did not inherently possess a means to image specimens at higher resolution than that of the base microscope, we used expansion microscopy (ExM) 18 to provide higher-resolution training data (Fig. 4a ). ExM physically expands fixed tissue using a hydrogel and can improve resolution near-isotropically up to a factor given by the gel expansion. We used ultrastructure ExM 26 , a variant of the original ExM protocol, to expand mitochondria (immunolabeled with rabbit-α-Tomm20 primary, donkey-α-rabbit biotin secondary and Alexa Fluor 488 streptavidin, each at 2 µg ml –1 ) and microtubules (labeled with mouse-α-tubulin primary, donkey-α-mouse biotin secondary and Alexa Fluor 488 streptavidin (4, 2 and 2 µg ml –1 , respectively)) in fixed U2OS cells, by 3.2- and fourfold, respectively ( Methods and Supplementary Fig. 14 ); we also developed protocols to locate and image the same region before and after ExM with iSIM (Supplementary Fig. 15 and Methods ). Fig. 4: Using ExM to improve spatial resolution in fixed and live iSIM. a , (i) Simplified schematic showing generation of the synthetic data used for training the expansion-RCAN network. Post-expansion data are acquired and deconvolved (decon), generating ground truth data (decon expansion). Post-expansion data are also blurred, noise is added and the resulting images are deconvolved to generate synthetic pre-expansion data (decon synthetic raw). Ground truth and synthetic data are then used to train RCAN models for resolution enhancement on blurry input data (Supplementary Figs. 17 and 18 ). (ii) Two-step RCAN application. RCAN denoising followed by RCAN expansion is applied to deconvolved iSIM images to generate expansion predictions. b , Example input data (either synthetic or experimental), not seen by the network, mimicking deconvolved iSIM, expansion ground truth and two-step RCAN predictions. Lateral and axial (taken along dotted line in lateral view) slices are shown for mitochondria (left, labeled with EGFP-Tomm20 in fixed, expanded U2OS cells) and microtubules (right, immunolabeled with Alexa Fluor 488 secondary against anti-α-tubulin primary antibody in fixed, expanded U2OS cells) (Supplementary Fig. 19 and Supplementary Video 9 ). Red arrows indicate sparsely separated microtubules. c , Average resolution quantification from decorrelation analysis on microtubule samples. Lateral (left) and axial (right) values are shown for experimentally acquired deconvolved iSIM (left, 174 ± 16 and 743 ± 73 nm, respectively), ground truth expanded data (middle, 65 ± 2 and 200 ± 24 nm, respectively) and two-step RCAN predictions (right, 94 ± 11 and 205 ± 46 nm, respectively). Mean (shown also above each column) ± s.d. values derived from n = 12 images are shown (Supplementary Fig. 21 ). d , Live U2OS cells expressing EGFP-Tomm20 were imaged with iSIM; the images were deconvolved (decon) and input into the two-step RCAN process. Top: overview of lateral and axial MIP of first volume in time series from two-step RCAN prediction. Middle: higher-magnification views of axial slice corresponding to yellow rectangular region in overview, comparing deconvolved iSIM input (left) and two-step RCAN output. 
Yellow arrows highlight mitochondria that are better resolved with RCAN output than input data. Bottom: higher-magnification views of red rectangular region in overview, comparing raw iSIM, deconvolved iSIM and RCAN prediction. Red arrows highlight mitochondria better resolved with RCAN than iSIM (Supplementary Videos 10 and 11 ). Experiments were repeated independently for six different U2OS cells, with similar results. e , Images from live Jurkat T cells expressing EMTB-3XGFP were deconvolved and used as input into the two-step RCAN process. Left: selected axial MIPs at indicated time points, comparing deconvolved iSIM and RCAN output. Right: lateral MIPs corresponding to dashed rectangular region in left-hand images. Blue arrows indicate deformation of lower cell cortex before T-cell spreading; red arrow indicates approximate location of centrosome; red lines indicate asymmetric deformation of microtubule bundles surrounding the nucleus; yellow arrows indicate microtubule filaments at the top of the cell better defined with RCAN than by iSIM (Supplementary Videos 12 – 14 ). Experiments were repeated independently for three different Jurkat T cells, with similar results. Scale bars, 5 μm. We first attempted direct registration of pre- to post-ExM data to build a training dataset suitable for RCAN. Unfortunately, local distortions in the post-ExM data prevented the subpixel registration needed for accurate correspondence between pre- and post-ExM data, even when using landmark- and non-affine-based registration methods (Supplementary Fig. 16 ). Instead, we digitally degraded the post-ExM data so that they resembled the lower-resolution, pre-ExM iSIM data (Fig. 4a ). Simply blurring the post-ExM data is insufficient, because blurring also oversmooths the background to the point where the images are noticeably smoother and less noisy than acquired pre-ExM iSIM data (Supplementary Fig. 17 ). Instead, we developed methods to match noise and background signal so that the digitally degraded post-ExM iSIM data better resembled deconvolved, pre-ExM iSIM data (Supplementary Fig. 18 and Methods ). This approach allowed us to register image pairs perfectly and to train RCAN models for microtubule and mitochondrial labels ( Methods , Supplementary Video 9 and Supplementary Fig. 19 ). We note that stochastic variations in labeling are evident in the expanded ground truth data (Supplementary Fig. 20 ), and probably contribute an additional source of noise. Direct application of expansion models to deconvolved iSIM data provided some improvement in resolution (Supplementary Fig. 21 ). Interestingly, the addition of a denoising model (used in Fig. 1 ) before the expansion model substantially improved our results, both visually and quantitatively (Fig. 4a and Supplementary Figs. 19 and 21 ), so we adopted this two-step RCAN approach going forward. On fixed samples, the trained networks provided modest lateral resolution enhancement on synthetic data derived from ground truth images of expanded immunostained mitochondria and microtubules from fixed U2OS cells (Fig. 4b ), allowing us occasionally to resolve closely spaced filaments otherwise blurred in the synthetic images (Fig. 4b , red arrows). However, the axial resolution enhancement offered by RCAN was more dramatic, showing clear improvement similar to the ground truth images. 
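The core of the digital degradation summarized above (and specified under 'Generation of synthetic pre-expansion data' in Methods) is multiplication of the post-expansion image spectrum by a modified OTF. A minimal numpy sketch follows, assuming centered PSF arrays are available; the noise- and background-matching steps of Supplementary Fig. 18 are omitted.

```python
import numpy as np

def synthetic_pre_expansion(post_img, spsf, epsf, eps=1e-6):
    """Degrade a post-ExM volume toward pre-ExM iSIM appearance (sketch).
    mOTF = FT(ePSF) / FT(sPSF); near-zero values of FT(sPSF) are clamped,
    approximating the paper's handling beyond the sPSF cutoff frequency.
    'spsf' is the system PSF and 'epsf' the same PSF enlarged M-fold,
    both assumed centered in their arrays."""
    G_post = np.fft.fftn(post_img)
    otf_s = np.fft.fftn(np.fft.ifftshift(spsf), s=post_img.shape)
    otf_e = np.fft.fftn(np.fft.ifftshift(epsf), s=post_img.shape)
    otf_s_safe = np.where(np.abs(otf_s) > eps, otf_s, 1.0)
    motf = otf_e / otf_s_safe
    return np.real(np.fft.ifftn(G_post * motf))
```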
Using decorrelation analysis to estimate the degree of resolution enhancement on the microtubule data, we found that RCAN offered a 1.7-fold increase laterally and 3.4-fold increase axially relative to synthetic deconvolved data, compared to 2.2-fold (lateral) and 3.5-fold (axial) improvement offered by ground truth data (Supplementary Fig. 21 ). We observed similar enhancements on experimentally acquired pre-expansion data: 1.9- and 3.6-fold improvement laterally and axially by RCAN, versus 2.7- and 3.7-fold improvement, respectively, in ground truth data (Fig. 4c ). The improvements noted in fixed cells prompted us to apply our ExM-trained RCAN models to living cells imaged with iSIM in volumetric time-lapse sequences (Fig. 4d,e and Supplementary Videos 10 – 13 ). In a first example, we applied RCAN to mitochondria labeled with EGFP-Tomm20 in live U2OS cells (Fig. 4d and Supplementary Video 10 ). Modest improvements in lateral resolution and contrast with RCAN offered better definition of individual mitochondria, including void regions contained within the outer mitochondrial space (Fig. 4d , red arrows). As with fixed cells, improvements in axial views of the specimen were more dramatic (Supplementary Video 11 ), allowing us to discern closely packed mitochondria that were otherwise blurred in the deconvolved iSIM data (Fig. 4d , yellow arrows). In a second example, we applied our expansion-RCAN model derived from immunostained U2OS cells to live Jurkat T cells transiently expressing EMTB-3xGFP 27 , a protein that labels microtubule filaments. Jurkat T cells settled onto anti-CD3-coated activating coverslips (Fig. 4e and Supplementary Videos 12 – 14 ), which mimic antigen-presenting cells and enable investigation of the early stages of immune synapse formation 28 . Dynamics and organization of the actin and microtubule cytoskeleton during cell spreading are important regulators of this phenomenon. RCAN output offered clear views of the microtubule cytoskeleton during the initial stages of this dynamic process, including the deformation of microtubule bundles surrounding the nucleus. We observed pronounced deformation of central microtubule bundles at the dorsal cell surface as spreading initiated (Fig. 4e , blue arrows), suggesting that these bundles may be anchored to the actin cortex. Anchoring of microtubules to the actin cortex allows repositioning of the centrosome, a hallmark of immune synapse maturation 29 . Interestingly, we observed higher deformation of microtubule bundles on the right side of the cell shown in Fig. 4e , probably due to the forces that push and pull the centrosome towards the substrate (initially also located on the right side of the cell: red arrow at 228 s). RCAN output offered views with better resolution and contrast than the deconvolved iSIM input, particularly axially and towards the top of the cell. In some cases, dim or axially blurred filaments barely discerned in the input data were clearly resolved in the RCAN view (Fig. 4e (yellow arrows) and Supplementary Videos 12 and 14 ). Discussion Here we focused on 4D imaging applications, because sustained volumetric imaging over an extended duration at diffraction-limited or better spatial resolution remains a major challenge in fluorescence microscopy. We have shown that RCAN denoises and deblurs fluorescence microscopy image volumes with performance on a par with, or better than, state-of-the-art neural networks. 
The improvement over SRResNet and ESRGAN is unsurprising given that these are 2D networks that ignore information along the axial dimension; however, RCAN also outperformed CARE, which uses 3D information. In live 4D super-resolution applications, which typically exhibit pronounced bleaching that limits experiment duration, RCAN restoration allows illumination to be turned down to a level where the rate of photobleaching is drastically reduced or even negligible. Unacceptably noisy images can be restored, allowing for extended volumetric imaging similar to that attained with light-sheet microscopy but with better spatial resolution. We suspect that RCAN—carefully combined with HR, but noisy, confocal microscopy—may thus challenge the current primacy of light-sheet microscopy, particularly when imaging thin samples. At the same time, we expect RCAN denoising to synergize with light-sheet microscopy, allowing even greater gains in experiment duration (or speed) with that technique. RCAN also deblurs images, with better performance than the other networks tested (Fig. 2 ). We used this feature to improve spatial resolution in confocal microscopy (Fig. 3 ), achieving 2.5-fold improvement in lateral resolution, and iSIM data (Fig. 4 ), 1.9-fold improvement laterally and ~3.6-fold improvement axially. Our findings highlight the limitations of current neural networks and workflows and point the way to further improvements. First, on denoising applications we found ‘breaking points’ of the RCAN network at low-input SNR. Estimating such input SNR may be useful in addition to computing measures of network disagreement 9 , especially given that the latter were not especially predictive of differences between ground truth and denoised data (Supplementary Fig. 5 ). Better tools for describing the uncertainties 30 inherent in modern image restoration are needed. Second, for resolution enhancement applications, our simulations on noiseless data revealed that all networks suffer noticeable deterioration when attempting to deblur at blur levels greater than threefold. Perhaps this explains why attempts to restore blurry microscopy images with neural networks have enabled only relatively modest levels of deblurring 9 , 14 . The fact that RCAN yielded better reconstructions than other networks at fourfold blurring, despite a similar number of total network parameters (Supplementary Table 7 ), suggests that network architecture itself may have substantial impact on deblurring performance. In further support of this assertion, during the review period for this manuscript another paper incorporating and improving the RCAN architecture showed class-leading performance in 2D SIM applications 31 . Our simulations also show that increased degradation in network output correlates with increased blur (Fig. 2d ), implying that caution is prudent when attempting extreme levels of deblurring. Exploring the fundamental limits of deblurring with neural networks would be an interesting avenue of further research. Third, practical factors still limit the performance of network output, suggesting that further improvement is possible. For example, tuning network parameters is critical for network performance and we found that more residual blocks substantially improved RCAN performance relative to other networks (Supplementary Fig. 4 ). Whether the overall improvement we observed is due to the RCAN architecture or simply the added convolutional layers (or both) is currently unknown. 
However, given that RCAN with three residual blocks performed similarly to CARE, future experiments might address this point by adding further layers to CARE and comparing it to five-block RCAN. Ongoing improvements in GPUs will also help such studies, enabling easier hyperparameter optimization, investigation of different loss functions 32 and further exploration of the limits of network performance. For confocal-to-STED restoration, local deviations in spatial alignment between training data pairs probably contribute to error in nuclear pore placement (Supplementary Fig. 11 ), suggesting that a local registration step during training would boost the quality of restorations. In regard to ExM data, although we bypassed the need for fine registration of input and ground truth data by simulation of pre-expansion data, improved registration schemes may enable direct use of experimentally derived pre- and post-expansion pairs. We suspect this would further improve the degree of resolution enhancement, because complex noise and background variations in the data could be incorporated into the training procedure. We also expect that increasing label density would further improve the quality of our training data (Supplementary Fig. 20 ), probably also increasing SSIM and PSNR in expansion predictions (Supplementary Fig. 19 ). Our finding that successive application of denoising and resolution enhancement networks improved expansion prediction also merits further investigation, as it suggests that ‘chaining’ neural networks may be advantageous in other applications. Finally, achieving better spatial resolution in live samples usually demands corresponding improvements in temporal resolution, lest motion blur defeat gains in spatial resolution. We did not attempt to further increase the speed of our live recordings to account for this effect, but doing so may result in sharper images. Despite these caveats, our 3D RCAN in its current form improves noisy super-resolution acquisitions, enabling capture of tens of thousands of images; quantification, segmentation and tracking of organelles and organelle dynamics; and the prediction and inspection of fine details in confocal and iSIM data otherwise hidden by blur. We hope that our work inspires further advances in the rapidly developing field of image restoration. Methods Use of neural networks for image restoration 3D RCAN The RCAN network consists of multiple residual groups, each of which itself contains residual structure. This ‘residual in residual’ design forms a very deep network in which the residual groups are connected by long skip connections (Fig. 1a ). Each residual group also contains residual channel attention blocks (RCAB) with short skip connections. The long and short skip connections, as well as shortcuts within the residual blocks, allow low-spatial-frequency information to bypass much of the network, facilitating the prediction of high-spatial-frequency information. Additionally, a channel attention mechanism 19 within the RCAB is used to adaptively rescale channel-wise features by considering interdependencies among channels, further improving the capability of the network to achieve higher resolution. We extended the original RCAN 17 to handle image volumes. Since 3D models of large patch size may consume prohibitive levels of GPU memory, we also changed various network parameters to ensure that our modified RCAN fits within 12 GB of GPU memory. These changes relative to the original RCAN model include the following. 
(1) We set the number of residual groups to G = 5 in the RIR structure; (2) in each residual group, the RCAB number was set to 3 or 5; (3) the number of convolutional (Conv) layers in the shallow feature extraction and residual in residual (RIR) structure is C = 32; (4) the Conv layer in channel downscaling has C / r = 4 filters, where the reduction ratio r was set to 8; (5) all 2D Conv layers were replaced by 3D Conv layers; and (6) the upscaling module at the end of the network was omitted because network input and output are of the same size in our case. In the original RCAN paper 17 , a small patch of size 48 × 48 was used for training. By contrast, we used a much larger patch size, often 256 × 256 × 8 (Supplementary Table 3 ). We tried using a smaller patch size, but the training process was unstable and the results were poor. We suspect this is because microscopy images may show less high-spatial-frequency content than natural images, so a larger patch is necessary to extract sufficient gradient information for back-propagation. Data augmentation (rotation and flip) was used except for the data in Fig. 2 , where we noticed that turning on data augmentation introduced a translational shift in RCAN output that degraded both SSIM and PSNR. The percentile-based image normalization proposed in the CARE manuscript 9 was applied as a preprocessing step before training. In microscopy images, foreground objects of interest may be distributed sparsely. In such cases the model may overfit the background, failing to learn the structure of foreground objects if the entire image is used indiscriminately for training. To avoid overfitting, patches of the background were automatically rejected in favor of foreground patches during training. Background patch rejection is performed on the fly during data augmentation. We implemented training in a 3D version of RCAN using Keras 33 with a TensorFlow 34 backend (additional hyperparameters and training times for the datasets used in this paper are provided in Supplementary Table 3 ). Application of the denoising model on a 1,920 × 1,550 × 12 dataset using a desktop with a single GTX 1080 Ti GPU required ~63.3 s per volume; this also includes the time required to save the volume (with 32-bit output). On similar datasets with the same XY dimensions (but different number of Z -slices), application of the model required ~3.9–5.2 s per Z -slice. Further details are provided in Supplementary Note 1 . SRResNet and ESRGAN SRResNet is a deep residual network utilized for image super-resolution that, in 2017, obtained state-of-the-art results 20 . Building on ResNet 35 , SRResNet has 16 residual blocks of identical layout. Within each residual block there are two convolutional layers with small, 3 × 3 kernels and 64 feature maps, followed by batch-normalization layers and a parametric rectified linear unit (ReLU) as activation function. Generative adversarial networks 11 provide a powerful framework for generation of plausible-looking natural images with high perceptual quality in computer vision applications. GANs are used in image super-resolution applications to favor solutions that resemble natural images 20 . Among such methods, ESRGAN 21 came out top in the Perceptual Image Restoration and Manipulation challenge on perceptual super-resolution in 2018 (ref. 22 ). Thus, we selected ESRGAN as an additional reference method to evaluate performance on fluorescence microscopy images. 
The key concept underlying ESRGAN is to train a generator, G, with the goal of fooling a discriminator, D, that is trained to distinguish predicted from real HR images. Generator network G has 16 residual-in-residual dense blocks 21 (RRDB) of identical layout, which improve on the residual block design in SRResNet. RRDB has a residual-in-residual structure in which multilevel residual learning is used. In addition, RRDB contains dense blocks 36 that increase network capacity due to the dense connections contained within each dense block. Discriminator network D is based on a relativistic GAN 37 . It has eight convolutional layers with small, 3 × 3 kernels as in the VGG network 38 , and the resulting feature maps are followed by two dense layers. A relativistic average discriminator 20 is used as the final activation function to predict the probability that a real HR image is relatively more realistic than a fake one. In this work we used published SRResNet and ESRGAN PyTorch implementations to process image volumes in a slice-by-slice manner. Before training, we normalized LR and HR images by percentile-based image normalization 9 to reduce the effect of hot and dead pixels in the camera. We then linearly rescaled the range of LR and HR images to [0,1]. SRResNet and ESRGAN networks were trained on an NVIDIA Quadro P6000 GPU. In all experiments (except the spherical phantoms), for each minibatch we cropped 16 random, 480 × 480, overlapping image patches for training; patches of background were not used for training. To determine whether a patch pair was from the background, we simply compared the mean intensity of the patch versus the whole image—if the mean intensity of the patch was <20% of that of the whole image, the patch pair was not used for training. In spherical phantom experiments, we selected 16 random 2D image slices (256 × 256) for each minibatch. For SRResNet, Adam optimization was used for all experiments with β 1 = 0.9, β 2 = 0.99, a learning rate of 2 × 10 –4 and 100,000 update iterations. During testing, batch-normalization updating was turned off to obtain an output HR image dependent only on the input LR image. For ESRGAN we used Adam optimization for all experiments, with β 1 = 0.9 and β 2 = 0.99. Generator G and discriminator D were alternately updated, with the learning rate initialized as 10 –4 and decayed by a factor of 2 every 10,000 updates. Training times were ~4–9 h for SRResNet and ~8–24 h for ESRGAN (Supplementary Table 4 ). Application usually required between ~60 s (SRResNet) and 120 s (ESRGAN) for the image volumes shown here. CARE The CARE framework has been described in detail 9 . We implemented CARE through Keras and TensorFlow using the publicly available GitHub implementation. CARE networks were trained on an NVIDIA Titan RTX GPU card in a local workstation. Typically for each image volume, 2,048 patches of size 128 × 128 × 8 were randomly cropped and used to train a CARE network with a learning rate of 2 × 10 –4 . From the extracted patches, 10% was used as validation data. The number of epochs for training was 250, and mean absolute error was used as the loss function. Training time for a given model was 8–12 h; application of the model on a 1,920 × 1,550 × 28 image volume required ~90 s (further hyperparameters and training times for the datasets used in this paper are provided in Supplementary Table 4 ). For all networks, we evaluated 3D PSNR and SSIM 29 on normalized input, network output and ground truth with built-in MATLAB (Mathworks) functions. 
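To make the 3D RCAN building block concrete, here is a minimal Keras sketch of a single residual channel attention block consistent with the parameters above (C = 32 feature channels, reduction ratio r = 8). This is our simplified reading of the architecture, not the authors' released implementation.

```python
from tensorflow.keras import layers

def rcab_3d(x, channels=32, reduction=8):
    """Residual channel attention block with 3D convolutions (sketch).
    Channel attention squeezes the spatial dimensions and rescales each
    feature channel by a learned weight, as described in the text."""
    skip = x
    y = layers.Conv3D(channels, 3, padding="same", activation="relu")(x)
    y = layers.Conv3D(channels, 3, padding="same")(y)
    w = layers.GlobalAveragePooling3D()(y)             # squeeze: (batch, C)
    w = layers.Dense(channels // reduction, activation="relu")(w)
    w = layers.Dense(channels, activation="sigmoid")(w)
    w = layers.Reshape((1, 1, 1, channels))(w)         # per-channel weights
    y = layers.Multiply()([y, w])                      # excite: rescale channels
    return layers.Add()([skip, y])                     # short skip connection
```

Stacking three to five such blocks per residual group, and connecting the residual groups with long skip connections, yields the overall 'residual in residual' topology described above.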
We also performed some training and model application in the Google Cloud Platform using a virtual machine with two NVIDIA Tesla P100 GPUs (each with 16 GB of memory). iSIM U2OS cell culture and transfection U2OS cells were cultured and maintained at 37 °C and 5% CO 2 on glass-bottom dishes (MatTek, no. P35G-1.5-14-C) in 1 ml of DMEM medium (Lonza, no. 12-604 F) containing 10% fetal bovine serum (FBS). At 40–60% confluency, cells were transfected with 100 μl of 1× PBS containing 2 μl of X-tremeGENE HP DNA transfection reagent (Sigma, no. 6366244001) and 2 μl of plasmid DNA (300–400 ng μl –1 ; see Supplementary Table 1 for plasmid information) and maintained at 37 °C and 5% CO 2 for 1–2 days. Immunofluorescence labeling U2OS cells were fixed with 4% paraformaldehyde (Electron Microscopy Sciences, no. 15710) and 0.25% glutaraldehyde (Sigma, no. G5882) in 1× PBS at room temperature (RT) for 15 min. Cells were rinsed three times with 1× PBS and permeabilized by 0.1% Triton X-100 (Sigma, no. 93443) in 1× PBS for 1 min. Cells were treated with 300 μl of Image-iT FX Signal enhancer (ThermoFisher, no. R37107) for 30 min at RT, followed by 30-min blocking with 1% bovine serum albumin (BSA)/PBS (ThermoFisher, no. 37525) at RT. Cells were then labeled with fluorescent antibodies and/or fluorescent streptavidin (Supplementary Table 1 ) in 0.1% Triton X-100/PBS for 1 h at RT. After antibody labeling, cells were washed three times with 0.1% Triton X-100 and stained with DAPI (Sigma, no. D9542; 1 μg ml –1 ) in 1× PBS for 5 min at RT. DAPI staining was used for expansion factor estimation and rapid cell or region localization throughout the ExM process. iSIM imaging for denoising iSIM data were obtained on our previously reported in-house system 3 . A ×60/1.42 numerical aperture (NA) oil objective (Olympus) was used for all imaging, except for the training data acquired for iSIM in ExM cross-modality experiments (which used a 1.2 NA water-immersion lens, described in the ExM section in more detail). To obtain high- and low-SNR image pairs for training, high (usually 33 mW at 488 nm, 72 mW at 561 nm) and low (0.3 mW at 488 nm, 0.6 mW at 561 nm) excitation powers were rapidly switched via an acousto-optic tunable filter. Green and red fluorescence images were acquired with a filter wheel (Sutter, nos. FG-LB10-BIQ and FG-LB10-NW) and notch filters (Semrock, nos. NF03-488E-25 and NF03-561E-25). Samples were deposited on a 35-mm-diameter, high-precision no. 1.5 dish (MatTek, no. P35G-0.170-14-C). For live-cell imaging, dishes were mounted within an incubation chamber (Okolab, no. H301-MINI) to maintain temperature at 37 °C. Estimation of illumination intensity A power meter (Thorlabs, no. PM100D) was used to measure excitation laser power immediately before the objective. Average intensity was calculated as the measured power divided by the field-of-view area (106 × 68 µm 2 ). Jurkat T-cell culture, substrate preparation and iSIM imaging E6-1 wild-type Jurkat cells were cultured in RPMI 1640 supplemented with 10% FBS and 1% penicillin/streptomycin. Cells were transiently transfected with the EMTB-3XGFP plasmid using the Neon (ThermoFisher Scientific) electroporation system 2 days before imaging, using the manufacturer’s protocol. Coverslips attached to eight-well Labtek chambers were incubated in 0.01% w/v poly- l -lysine (PLL) (Sigma-Aldrich) for 10 min. PLL was aspirated and the slide left to dry for 1 h at 37 °C. 
T-cell-activating antibody coating was performed by incubation of slides in a 10 μg ml –1 solution of anti-CD3 antibody (Hit-3a, eBiosciences) for 2 h at 37 °C or overnight at 4 °C. Excess anti-CD3 was removed by washing with L-15 imaging medium immediately before the experiment. Imaging of live EMTB-3XGFP-expressing Jurkat cells was performed at 37 °C using iSIM, with a ×60/1.42 NA lens (Olympus) and a 488-nm laser for excitation, using the same in-house system as above 3 . For volumetric live imaging, the exposure was set to 100 ms per slice, spacing between slices to 250 nm and intervolume temporal spacing to 12.3 s. Linearity estimation Linearity was assessed by measuring intensity in different regions in maximum-intensity projections (MIPs) of raw images of fixed U2OS cells expressing the mEmerald-Tomm20 label, and of the corresponding RCAN predictions (Supplementary Fig. 7 ). Small regions of interest (8 × 8 pixels) were selected and the average intensity value in each region used in comparisons between raw input and RCAN predictions. ExM Expansion microscopy was performed as previously described 26 . Immunolabeled U2OS cells were post-fixed in 0.25% glutaraldehyde/1× PBS for 10 min at RT and rinsed three times in 1× PBS. Fixed cells were incubated with 200 μl of monomer solution (19% w/v sodium acrylate (Sigma, no. 408220), 10% w/v acrylamide (Sigma, no. A3553) and 0.1% w/v N,N′-methylenebis(acrylamide) (Sigma, no. 146072) in 1× PBS) for 1 min at RT. To start gelation, the monomer solution was replaced by fresh solution containing 0.2% v/v ammonium persulfate (ThermoFisher, no. 17874) and 0.2% v/v tetramethylethylenediamine (ThermoFisher, no. 17919). Gelation was allowed to proceed for 40 min at RT and the resulting gel was digested in 1 ml of digestion buffer (0.8 M guanidine hydrochloride and 0.5% Triton X-100 in 1× Tris-acetate-EDTA buffer) with Proteinase K (0.2 mg ml –1 ; ThermoFisher, no. AM2548) for 1 h at 45 °C. After digestion, gels were expanded in 5 ml of pure water (Millipore, Direct-Q 5UV, no. ZRQSVR5WW) and fresh water was exchanged three or four times every 15 min. Pre- and post-ExM on the same cell To compare images between pre- and post-ExM, the same group of cells must be located and imaged before and after ExM (Supplementary Fig. 15 ). After initial antibody (Supplementary Table 1 ) and DAPI staining, pre-ExM cells were imaged under a wide-field microscope with a ×20/0.5 NA air objective (Olympus, UPlanFL N). Based on the DAPI signal, the nuclear shape, diameter and distribution pattern of selected cells can be recorded, a useful aid in finding the same cells again if post-ExM images are acquired on the wide-field microscope. The coarse location of a group of cells was marked by drawing a square with a Sharpie marker underneath the coverslip. The marked cells were then imaged on our in-house iSIM 3 before and after ExM in later steps. Before expansion, the marked region was imaged on iSIM with a ×60/1.2 NA water objective (Olympus, PSF grade) to acquire pre-ExM data. The correction collar was adjusted to the 0.19 setting, which was empirically found to minimize spherical aberration. After ExM, a square portion of expanded gel was cut out based on the marked region drawn underneath the cover glass, then remounted on a poly- l -lysine-coated glass-bottom dish (MatTek, no. P35G-1.5-14-C) and secured by deposition of 0.1% low-melt agarose around the periphery of the gel. 
To create the coated glass-bottom dish we applied poly- l -lysine (0.1% in water, Sigma, no. P892) for 30 min at RT, rinsed three times with pure water and air dried. The same group of cells was then found on the wide-field microscope using the DAPI stain and the ×20 air objective. By comparison to the wide-field DAPI image acquired before expansion, both a coarse estimate of the expansion factor and potential cell distortion/damage can be assayed. Finally, another square was drawn underneath the coverslip to locate the expanded cells, which were then imaged on the iSIM with the same objective and correction collar settings for post-ExM image acquisition. Attempts to register pre- and post-expansion data Pre- and post-expansion images were registered using the landmark registration module in 3D Slicer 39 . Landmark-based registration in 3D Slicer is an interactive registration method that allows the user to view registration results and manipulate landmarks in real time. We first rescaled pre-expansion images according to the estimated expansion factor in the x , y and z axes. During the registration process, pre-expansion images were used as fixed volumes and post-expansion images as moving volumes. Pre- and post-expansion images were coarsely aligned by affine registration based on two or three manually selected landmarks. Image registration was further refined using thin-plate spline registration by interactive manipulation of landmarks. Finally, a transformation grid was generated to transform post- to pre-expansion images (Supplementary Fig. 16 ). Estimation of expansion factor Pre- and post-expansion mitochondrial and microtubule data were inspected in 3D Slicer and registered with landmark-based registration as described in the previous section. Apparent distances between feature points were manually measured, and their ratios were used to obtain the local expansion factor, which varied between 3.1 and 3.4 for mitochondria and between 3.9 and 4.1 for microtubules (Supplementary Fig. 14 ). Based on this analysis, we used a value of 3.2 for mitochondria and 4.0 for microtubules in all downstream processing. Stage scanning with iSIM For rapid tiling of multiple iSIM image fields to capture large expanded samples, we added a stage scan function to our control software, available on request from the authors. In this software, a step size of 0–150 μm can be selected for both horizontal ( X ) and vertical ( Y ) directions. We set this step size to ~70 μm, a value smaller than the field of view, to ensure that each image had at least 20% overlap with adjacent images for stitching. We used up to 100 steps in both directions. The stage scan experiment was performed in a ‘zigzag’ format (adjacent rows were scanned in opposite directions) to avoid large movements and maintain sample stability. At each stage position, 3D stacks were acquired. Stacks were stitched in Imaris Stitcher (Bitplane). Generation of synthetic pre-expansion data To first order, we can interpret the post-expansion image as the enlargement of an object $s$ by an expansion factor $M$, followed by blurring with the system PSF, sPSF: $$g_{\mathrm{POST}} = s_M \ast \mathrm{sPSF}$$ where $s_M$ is the expanded object, $g_{\mathrm{POST}}$ is the post-expansion image of the expanded object and $\ast$ is the convolution operation. Similarly, if we upsample the pre-expansion image by factor $M$ we can approximate it as $$g_{\mathrm{PRE}} = s_M \ast \mathrm{ePSF}$$ where ePSF is sPSF enlarged $M$ times. 
We seek to express $g_{\mathrm{PRE}}$ in terms of $g_{\mathrm{POST}}$, thus obtaining an estimate of $g_{\mathrm{PRE}}$ in terms of the measured post-expansion image. Fourier transforming (FT) both equations, dividing to eliminate the object spectrum and rearranging terms, we obtain $$G_{\mathrm{PRE}} = \left( G_{\mathrm{POST}} \right)\left( \mathrm{mOTF} \right)$$ where mOTF is a modified optical transfer function (OTF) equivalent to the ratio of the OTFs corresponding to ePSF and sPSF—that is, mOTF = FT(ePSF)/FT(sPSF). To avoid zero or near-zero division in this calculation, we set the amplitude of FT(sPSF) to 1 beyond the cutoff frequency of sPSF. Finally, inverse FT yields a synthetic estimate of $g_{\mathrm{PRE}}$. We improved this estimate by also modifying the background and noise levels to better match experimental pre-expansion images, computing SSIM between the synthetic and experimental pre-expansion images as a measure of similarity. We tried to maximize SSIM by (1) modifying the modeled sPSF laterally and axially so that its FWHM was equal to values measured with 100-nm beads and resolution-limited structures in the experimental images; (2) modifying the background level—that is, adding or subtracting a constant value; and (3) adding Gaussian and Poisson noise. We optimized these parameters in a range of ±15% of the values derived from experimental pre-expansion data (two or three pre-expansion images that could be reasonably well registered to corresponding post-expansion data), and then applied these optimized parameters for all synthetic data. Finally, we performed a visual check before deconvolving synthetic and post-expansion data in preparation for RCAN training. Fifteen iterations of Richardson–Lucy deconvolution were applied, using sPSF for the expanded images and the modified ePSF for synthetic data. These steps are shown in Supplementary Fig. 18 . One- and two-step RCAN methods for expansion prediction We trained the RCAN expansion model with matched pairs of deconvolved post-expansion and synthetic pre-expansion images. To obtain one-step RCAN results, iSIM data were deconvolved and upsampled by the expansion factor. The expansion model was then applied to the deconvolved, upsampled images to generate the predicted expansion images. For the two-step RCAN method, after deconvolving and upsampling the iSIM data, the RCAN denoising model developed for the data in Fig. 1 was applied first, and the expansion model was then applied to yield the final expansion predictions. Estimation of SNR in experiments We assumed a simple model for per-pixel SNR, accounting for Poisson noise arising from signal and read noise from the camera. After subtracting a constant background offset (100 counts) and converting the digital signal in each pixel to photons using the manufacturer-supplied conversion factor (0.46 photoelectrons per digital count), we used $$\mathrm{SNR} = S/\left( S + N_{\mathrm{r}}^2 \right)^{0.5}$$ where $S$ is the observed, background-corrected signal in photoelectrons and $N_{\mathrm{r}}$ the read noise (1.3 electrons according to the manufacturer). Spherical simulations For the images in Fig. 2 and images and analysis in Supplementary Fig. 6 , the simulated ground truth images consisted of spheres seeded at random locations and of random size and intensity, generated with ImgLib2 (ref. 40 ). The maximum radius of the spheres was set at three pixels and the intensity range to 1,000–20,000. We generated a set of 30 such images of size 256 × 256 × 256. 
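A numpy sketch of generating such sphere phantoms follows. The paper used ImgLib2, so this Python version is ours; the number of spheres per volume is an assumption, while the maximum radius and intensity range follow the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere_phantom(shape=(256, 256, 256), n_spheres=200, max_radius=3):
    """Random spheres at random locations with random size and intensity
    (1,000-20,000 counts). Written for clarity, not speed; n_spheres is
    an assumption, as the paper does not state a sphere count."""
    vol = np.zeros(shape, dtype=np.float32)
    zz, yy, xx = np.indices(shape)
    for _ in range(n_spheres):
        c = rng.integers(0, min(shape), size=3)
        r = rng.uniform(1, max_radius)
        inten = rng.uniform(1_000, 20_000)
        mask = (zz - c[0])**2 + (yy - c[1])**2 + (xx - c[2])**2 <= r**2
        vol[mask] = np.maximum(vol[mask], inten)
    return vol
```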
Ground truth images were generated by blurring this set of 30 images with the iSIM PSF (simulated as the product of excitation and emission PSFs, generated with the PSF Generator software using an NA of 1.42 and excitation and emission wavelengths of 488 and 561 nm, respectively). Noisy phantom images were obtained by adding Gaussian noise (simulating the background noise of the camera in the absence of fluorescence) and Poisson noise (with standard deviation proportional to the square root of the signal) to the ground truth images. The 2×, 3× and 4× blurred noiseless phantom images were obtained by blurring the initial 30 images with a kernel 2×, 3× and 4× the size, respectively, of the iSIM PSF. Estimation of spatial resolution The resolution measures in Fig. 1d were estimated by computing FWHM as a measure of the apparent size of a subdiffractive object (microtubule width). However, all other resolution estimates were based on decorrelation analysis 24 . This method estimates average image resolution from the local maxima of a series of decorrelation functions, providing an estimated resolution that corresponds to the highest spatial frequency with sufficient SNR, rather than the Abbe resolution limit. There are four main steps in the algorithm. First, the FT of the input image $I(k)$ and its normalized version $I_n(k)$ are cross-correlated using Pearson correlation, producing a single value between 0 and 1, denoted $d$. Second, the normalized FT $I_n(k)$ is repeatedly filtered by a binary circular mask with different radius $r \in [0,1]$ (here $r$ is expressed as a normalized spatial frequency), and the cross-correlation between $I(k)$ and each filtered $I_n(k)$ is recalculated, yielding a decorrelation function $d(r)$. This decorrelation function exhibits a local maximum of amplitude $A_0$ that indicates the spatial frequency $r_0$ with the best ratio of noise rejection to signal preservation. Third, the input image is repeatedly filtered with different Gaussian high-pass filters to attenuate the energy of low frequencies. For each filtered image, another decorrelation function is computed, generating a set of $[r_i, A_i]$ pairs, where $r_i$ and $A_i$ are the position and amplitude, respectively, of the local maximum. Last, the most suitable peak position (that is, one of the $r_i$) is selected as the estimate of resolution. In the original algorithm 24 , two choices are used and validated in many applications: (1) the peak corresponding to the highest frequency (that is, the maximum $r_i$ value); and (2) the peak corresponding to the highest geometric mean of $r_i$ and $A_i$. However, we found that both criteria often failed on our images—that is, the estimated resolution was often well beyond the theoretical resolution limit. Plotting $[r_i, A_i]$ pairs shows three phases: $A_i$ first increases in phase I, then gradually decreases in phase II and finally increases again in phase III (Supplementary Fig. 9a ). Resolution values in phase III exist due to digital upsampling of the pixel size, but are not reliable because they extend past the Abbe limit. We thus modified the algorithm by (1) setting a theoretical resolution limit in computing the decorrelation functions and (2) adopting a new criterion to determine the resolution estimate. Our new criterion finds the local minimum of $A_i$ to locate $r_i$ at the transition between phases II and III, which provides a reliable resolution estimate that is robust to changes in pixel size. We validated this strategy on a microtubule image with 1×, 1.5× and 3× digital upsampling (Supplementary Fig. 
9b ), finding that our criterion gave identical estimates of spatial resolution in each case. For estimation of lateral and axial resolution in our data (input, ground truth and deep learning outputs), we first interpolated the stacks along the axial dimension to achieve isotropic pixel size. We then performed our modified decorrelation analysis on a series of xy slices to obtain a lateral resolution estimate (with mean and standard deviations derived from the slices). For axial resolution, we implemented a sectorial resolution estimate 24 on a series of xz slices, where the binary circular mask was replaced with a sectorial mask (22.5° opening angle; Supplementary Fig. 9c ) that captured spatial frequencies predominantly along the z dimension. The MATLAB code used for resolution estimation is available upon request from the corresponding author. Confocal and STED microscopy Sample preparation Mouse embryonic fibroblasts were grown in no. 1.5 glass-bottom dishes (MatTek, no. P35G-1.5-20-C) using DMEM (Gibco, no. 10564011) supplemented with 10% FBS (Quality Biological, no. 110-001-101HI). For microtubule and nuclear pore samples, we fixed and permeabilized cells with –20 °C methanol (Sigma-Aldrich, no. 322415) for 10 min at –20 °C. Samples were rinsed and blocked for 1 h with 1× Blocker BSA (ThermoFisher Scientific, no. 37525) and incubated overnight at 4 °C with a 1:500 dilution of primary antibodies, rabbit anti-α-tubulin (Abcam, no. ab18251, 4 µg ml –1 ) and mouse anti-nuclear pore complex (Abcam, no. ab24609, 4 µg ml –1 ), in 1× Blocker BSA. Samples were washed three times for 5 min with 1× Blocker BSA. After the last washing step, we fluorescently labeled samples by incubation with a 1:500 dilution of secondary antibodies, Alexa Fluor 594 goat anti-mouse (ThermoFisher Scientific, no. A-11005, 2 µg ml –1 ) and ATTO 647N goat anti-rabbit (Sigma-Aldrich, no. 40839, 2 µg ml –1 ), in 500 µl of 1× Blocker BSA for 4 h at RT. Samples were washed four times for 5 min with 1× Blocker BSA. After a final washing, samples were mounted in glass-bottom dishes using 90% glycerol (Sigma-Aldrich, no. G2025) in PBS (KD Medical, no. RGF-3210). For SiR-DNA imaging we used live MEF cells, grown as before, and MEF cells fixed with cold (4 °C) 4% formaldehyde (Sigma-Aldrich, no. 252549) in PBS for 20 min at RT. Sample labeling was performed with the SiR-DNA kit (Spirochrome, no. SC007) following the manufacturer’s protocol. Fixed samples were mounted as above. Imaging We acquired 33 matched sets of confocal/STED volumes for microtubule- and nuclear pore complex-labeled samples. For these experiments all images were acquired using a Leica SP8 3X STED microscope, a white-light laser for fluorescence excitation (470–670 nm), a Leica HyD SMD time-gated photomultiplier tube and a Leica ×100/1.4 NA STED White objective (Leica Microsystems, Inc.). ATTO 647 was excited at 647 nm and emission was collected over a bandwidth of 657–700 nm; Alexa Fluor 594 was excited at 580 nm and emission was collected over a bandwidth of 590–650 nm. All images (both confocal and STED) were acquired with a pinhole size of 0.7 arbitrary units (A.U.), a scan speed of 600 Hz, a pixel format of 1024 × 1024 (pixel size, 25 nm), a six-slice z-stack acquired at an interslice distance of 0.16 μm and time gating on the HyD SMD set to a range of 0.7–6.5 ns. 
STED images for both labels were acquired with a 775-nm depletion laser (pulsed at 80 MHz) at a power of 105 mW at the back aperture for ATTO 647-labeled microtubules (25% full laser power), and of 85 mW at the back aperture for Alexa Fluor 594-labeled nuclear pore complexes (20% full laser power). Fluorescence excitation for STED imaging was set to 4× and 1.5× the confocal excitation power levels for ATTO 647 and Alexa Fluor 594, respectively. For ATTO 647, HyD SMD gain was set to 100% for confocal and STED imaging; for Alexa Fluor 594, HyD SMD gain was set to 64% for confocal imaging and 100% for STED imaging. For both colors, confocal images were acquired with a twofold line average and STED images were acquired with a twofold line average combined with two-frame integration. SiR-DNA-labeled MEF cells were imaged in both fixed- (confocal and STED) and live-cell (confocal only) mode. Low-SNR confocal and high-quality STED image replicates were taken on similar fixed samples (35 datasets) to train a deep learning model for application to live-cell confocal data. Low-excitation-level (that is, low-SNR) live-cell confocal images were followed over time to capture cell division. For these experiments, the same microscope hardware listed above was used but scanned in resonant mode (to afford more rapid imaging capable of capturing cell division). For live-cell confocal images, stacks of 25 or more slices (interslice distance of 0.16 μm) were taken approximately every 1 min (2 s per frame) continuously for a period of ~30–45 min. Images were taken at a scan rate of 8,000 Hz, an eight-line average, a pinhole set to 1 A.U., 647-nm excitation (5% total laser power), an emission bandwidth of 657–737 nm, a pixel size of 25 nm and a format of 2,048 × 2,048. For fixed-cell experiments the confocal settings were the same, except that line averaging was set to 16, the frame rate was 6 s per frame, excitation power at 647 nm was set to 0.1% total laser power (to approximately match SNR in the live-cell data) and only one z-stack was taken. STED experiments were the same, except that 647-nm excitation was set to 1.5% and depletion power at 775 nm was 7.5% (approximately 35 mW at the back aperture). Time-gating windows on the HyD SMD were set to 0.3–6.5 or 0.7–6.5 ns for confocal and STED experiments, respectively. For live experiments, the temperature was set to 37 °C using a culture dish heater and temperature control unit (nos. DH-35 and TC-344B, Warner Instruments) and an objective heater (Bioptechs). Deconvolution Huygens Professional (v.19.1, Scientific Volume Imaging) was used to deconvolve some confocal images. All deconvolution was based on idealized point spread functions using the classic maximum-likelihood estimation deconvolution algorithm. In some cases, the object stabilizer module was used to compensate for drift and minor mechanical instabilities. Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability Training and test datasets for organelle denoising, synthetic phantoms, confocal-to-STED and iSIM-to-ExM predictions are publicly accessible in a Zenodo repository. Source data are provided with this paper. Code availability The code and sample data used in this study are available online; an installation guide, data and instructions for use are also available from the same webpage.
Fluorescence imaging uses laser light to obtain bright, detailed images of cells and even sub-cellular structures. However, if you want to watch what a living cell is doing, such as dividing into two cells, the laser may fry it and kill it. One answer is to use less light so the cell will not be damaged and can proceed with its various cellular processes. But, with such low levels of light there is not much signal for a microscope to detect. It's a faint, blurry mess. In new work published in the June issue of Nature Methods, a team of microscopists and computer scientists used a type of artificial intelligence called a neural network to obtain clearer pictures of cells at work even with extremely low, cell-friendly light levels. The team, led by Hari Shroff, Ph.D., Senior Investigator in the National Institute of Biomedical Imaging and Bioengineering, and Jiji Chen, of the trans-NIH Advanced Imaging and Microscopy Facility, call the process "image restoration." The method addresses the two phenomena that cause low-light fuzzy images—low signal-to-noise ratio (SNR) and low resolution (blurriness). To tackle the problem they trained a neural network to denoise noisy images and deblur blurry images. So what exactly is training a neural network? It involves showing a computer program many matched pairs of images. The pairs consist of a clear, sharp image of, say, the mitochondria of a cell, and the blurry, unrecognizable version of the same mitochondria. The neural network is shown many of these matched sets and therefore "learns" to predict what a blurry image would look like if it were sharpened up. Thus, the neural network becomes capable of taking blurry images created using low-light levels and converting them into the sharper, more detailed images scientists need in order to study what is going on in a cell. To work on denoising and deblurring 3D fluorescence microscopy images, Shroff, Chen and their colleagues collaborated with a company, SVision (now part of Leica), to refine a particular kind of neural network called a residual channel attention network, or RCAN. Images of nuclear pores created with a diffraction-limited confocal microscope (left) are blurry. Using a super-resolution microscope, the nuclear pores are much better resolved (GT, ground truth image). At the far right, the RCAN network was shown the blurry confocal image and predicted the sharp image, which much better resembles the high-resolution GT image. Scale bar = 5 micrometers. Credit: Jiji Chen In particular, the researchers focused on restoring "super-resolution" image volumes, so-called because they reveal extremely detailed images of tiny parts that make up a cell. The images are displayed as a 3D block that can be viewed from all angles as it rotates. The team obtained thousands of image volumes using microscopes in their lab and other laboratories at NIH. When they obtained images taken with very low illumination light, the cells were not damaged, but the images were very noisy and unusable—low SNR. By using the RCAN method, the images were denoised to create a sharp, accurate, usable 3D image. "We were able to 'beat' the limitations of the microscope by using artificial intelligence to 'predict' the high SNR image from the low SNR image," explained Shroff. "Photodamage in super-resolution imaging is a major problem, so the fact that we were able to circumvent it is significant." In some cases, the researchers were able to enhance spatial resolution several-fold over the noisy data presented to the 3D RCAN. 
Another aim of the study was determining just how messy an image the researchers could present to the RCAN network—challenging it to turn a very low-resolution image into a usable picture. In an "extreme blurring" exercise, the research team found that at large levels of experimental blurring, the RCAN was no longer able to decipher what it was looking at and turn it into a usable picture. "One thing I'm particularly proud of is that we pushed this technique until it 'broke,'" explained Shroff. "We characterized the SNR regime on a continuum, showing the point at which the RCAN failed, and we also determined how blurry an image can be before the RCAN cannot reverse the blur. We hope this helps others in setting boundaries for the performance of their own image restoration efforts, as well as pushing further development in this exciting field."
10.1038/s41592-021-01155-x
Medicine
Researchers retract paper that suggested Chinese CRISPR twins might die early
Xinzhu Wei et al. CCR5-∆32 is deleterious in the homozygous state in humans, Nature Medicine (2019). DOI: 10.1038/s41591-019-0459-6 Journal information: Nature Medicine
http://dx.doi.org/10.1038/s41591-019-0459-6
https://medicalxpress.com/news/2019-10-retract-paper-chinese-crispr-twins.html
Abstract We use the genotyping and death register information of 409,693 individuals of British ancestry to investigate fitness effects of the CCR5 -∆32 mutation. We estimate a 21% increase in the all-cause mortality rate in individuals who are homozygous for the ∆32 allele. A deleterious effect of the ∆32/∆32 mutation is also independently supported by a significant deviation from the Hardy–Weinberg equilibrium (HWE) due to a deficiency of ∆32/∆32 individuals at the time of recruitment. Main In late 2018, a scientist from the Southern University of Science and Technology in Shenzhen, Jiankui He, announced the birth of two babies whose genomes were edited using CRISPR 1 . No presentation of the experiment has appeared in the scientific literature; however, online information 2 describes the introduction of mutations in the CCR5 gene with the aim of mimicking the effect of the CCR5 -∆32 mutation, which provides protection against HIV in European individuals 3 . Although the mutations were not identical to CCR5 -∆32 (ref. 2 ), and the consequences of the mutations are unknown, the stated purpose was nevertheless the prevention of HIV. The CRISPR experiment raises a number of obvious ethical issues. In addition, it is not clear whether the ∆32 mutation is beneficial. A mutation can be advantageous or disadvantageous depending on environmental conditions 4 and developmental stages 5 . In fact, despite the protection that ∆32 provides against HIV, and possibly other pathogens such as smallpox 6 and flavivirus 7 , and although it facilitates recovery after stroke 8 , it also appears to reduce protection against certain other infectious diseases such as influenza 9 . Fitness effects of individual segregating mutations are expected to be small, and are therefore very hard to measure directly. However, owing to the recent availability of large databases of genomic data, direct studies of fitness effects of individual mutations have now become feasible 10 . We might expect that the ∆32 mutation is deleterious in the homozygous state based on previous reports in smaller data sets, which show that individuals with the ∆32/∆32 genotype have increased mortality when infected by influenza 9 and are four times more likely to develop certain infectious diseases 11 . Here we investigate this hypothesis using the genotyping and death register information of 409,693 individuals of British ancestry in the UK Biobank 12 . ∆32 has a frequency of 0.1159 in the British population, and the UK Biobank contains data from thousands of individuals who are homozygous for the ∆32 allele, providing an opportunity to compare the mortality of these individuals to that of ∆32/+ and +/+ individuals. We calculate the survival rate (1 − death rate) per year for each of the three ∆32 genotypes, from age 41 to age 78 (see Methods ), which is the entire range allowed by the data available (Fig. 1a ). Owing to the small sample size at ages 77 and 78, we primarily report the survival probability before age 76 (see Methods ). The death rate from age 70 to 74 in the UK Biobank volunteers is 46–56% lower than that in the general UK population of the same age 13 , probably owing to an ascertainment bias known as the ‘healthy volunteer effect’ 14 . Nevertheless, the relative death rates among different genotypes can still be compared to provide information about the fitness effects of specific mutations. 
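As a sketch of this per-genotype survival calculation, one could use the lifelines library; this is our choice of tool (the paper does not name its software) and all column names are hypothetical.

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical columns: 'age_enrolled' (entry age), 'age_end' (age at death
# or censoring), 'died' (0/1) and 'genotype'. Passing entry ages handles
# left truncation, so survival from age 41 onward is estimated from an
# age-structured cohort rather than from birth.
df = pd.read_csv("ukb_ccr5.csv")
kmf = KaplanMeierFitter()
for g, sub in df.groupby("genotype"):
    kmf.fit(sub.age_end, event_observed=sub.died,
            entry=sub.age_enrolled, label=g)
    print(g, kmf.predict(76))  # estimated survival probability to age 76
```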
The uncorrected survival probabilities to age 76 of individuals enrolled in the study are 0.8351 for ∆32/∆32, 0.8654 for ∆32/+, and 0.8638 for +/+ (Fig. 1a ), which implies that ∆32/∆32 has an approximately 21% higher aggregated death rate before age 76 than the other genotypes. The average age of enrollment is 56.5 years, so the data largely reflect differences in mortality in individuals above this age. We can partially correct for the death registration delay and biased ascertainment using the general population’s death rate per year. After correction, the individuals with the ∆32/∆32 genotype are approximately 20% less likely to reach age 76 than individuals with the other genotypes (see Methods ). To test the significance of the nominally lower survival rate of ∆32/∆32, we first perform a log-rank test comparing the death rate of ∆32/∆32 individuals to that of the other two genotypes ( Z score = 2.37, one-tailed P = 0.0089). We also bootstrap the sample 1,000 times and find that ∆32/∆32 individuals have a significantly higher death rate than the other two genotypes, whereas ∆32/+ and +/+ individuals have similar death rates (Supplementary Table 1 ). The increase in mortality of ∆32/∆32 individuals is highest at age 74, at which point it is 26.4% higher than the mortality of +/+ individuals (95% bootstrap confidence interval (3.0%, 49.5%)). Similarly, a Cox model 15 for left-truncated and right-censored data also suggests that ∆32/∆32 individuals have an average 21.4% elevated death rate across all ages (95% confidence interval 3.4–42.6%, one-tailed P = 0.0089). The fifth principal component is associated with Irish ancestry 12 and is also associated with a difference in mortality (two-sided P = 2.5 × 10 −16 ) in the Cox model. However, when correcting for this effect using principal component analysis (PCA) loadings as covariates, the increase in mortality of ∆32 is maintained (see Supplementary information ). We note that despite the nominally large detected effect on survivorship, the P value of 0.0089 is only moderately small, owing to the low frequency of ∆32/∆32 individuals and the generally low mortality in the cohort. The accuracy of the estimates will probably improve in future years as the mortality rate of the cohort increases. Fig. 1: CCR5 -∆32 is deleterious in the homozygous state. a , Survival probabilities of the three ∆32 genotypes (+/+, ∆32/+ and ∆32 / ∆32). The one-tailed P values from the log-rank tests up to age 76 are shown. The number of samples for which age information and genotype at ∆32 are both available is 395,704. b , The histogram of inbreeding coefficients, F , from 5,932 SNPs whose allele frequencies closely resemble that of ∆32. The black arrow points to the observed F of ∆32 ( F ∆32 / ∆32 = −0.19), calculated for the ∆32 / ∆32 individuals. The sample size used in estimating F for each of the 5,932 SNPs varies from 7,896 to 409,607 with a mean of 405,428, and the sample size for ∆32 is 395,714. Selection against homozygous individuals will lead to deviations from the HWE, which can be measured by the inbreeding coefficient ( F ). Deviations from the HWE at the time of enrollment, which is the time at which samples are obtained for genotyping, provide an assessment of the differential fitness of ∆32 genotypes that is independent from the previous analyses using death registry information obtained after enrollment.
We test for deviations from the HWE consistent with a deleterious effect of ∆32 in homozygous individuals by calculating the allele-specific inbreeding coefficient F ∆32 / ∆32 . However, there might be deviations from HWE in the data for multiple other reasons, including inbreeding and population structure. Therefore, we compare F ∆32 / ∆32 (see Methods ) with the locus-specific value of F for other variants in the data with minor allele frequencies similar (± 0.0025) to that of ∆32. Only 20 out of 5,932 variants have a smaller F than F ∆32 / ∆32 (Fig. 1b ; empirical one-tailed P = 0.0034). In addition, the deviation from the HWE for each age group also correlates with the deviation predicted by the survival probability (Spearman’s ρ = 0.67, P = 1.4 × 10 −4 ; see Supplementary information and Extended Data Fig. 1 ). These two independent analyses are largely consistent with each other and both indicate a substantial increase in mortality associated with the ∆32/∆32 genotype. Our results show that being homozygous for the ∆32 mutation is associated with reduced life expectancy in a modern cohort, despite the protective effect of the mutation against HIV 3 . This finding echoes previous reports that ∆32 reduces resistance against influenza 9 and other infectious diseases 11 . We did not observe any difference in mortality between ∆32 / + and +/+ individuals (Supplementary Table 1 ), despite the fact that ∆32 / + also provides protection against HIV 3 . This could reflect the healthy volunteer effect in the UK Biobank cohort 13 if individuals affected by HIV, or suffering from higher mortality due to HIV infection, are less likely to be recruited. In that case, our estimates of death rates reflect individuals that have reduced exposure to HIV, and the conclusion regarding increased mortality of ∆32 / ∆32 is then with reference to such individuals. If so, it would also imply that ∆32 is overdominant in the presence of HIV; that is, that individuals heterozygous for the mutation have the highest fitness. In the absence of HIV or other infectious agents for which the mutation provides protection, the mutation will be under negative directional selection. However, because only approximately 0.16% of the current British population is infected by HIV 16 , the benefit from this protection is probably too small to have a detectable influence on survival probability in our study. It is unclear exactly which factors are most important for the fitness effects of the ∆32 mutation. There are many phenotypic associations that are significant at the 5% significance level after correction for multiple testing in the UK Biobank (see Supplementary information for the phenotypes), and the mutation is probably highly pleiotropic. Out of the 5,932 single-nucleotide polymorphisms (SNPs) with matching allele frequencies, only 76 have more phenotypic associations than ∆32 in terms of the UK Biobank phenotypes (empirical one-tailed P = 0.0128, see Supplementary information ). It is perhaps not unexpected that homozygosity for a deletion in a functional gene is associated with reduced fitness. This underscores the idea that the introduction of new or derived mutations in humans using CRISPR technology, or other methods for genetic engineering, comes with considerable risk even if the mutations provide a perceived advantage. In this case, the cost of resistance to HIV may be increased susceptibility to other, and perhaps more common, diseases.
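For concreteness, a minimal Python sketch of the allele-specific HWE test described above; the genotype counts are hypothetical (chosen only so that F comes out near the reported −0.19), and the frequency-matched F values are random placeholders rather than real UK Biobank variants:

import numpy as np

def inbreeding_coefficient(n_aa, n_aA, n_AA):
    # Allele-specific F from P_aa = (1 + F) * p^2, where p is the
    # observed minor-allele frequency and P_aa the observed frequency
    # of minor-allele homozygotes (the estimator given in the Methods).
    n = n_aa + n_aA + n_AA
    p = (2.0 * n_aa + n_aA) / (2.0 * n)
    return (n_aa / n) / p**2 - 1.0

# Hypothetical genotype counts with a deficit of minor-allele
# homozygotes; these reproduce an F of roughly -0.19.
F_d32 = inbreeding_coefficient(4300, 83000, 308000)

# Empirical one-tailed P: the fraction of frequency-matched SNPs whose
# F is at least as negative as that of Delta32. F_matched stands in
# for the F values of the ~5,932 matched variants.
rng = np.random.default_rng(0)
F_matched = rng.normal(0.0, 0.08, 5932)   # placeholder, not real data
p_emp = (F_matched <= F_d32).mean()
print(F_d32, p_emp)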
Methods The study population This study uses the UK Biobank data under application number 33672 and basket IDs 10997 and 2000429. It complies with ethical regulations of the University of California (UC) Berkeley and the data are accessed under the Material Transfer Agreement between the UK Biobank and UC Berkeley. In the UK Biobank, 409,693 volunteers have self-reported British ancestry confirmed by PCA 12 , which constitutes roughly 0.62% of the entire British population. Our main analysis is performed on these volunteers, unless otherwise stated. There are 75,970 volunteers in the UK Biobank whose data are labeled as of non-British ancestry, which are used to investigate the effect of ∆32 in populations other than the British. The UK Biobank volunteers were recruited during 2006–2010 and 2.9% of the volunteers (13,831) have a recorded age at death (all cause). Marker selection and validation SNP rs62625034 (coordinate 3:46414975 in GRCh37) is a directly genotyped SNP that is used to identify ∆32 (rs333) based on the following validations. First, the Affymetrix probe used for this SNP is CCATACAGTCAGTATCAATTCTGGAAGAATTTCCA[G/T]ACATTAAAGATAGTCATCTTGGGGCTGGTCCTGCC, based on annotation files ‘Axiom_UKBiLEVE.na34.annot.csv’ and ‘Axiom_UKB_WCSG.na34.annot.csv’. The targeted region of this probe fully includes the 32-bp deletion in rs333, given that rs333 (∆32) has coordinate 3: 46414947-46414978 in GRCh37. Second, rs62625034 is not called as a SNP in the 1000 Genomes database and a recent study on variants in CCR5 (ref. 17 ) also confirmed that it could be detected only in one of the Denisovan samples. However, the allele frequency detected by the probe of rs62625034 in the UK Biobank is 0.1159 among the British ancestry genomes, which does not resemble the frequency of rs62625034 but closely resembles the frequency of rs333 (0.1237) in the European and the British population (CEU and GBR) in the 1000 Genomes data. Third, SNP rs113010081, a directly genotyped SNP in the UK Biobank data, is in strong linkage disequilibrium with rs333 in the 1000 Genomes data, with an r 2 of 0.93 combining CEU and GBR. We calculate the Pearson correlation between rs113010081 and the probe of rs62625034 using the UK Biobank British ancestry genotypes and obtain r 2 = 0.94, which again resembles the correct linkage disequilibrium between rs113010081 and rs333. In addition, there is no other SNP that is in as strong a linkage disequilibrium with rs113010081 in the targeted region of this probe. Last, we also estimate the survival probability for rs113010081, and the results are similar to those obtained for rs62625034 (not shown). Estimation of survival probability The UK Biobank death records are updated quarterly with the UK National Health Service (NHS) Information Centre for participants from England and Wales, and by NHS Central Register, Scotland for participants from Scotland. However, the death records are not made available immediately to researchers. The latest date of death among all registered deaths in the downloaded data is 16 February 2016, and we use this date to approximate the time of last death entry, and assume that after this date we have no mortality or viability information for the volunteers.
We use five entries from the UK Biobank data—the age at recruitment, the date of recruitment, the year of birth, month of birth, and the age at death—to calculate the number of individuals ( N i ) who are ascertained from age i to age i + 1, and the occurrence of death observed from these N i individuals during the interval of age i to age i + 1 is O i . Using this information, we calculate the ascertained age for each individual. We ignore the partially ascertained age to avoid biases from censoring. For example, an individual recruited at age 45.2, and reaching age 52.3 on 16 February 2016, who does not have a reported death in our data, is treated as being observed from age 46 to age 52, thus this volunteer contributes to N 46 , N 47 , N 48 , N 49 , N 50 , N 51 . As another example, a person who is recruited at age 65.7, and who could have reached age 72.6 by 16 February 2016 but has a reported death at age 69.7 will contribute to N 66 , N 67 , N 68 , N 69 , and this volunteer will also contribute to O 69 . This volunteer does not contribute to N 70 , because death has already occurred before age 70. The death rate per year is then calculated as h i = O i / N i , and the probability of surviving to age i + 1 is \(S_i = \prod_{n = 41}^{i} (1 - h_n)\) . The UK Biobank data allow estimation of death rates from h 41 to h 77 , but because N 77 is smaller than 800, we have to assume that h 76 = h 77 and combine these two ages in our estimation. We estimate h i separately for the three different ∆32 genotypes. We mainly report the survival probability before age 76, as there are sufficient data to obtain accurate estimates, but the estimated survival probabilities to age 77 and 78 are also shown in Fig. 1 . As the exact birth dates of the volunteers are considered sensitive, we do not have access to these. The age at recruitment in the UK Biobank is rounded down to the nearest integer age, and we approximate the exact age using the date of recruitment, the year of birth, and month of birth, assuming that everyone is born on the 15th of their birth month. In rare cases, when the date of recruitment is very close to a person’s birthday, the approximated age could be smaller than the age at recruitment provided by the UK Biobank, and in these rare cases we instead round up the estimated age. After applying this rounding scheme, if there are no errors in the data, under no scenario should the estimated age be smaller than the integer age at recruitment. However, there are 17 individuals whose estimated age is smaller than the age at recruitment, and we exclude these individuals in the death rate calculation. Among them, 15 are of British ancestry. Although the UK Biobank routinely imports death records from the national databases, the healthy volunteer effect 13 can still lead to a substantial underestimation of the death rate per year h i compared to the general population. The delay of the death records may be affected by many factors, including time of recruitment, age of death, cause of death, and various socioeconomic factors 18 . However, if we assume that these biases are independent of the ∆32 genotype, we can then estimate the death rate correction factor C i for each age i , and estimate the death rate per year and the survival probability for the three different ∆32 genotypes in the general population, as described below.
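The interval bookkeeping in the two worked examples above can be made concrete with a short Python sketch; the function name and inputs are ours, not the paper's:

import numpy as np

def tabulate_exposure(age_in, age_out, died, lo=41, hi=78):
    # age_in:  age at recruitment
    # age_out: age at death, or the age reached by the last
    #          death-registry entry (16 February 2016) if no death
    # died:    True if a death is recorded
    # Returns N (person-intervals observed per age) and O (deaths).
    N = np.zeros(hi - lo)
    O = np.zeros(hi - lo)
    for a_in, a_out, d in zip(age_in, age_out, died):
        first, last = int(np.ceil(a_in)), int(np.floor(a_out))
        for i in range(max(first, lo), min(last, hi)):
            N[i - lo] += 1.0          # fully observed intervals only
        if d and lo <= last < hi:
            N[last - lo] += 1.0       # interval containing the death
            O[last - lo] += 1.0
    return N, O

# The two worked examples from the text: the first volunteer
# contributes to N_46..N_51; the second to N_66..N_69 and O_69.
N, O = tabulate_exposure([45.2, 65.7], [52.3, 69.7], [False, True])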
To estimate this correction, we download the national life tables in the UK (nltuk1517reg.xls) from the Office for National Statistics, which contain the death rate per year for the entire British population each year from 1980 to 2017, estimated for males and females separately. We average the death rate per year from 2006 to 2016 to represent the death rate H i of the general population. We then use h i / H i to estimate C i . We then calculate a corrected death rate for each ∆32 genotype. For example, the corrected death rate for +/+ is h i ,+ / + / C i . We use the corrected death rates to estimate the corrected survival probability ( S C ). The inferred survival probabilities after correction ( S C ) to age 76 are 0.7565, 0.7589, and 0.7111 for genotypes +/+, ∆32/+, and ∆32 / ∆32, respectively. With this crude correction, the probability of death before age 76 in the general population is approximately 20% higher for ∆32 / ∆32 individuals than for heterozygous individuals ((1 − S C ,∆32 / ∆32 )/(1 − S C ,∆32 / + ) − 1 ≈ 0.20). We note that although the calculations of death rates could be more accurate, for example by using exact birthdays (which we did not have access to), the significant difference in death rates between genotypes is unlikely to be explained by this effect. However, our survival analyses may underestimate the beneficial effects of ∆32 in some age groups owing to ascertainment biases caused by the healthy volunteer effect 13 . Estimation of F F ∆32 / ∆32 is estimated from the equation P ∆32 / ∆32 = (1 + F ∆32 / ∆32 ) P ∆32 P ∆32 , where P ∆32 and P ∆32 / ∆32 are the observed frequencies of ∆32 and ∆32/∆32, respectively. When F ∆32 / ∆32 is significantly lower than 0, it implies that the observed fraction of ∆32/∆32 individuals is lower than expected under the HWE, consistent with increased mortality of ∆32/∆32 individuals. The F values of other SNPs are estimated similarly. Statistical analysis One-tailed P values from log-rank tests are used in Fig. 1a and Supplementary Table 1 . In Fig. 1b , the empirical one-tailed P value from the F of 5,932 SNPs is used. Bootstrap 95% confidence intervals are shown as error bars in Extended Data Fig. 1a , and are used in Supplementary Table 1 . Spearman’s correlation is used in Extended Data Fig. 1 . In addition, the details of the statistical tests are given where they are mentioned. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data, code, and research notebook availability The genotype and death registry information are available with the permission of the UK Biobank. Analytical results and scripts are accessible online. In addition, a detailed experimental notebook covering the entire development of this project is available in an online repository. Change history 08 October 2019 An amendment to this paper has been published and can be accessed via a link at the top of the paper.
A pair of researchers from the University of California has retracted a paper they had published in the journal Nature Medicine in which they claimed to have found evidence that the Chinese CRISPR twins might die early. In their retraction, Xinzhu Wei and Rasmus Nielsen report that the reason for the retraction was genotyping bias in UK Biobank data that they used to conduct their research. Last year, a team of researchers in China announced that they had used the CRISPR gene-editing technique to disable the CCR5 gene (the result is known as delta-32, found naturally in some people) in twin babies who were described as "healthy" when they were born. The team disabled the gene in the twins as part of research toward improving resistance to HIV. The news made headlines, with critics denouncing the use of gene editing on human embryos. The news also led other research efforts to determine if disabling the CCR5 gene in humans might lead to previously unknown side effects. One of those efforts was carried out by Wei and Nielsen—their study involved filtering data from the U.K. Biobank. In so doing, they found evidence that they claimed showed that people with dual copies of delta-32 were slightly more likely to die before reaching the age of 76 than the rest of the population. They also reported finding that the database had fewer people with dual copies of delta-32 than there should be based on evolutionary theory. The paper by Wei and Nielsen, which was published just four months ago, attracted immediate attention from people both in and outside of the field. Other researchers began searching the U.K. Biobank to see if they could replicate what Wei and Nielsen had found, but were unable to do so. Another team at Harvard Medical School found a discrepancy in the way dual copies of delta-32 were counted by Wei and Nielsen—a discrepancy that had led to undercounting many people in the U.K. Biobank with dual copies of delta-32. Wei and Nielsen acknowledge their false result in their retraction, though they continue to refer to it as a genotyping error in the database. They also admit there were tests they could have conducted to verify their results, but neglected to do so.
10.1038/s41591-019-0459-6
Earth
Rising seas set to double coastal flooding by 2050: study
Sean Vitousek et al. Doubling of coastal flooding frequency within decades due to sea-level rise, Scientific Reports (2017). nature.com/articles/doi:10.1038/s41598-017-01362-7 Journal information: Scientific Reports
http://nature.com/articles/doi:10.1038/s41598-017-01362-7
https://phys.org/news/2017-05-seas-coastal.html
Abstract Global climate change drives sea-level rise, increasing the frequency of coastal flooding. In most coastal regions, the amount of sea-level rise occurring over years to decades is significantly smaller than normal ocean-level fluctuations caused by tides, waves, and storm surge. However, even gradual sea-level rise can rapidly increase the frequency and severity of coastal flooding. So far, global-scale estimates of increased coastal flooding due to sea-level rise have not considered elevated water levels due to waves, and thus underestimate the potential impact. Here we use extreme value theory to combine sea-level projections with wave, tide, and storm surge models to estimate increases in coastal flooding on a continuous global scale. We find that regions with limited water-level variability, i.e., short-tailed flood-level distributions, located mainly in the Tropics, will experience the largest increases in flooding frequency. The 10 to 20 cm of sea-level rise expected no later than 2050 will more than double the frequency of extreme water-level events in the Tropics, impairing the developing economies of equatorial coastal cities and the habitability of low-lying Pacific island nations. Introduction Global sea level is currently rising at ~3–4 mm/yr 1 , 2 and is expected to accelerate due to ocean warming and land-based ice melt 3 , 4 . Sea-level rise (SLR) projections range from 0.3 to 2.0 m by 2100, depending on methodology and emission scenarios 5 , 6 , and recent work suggests that accepted methodologies significantly underestimate the contribution of Antarctica 7 . Coastal regions experience elevated water levels on an episodic basis due to wave setup and runup 8 , tides 9 , storm surge driven by wind stress and atmospheric pressure, contributions from seasonal and climatic cycles, e.g., El Niño/Southern Oscillation 10 , 11 and Pacific Decadal Oscillation 12 , and oceanic eddies 13 (Fig. 1 ). Figure 1 The water-level components that contribute to coastal flooding. Coastal flooding often occurs during extreme water-level events that result from simultaneous, combined contributions, such as large waves, storm surge, high tides, and mean sea-level anomalies 11 , 14 . SLR leads to (1) passive high-tide inundation of low-lying coastal areas 15 , (2) increased frequency, severity, and duration of coastal flooding 16 , (3) increased beach erosion 17 , (4) groundwater inundation 18 , 19 , (5) changes to wave dynamics 20 , and (6) displacement of communities 21 . Predicting regions vulnerable to passive inundation is relatively simple with the aid of high-resolution digital elevation models 22 . However, predicting the effect of SLR on episodic flooding events is difficult due to the unpredictable nature of coastal storms, nonlinear interactions of physical processes (e.g., tidal currents and waves), and variations in coastal geomorphology (e.g., sediments, bathymetry, topography, and bed friction). Local-scale assessments of coastal hazard vulnerability typically rely on detailed, computationally-onerous numerical modeling efforts 23 in order to simulate wave-related nearshore water levels, interactions with local topography, and the resulting flooding. Global-scale coastal hazard vulnerability assessments, on the other hand, rely on extreme value theory applied to water-level observations. Extreme-value theory Extreme-value theory 24 , 25 is a statistical method for quantifying the probability or return period of large events.
The generalized extreme value (GEV) distribution, sometimes called the Fisher–Tippett distribution, is a powerful and general statistical model for extremes 26 . The GEV distribution models the probabilities of the maxima of a random variable 24 , 27 , 28 using three parameters μ , σ , and k , the location (mean), scale (width), and shape (family type), respectively 26 . Oceanographic and coastal engineering studies often rely on GEV theory to describe the frequency of extreme waves 29 , water-level events 30 , flooding impacts 31 , and to understand the effects of SLR 32 . As sea level increases, the probability increases that a fixed elevation will experience flooding (Fig. 2 ). Equivalently, the return period or recurrence interval of flooding at a fixed elevation decreases 33 , 34 . In the example shown in Fig. 2B , 1 m of SLR causes the 5 m flood level (the former 100-year flood) to recur every 25 years. Figure 2 Example: by elevating the exceedance probability distribution, a 1 m increase in SL increases the frequency ( A ) and lowers the return period ( B ) of the 5 m flood level. Note that the steeper the probability distribution in A, the flatter the return time curve in B, i.e., the greater the increase in frequency and the reduction in return time. Thus regions with lower variability in flood level will experience larger increases in flooding frequency under SLR. See Methods and Extended Data Figs 1 and 2 . SLR can affect flood magnitude and frequency directly (Fig. 2 ) or indirectly via hydrodynamic feedbacks: SLR alters water depths, changing the generation, propagation, and interaction of waves, tides, and storm surges. Thus, SLR and long-term changes in wave climate, e.g., changes in magnitude, frequency, and tracks of storms 35 , 36 , 37 and storm surge, can alter the parameters of extreme water-level distributions and the evolution of coastal hazards over time. In the present work, we assume parameter stationarity based on projections of minor changes (5–10% 35 , 36 , 37 ) in mean annual wave conditions and storm surge over large regions of the ocean. In specific locations, such as the Pacific Northwest, trends in extreme wave climate may be significant 38 and lead to a greater flooding hazard than SLR over at least the next several decades 39 , calling for nonstationary methods 40 in future research. Investigations of increased flooding frequency due to SLR are often site-specific and rely only on water-level data from tide stations. For example, Hunter (2012) [ref. 41 ] and the Intergovernmental Panel on Climate Change (IPCC) 2013 report 3 estimate the factor of increase in the frequency of flooding events due to 0.5 m of SLR at locations of 198 tide stations around the globe [Hunter 41 Fig. 4 and IPCC 3 Fig. 13.25]. Hunter 41 and IPCC 3 found that regions with low variability of extreme water levels will experience large increases in flooding frequency. This finding, introduced qualitatively by Hoozemans et al . [ref. 33 ], is critical for predicting the global regions most vulnerable to SLR.
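The effect illustrated in Fig. 2 is easy to reproduce numerically. A short Python sketch using SciPy's GEV implementation, with purely illustrative parameter values; note that SciPy parameterizes the shape as c = −k relative to the convention used in this paper:

from scipy.stats import genextreme

mu, sigma, k = 5.0, 0.5, 0.0   # illustrative Gumbel-like parameters

# Level whose annual exceedance probability is 1/100 (the 100-yr flood).
level = genextreme.isf(1.0 / 100.0, -k, loc=mu, scale=sigma)

slr = 1.0                       # +1 m of sea-level rise shifts the location
E_before = genextreme.sf(level, -k, loc=mu, scale=sigma)        # = 0.01
E_after = genextreme.sf(level, -k, loc=mu + slr, scale=sigma)
f_inc = E_after / E_before      # factor of increase in exceedance
print("factor of increase: %.1f, new return period: %.1f yr"
      % (f_inc, 1.0 / E_after))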
However, global-scale coastal hazard assessments using this methodology encounter three challenges: (1) Water-level observation stations are sparsely located around the globe, especially in the Indian Ocean and South Atlantic; (2) wave-driven water-level contributions, i.e., setup and swash, are not included; and (3) the global variability of the GEV shape parameter has not been considered, although it can be as influential as the scale parameter in determining vulnerability. Here we meet the three challenges by using extreme-value theory to combine sea level, wave, tide, and storm-surge models to predict increases in extreme water-level frequency on a global scale. Application Flooding results from the complex interaction of extreme water levels, topography, and the built environment. Here we use the frequency of extreme water levels as a proxy for regional-scale increases in flooding frequency, while recognizing that the relationship between water level and flooding is location dependent because of coastal topography, coastal defense structures, and drainage systems. We apply sea-level projections and global wave, tide, and storm surge models to predict the future return periods (associated with the former 50-yr extreme water level) due to SLR. As in Hunter 41 and IPCC 3 , we begin by investigating increases in flooding frequency due to a globally-uniform amount of SLR, acknowledging that spatial variability in the regional rate of SLR (e.g., driven by ocean circulation patterns, glacial fingerprinting) and the local relative rate of SLR (e.g., due to tectonic activity, glacial isostasy, land subsidence) will affect flooding predictions for specific locations 42 . Later we take the inverse approach, estimating the amount of SLR that doubles the frequency of extreme water-level events. Using maximum likelihood estimates, we fit GEV probability distributions to the top three annual maximum water-level events from 1993–2013 obtained via synthesis of the Global Ocean Wave (GOW) reanalysis 43 , Mog2D storm-surge model 44 , and TPXO tide model 45 as discussed in Methods. Figure 3 shows the global variability of the mean ( μ ), scale ( σ ), and shape ( k ) parameters for extreme total water level in panels A, B, and C, respectively. The GEV parameters provide necessary inputs to the factors of increase, f inc , and the future return period of the former 50-yr water level based on Eq. ( 3 ) (see Methods). Figure 4 shows the factor of increase for the SLR projections μ SL = +0.1, +0.25, +0.5 m on a global scale. Finally, the GEV parameters allow for global estimation of the amount of SLR that doubles the exceedance probability of the 50-yr water-level elevation [see Fig. 5 and Methods Eq. ( 4 )]. Analyzing the amount of SLR leading to a doubling in flooding (Fig. 5 ) is equivalent to the factor-of-increase results shown in Fig. 4 , but it provides a more intuitive picture of the effects of small amounts of SLR. Table 1 summarizes the global, tropical, and extra-tropical mean values of the quantities presented in Figs 3 , 4 and 5 . Although the plotted distributions apply only to coasts, they are calculated ocean-wide in order to reveal the continuous global pattern of vulnerability of both continental coastal settings and non-contiguous island nations throughout the world’s oceans.
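As a rough sketch of the fitting step (not the authors' code), a GEV distribution can be fitted to annual maxima with SciPy; the paper's r-largest variant (top three maxima per year with a 12-h separation criterion) has no off-the-shelf SciPy equivalent, so plain block maxima on synthetic data are used here:

import numpy as np
from scipy.stats import genextreme

# Synthetic stand-in for 21 years of hourly total water level
# (wave setup + tide + surge) at one grid cell.
rng = np.random.default_rng(1)
hours = 24 * 365
twl = rng.gumbel(1.0, 0.1, size=(21, hours))

annual_max = twl.max(axis=1)

# Maximum-likelihood GEV fit to the annual (block) maxima.
c_hat, mu_hat, sigma_hat = genextreme.fit(annual_max)
k_hat = -c_hat   # convert back to the paper's sign convention
print(mu_hat, sigma_hat, k_hat)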
Figure 3 Global estimates of the location ( μ ), scale ( σ ), and shape ( k ) parameters of the GEV distribution of extreme water level (the sum of wave setup, tide, and storm surge) shown in panels A, B, and C, respectively. The dashed and solid lines in panel C represent contours of k that are significantly different from zero at the 75% and 95% confidence levels, respectively. The maps in this figure were made using Matlab 2016a. Figure 4 Global estimates of the expected factor of increase in exceedance probability, f inc , and the future return period, T R , of the 50-yr water level, for SLR projections: μ SL = +0.1, +0.25, +0.5 m. We note that the estimated increase in flooding potential is purely due to SLR and not due to changes in climate or storminess. White lines indicate the Tropic of Cancer and Tropic of Capricorn. The maps in this figure were made using Matlab 2016a. Figure 5 The upper bound of SLR that doubles the exceedance probability of the former 50-year water level. This SLR is the upper limit of a 95% confidence interval based on a Monte Carlo simulation of the GEV parameter estimates and their associated confidence bands (see Methods). Red areas represent regions particularly vulnerable to small amounts of SLR. The maps in this figure were made using Matlab 2016a. Table 1 Mean values of GEV parameters (Fig. 3 ), factors of increase (Fig. 4 ), and doubling SLR (Fig. 5 ) in the tropics, extratropics, and worldwide. Discussion We first consider the GEV parameters for extreme water levels (Fig. 3 ), then the frequency increases (Fig. 4 ), followed by the SLR threshold that doubles exceedance of the 50-yr water level (Fig. 5 ). The spatial variability in the GEV location parameter ( μ ) is shown in Fig. 3A . Globally, 99% of the values of μ fall between 0.50 and 2.13 m. The location parameter strongly resembles the M 2 tidal amplitude 45 yet is also influenced by global wave climate. The parameter is largest in the North Pacific and North Atlantic due to large tides and the occurrence of extratropical storms that track mainly west to east, producing large, latitudinally-isolated waves. The scale parameter ( σ ) ranges from 0.024 to 0.118 m (Fig. 3B ) and is correlated with the location parameter with r = 0.47. In other words, the regions that experience the largest water levels also experience the largest variance in those levels. The spatial variability of the shape parameter ( k ) is uncorrelated with that of the other GEV parameters. The shape parameter ranges from −0.18 to 0.20 (Fig. 3C ) with a global mean of −0.024. Notably, the geographic regions in Fig. 3C with large (positive) values of the shape parameter are regions with high densities of tropical storm tracks, i.e., the Tropics and lower mid-latitudes of the western Pacific and Atlantic Oceans. The range and geographic variability of the shape parameter in Fig. 3C are remarkably similar to previously reported results for the shape parameter of extreme wave heights 46 , underscoring the importance of wave-driven water-level components (see Extended Data Figs 3 and 8 for details) and the role of tropical cyclones on the magnitude and spatial distribution of the shape parameter.
In theory, negative values of the shape parameter, i.e., bounded water-level distributions, are expected based on the notion that upper bounds on tide, storm surge, and maximum wave heights exist due to limiting processes (e.g., wave breaking and physical limits in wind speed, fetch, and duration prevent unbounded wave heights). On the other hand, positive values of the shape parameter, i.e., unbounded water-level distributions, indicate the probability of exceedingly large yet inconsistent water-level events relative to an annual event. In practice, both positive and negative values of the shape parameter are possible because of the limited amount of data available for parameter estimation and the possibility of outliers. Thus, it is difficult to assess, a priori, whether the large values of the shape parameter result from a proper characterization of the variability of tropical cyclones or from the presence of outliers among a temporally-limited data set. We expect that more than 21 years of data (used here) would likely improve the characterization of extreme events due to tropical cyclones and the estimation of the shape parameter. The dashed and solid lines in panel C (Fig. 3 ) represent contours of k that are significantly different from zero at the 75% and 95% confidence levels, respectively. The near-zero mean and the limited extent of the statistically significant non-zero values of the shape parameter in Fig. 3C suggest that the Gumbel distribution [the GEV family when k = 0, as in Hunter 41 and IPCC 3 ] might suffice for global-scale assessments of SLR impacts. However, for smaller-scale regions of interest, particularly the Caribbean Sea, the Central North Pacific, and North Atlantic, the variability of the shape parameter should be accounted for when predicting the effects of SLR. Next, we discuss how the global GEV parameters characterize the increased frequency of flooding due to SLR (Figs 4 and 5 ). Although the behavior of the scale parameter is well known [as introduced by Hoozemans et al . 33 , and further explored in Hunter 41 and IPCC 3 ], these figures provide the first continuous, global demonstration of that behavior, as well as the first incorporation of wave-driven water levels. The factor of increase in frequency of the 50-yr extreme water-level event, f inc , and the future return period of the former 50-yr extreme water level due to SLR, \(50\,f_{\rm inc}^{-1}\) , are shown in Fig. 4 . For fixed SLR, decreasing values of the scale and shape parameters increase f inc and thus reduce the return period of the present 50-yr water level. The increase in f inc is larger in the Tropics (white lines on Fig. 4 ) than in the Extratropics. The results presented in Fig. 4 and Table 1 indicate that, with only 10 cm of SLR, the average factor of increase in flooding frequency, f inc , in the Tropics is approximately 25, and the former 50-yr event occurs every 4.9 years. Outside the Tropics, the average factor of increase is 5.5, and the former 50-yr event occurs every 10.9 years. Note that the results given in Table 1 do not exactly follow the reciprocal relationship between the increase in frequency ( f inc ) and the reduction in return period ( \(50\,f_{\rm inc}^{-1}\) ) because of the spatial averaging operation. Finally, we note that the estimated increase in flooding potential is purely due to SLR and not due to possible future changes in wave climate or storm patterns. The upper bound of the doubling SLR, μ 2x , (Fig.
5 ) is estimated as the upper limit of the 95% confidence intervals of the GEV parameter estimates using Eq. ( 4 ) in Methods. As shown in Fig. 5 , only 5–10 cm of SLR, expected under most projections to occur between 2030 and 2050 5 , doubles the flooding frequency in many regions, particularly in the Tropics, and this doubling would occur even more rapidly in areas where regional SLR exceeds the eustatic rate 12 . Less than 5 cm of SLR doubles the frequency of the 50-yr water level in the tropical Atlantic and northwestern Indian Ocean. The maps of increased flooding potential (Figs 4 and 5 ) suggest a dire future for the top 20 cities (by GDP) vulnerable to coastal flooding due to SLR 47 , and for many wave-exposed cities such as Mumbai, Kochi, Grande Vitoria, and Abidjan, which may be significantly affected by only 5 cm of SLR. Less than 10 cm of SLR doubles the flooding potential over much of the Indian Ocean, the south Atlantic, and the tropical Pacific. Only 10 cm of SLR doubles the flooding potential in high-latitude regions with small shape parameters, notably the North American west coast (including the major population centers Vancouver, Seattle, San Francisco, and Los Angeles), and the European Atlantic coast. The only regions where 15 cm of SLR does not double the flooding potential are regions with large shape parameters (likely influenced by tropical storm tracks): the mid-latitudes of the northwestern Pacific below Japan, the mid-latitudes of the northwestern Atlantic (the U.S. east coast, Gulf of Mexico, and Caribbean Sea), and the southwest tropical Pacific encompassing Fiji and New Caledonia (discussed below). The Tropics experience limited water-level variance due to consistently smaller wave heights (due to latitudinal gradients in storm activity) and smaller tide ranges (due to the presence of tidal amphidromes) throughout the region. Consequently, SLR represents a larger percentage of the water-level variance as explained in Fig. 2 and Methods. The mid-latitudes of the northwestern Pacific and the northwestern Atlantic experience smaller increases in extreme water-level frequency due to large values of the scale and shape parameter, respectively. Notably, the mid-latitudes of the northwestern Pacific below Japan experience large values of the scale parameter without correspondingly large values of the location parameter as in most of the north Pacific and north Atlantic, possibly due to the consistency of tropical storms in the region. The mid-latitudes of the northwestern Atlantic (e.g., the U.S. east coast, Gulf of Mexico, and Caribbean Sea), on the other hand, have elevated values of the shape parameter due to the intermittent occurrence of tropical cyclones, which correspond to elevated probabilities of large extremes rather than bounded extremes. This suggests that although the continued and accelerating impacts of SLR-driven nuisance flooding are a major concern in many of these areas 16 , the rare occurrence of extreme events (e.g., hurricanes) – and not SLR – will remain the dominant hazard on wave-exposed coastlines in the lower mid-latitudes of the western Pacific and Atlantic for several decades. Conclusions Regions with limited variability in extreme water levels, such as the Tropics, will experience greater increases in flooding frequency due to SLR than regions with significant water-level variability, e.g., the Extratropics. Small amounts of SLR, e.g., 5–10 cm, may more than double the frequency of extreme water-level events in the Tropics as early as 2030.
This is an especially critical finding as numerous low-lying island nations in the Tropics are particularly vulnerable to flooding from storms today, and a significant increase in flooding frequency with climate change will further challenge the very existence and sustainability of these coastal communities across the globe 48 . Methods Generalized Extreme Value (GEV) distribution The cumulative distribution function (CDF) of the Generalized Extreme Value (GEV) distribution is given by $$F(x;\mu,\sigma,k)=\begin{cases}\exp\left[-\left(1+k\,\frac{x-\mu}{\sigma}\right)^{-1/k}\right] & \text{for } k \ne 0\\ \exp\left[-e^{-(x-\mu)/\sigma}\right] & \text{for } k = 0\end{cases}$$ (1) where F is the probability that water level x will not be exceeded in any one-year period, and μ , σ , and k are the location, scale, and shape parameters, respectively 26 . The GEV distribution includes as special cases three families of extreme value distributions: Gumbel (type I), Fréchet (type II) and Weibull (type III), corresponding to values of the shape parameter k = 0, k > 0, and k < 0, respectively. Depending on the value of the shape parameter, k , the support of F ( x ) is either the entire real axis when k = 0 or \(\{x : 1 + k(x-\mu)/\sigma > 0\}\) when k ≠ 0 26 . From Eq. ( 1 ), the exceedance probability distribution, i.e., the probability that water level x is exceeded in any one-year interval, is E = 1 − F . Thus E ( x ) is the expected frequency (with units of years −1 ) of events exceeding x . The return period, T R , or expected time-interval between events of level x or greater is therefore $$T_R = 1/E(x),$$ (2) with units of years. For example, a 100-year event has an exceedance probability of 0.01, that is, a 1% chance of occurring in any year. Although return period carries exactly the same information as exceedance probability, it is often more intuitive. The factor of increase in exceedance probability for SLR μ SL > 0 relative to a baseline ( μ SL = 0) is given by $$f_{\rm inc}(x;\mu,\mu_{\rm SL},\sigma,k)=\frac{E(x;\mu+\mu_{\rm SL},\sigma,k)}{E(x;\mu,\sigma,k)},$$ (3) and the factor of decrease in return period is \(f_{\rm inc}^{-1}\) . For example, for the 50-yr event, \(T_R(x;\mu,\sigma,k) = 50\) years, hence the future return period of the former 50-yr water-level elevation is \(50\,f_{\rm inc}^{-1}\) as shown in Fig. 4B,D and F . Finally, we reframe the extreme value analysis to determine the amount of SLR leading to a doubling in exceedance of a particular water-level elevation. Note that in Fig. 2 , the SLR leading to a 4x increase in probability of the former 100-yr event (e.g., the 25-yr event with +1.0 m of SLR) is simply the difference between the 100-yr water level, \(x(T_R = 100;\mu,\sigma,k)\) , and the 25-yr water level, \(x(T_R = 25;\mu,\sigma,k)\) , of the unaltered distribution. Thus, the doubling SLR is given by $$\mu_{2\rm x}(T_R) = x(T_R;\mu,\sigma,k) - x(\tfrac{1}{2}T_R;\mu,\sigma,k)$$ (4) For the example shown in Fig. 5 , we use T R = 50 years. Note that the magnitude of μ 2x in Eq. ( 4 ) and Fig. 5 is controlled by the gradient of the return time function x ( T R ), as explained in Fig. 2B , and that this gradient is controlled by the scale and shape parameters. For low-gradient return time functions, the difference in x for the 50 and 25-yr return times is small, and in Fig. 5 the gradient is low for all levels exceeding that of the 10-yr event.
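A minimal numerical illustration of Eq. ( 4 ), assuming illustrative parameter values (the conversion c = −k is again needed for SciPy):

from scipy.stats import genextreme

def doubling_slr(mu, sigma, k, T_R=50.0):
    # Eq. (4): mu_2x = x(T_R) - x(T_R / 2), where x(T) is the level
    # whose annual exceedance probability is 1/T.
    x = lambda T: genextreme.isf(1.0 / T, -k, loc=mu, scale=sigma)
    return x(T_R) - x(T_R / 2.0)

# Illustrative tropics-like parameters (small sigma, k near 0): only
# a few centimetres of SLR double the exceedance of the 50-yr level.
print(doubling_slr(mu=1.0, sigma=0.05, k=0.0))   # ~0.035 m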
Application Well-validated global tide 45 , wave 43 , and storm surge 44 reanalysis models, each with different spatial and temporal resolutions, are interpolated onto a consistent 1° × 1° grid with hourly time resolution, and their water-level components are summed to provide a time series of total water level (TWL). In the present approach, we ignore mean sea-level anomalies (MSLA) due to seasonal effects and climate cycles (e.g., El Niño), which, for example, can raise sea level by more than 20 cm along the US west coast 11 , yet are typically less than 20 cm over much of the globe. Large-scale storm surge due to extratropical storms is included in the analysis, but the coarse resolution of the water-level model 44 precludes simulation of large, spatially isolated hurricane storm surge. On the other hand, the wave fields emanating from hurricanes and tropical cyclones have considerably larger spatial extents and, therefore, are well resolved by the wave model 43 apart from the near-field generation regions. We limit the time scales considered in our investigation due to the availability of only 21 years of coincident data for waves, tides, and storm surge: extrapolation of 21 years of data to predict 100-year and longer return period events is often problematic. Hourly time series of tidal water level are computed from 13 harmonic constituents provided by the TPXO tidal inversion model 45 with native resolution of 0.25° × 0.25° linearly interpolated onto a global grid of 1° × 1°. Time series of wave setup are estimated using the empirical relationship for the 2% exceedance runup on dissipative beaches 8 $$R_{\rm setup} = 0.016\sqrt{H_0 L_0},$$ (5) where H 0 and L 0 are the deep-water wave height and wavelength, respectively. We exclude wave swash, the time-varying components of wave runup at incident and infragravity frequencies, because of the large uncertainties associated with the estimation of swash magnitude. For example, wave swash is sensitive to local geological characteristics, notably the beach slope. Wave swash is a time-dependent process, which may or may not affect persistent flood levels. In certain locations, wave swash can significantly contribute to persistent coastal flooding via overtopping of seawalls. Therefore, we include the contribution of wave swash to TWL in Extended Data Figures 5 , 6 and 7 , which depict the same analyses shown in Figs 3 , 4 and 5 (which do not include wave swash). In Extended Data Figures 5 , 6 and 7 , the magnitude of the 2% exceedance wave swash is estimated using the empirical relationship for dissipative beaches 8 given by $$R_{\rm swash} = 0.027\sqrt{H_0 L_0}$$ (6) which is approximately 1.69 times larger than the wave setup component, Eq. ( 5 ). We note that dissipative beach conditions are assumed for the wave runup components in Eqs ( 5 ) and ( 6 ) in order to avoid the dependence on beach slope. Time series of H 0 and wave period ( T ) are obtained via the hourly 1° × 1.5° Global Ocean Wave (GOW) reanalysis 43 , and linearly interpolated onto a 1° × 1° grid. The time series of wavelength \(L_0 = gT^2/(2\pi)\) is calculated using linear wave theory from the time series of wave period. Time series of storm surge are obtained from the Mog2D barotropic model 44 with native resolution of 0.25° × 0.25° at 6-hour intervals, interpolated to an hourly dataset with 1° × 1° resolution.
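Eqs ( 5 ) and ( 6 ), together with the deep-water wavelength relation, are straightforward to evaluate; a small Python sketch with hypothetical wave conditions:

import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def setup_and_swash(H0, T):
    # Eqs (5)-(6): 2%-exceedance wave setup and swash for dissipative
    # beaches, with deep-water wavelength L0 = g T^2 / (2 pi).
    L0 = G * T**2 / (2.0 * np.pi)
    return 0.016 * np.sqrt(H0 * L0), 0.027 * np.sqrt(H0 * L0)

# e.g., 3 m waves with a 12 s period -> roughly 0.42 m of setup and
# 0.70 m of swash (hypothetical inputs).
print(setup_and_swash(3.0, 12.0))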
The resulting hourly time series of wave setup, storm surge, and tidal water level for each 1° × 1° grid cell are summed to produce an hourly time series of total water level from 1993–2013. Nonlinear interactions between tide, surge, and wave-driven water levels are not accounted for using this approach. However, processes such as tide-surge interactions may be important in coastal regions around the globe, particularly those adjacent to continental shelves or shallow bathymetry 49 . In general, tides provide the dominant contribution (51% on average) to the total water level (see Extended Data Fig. 3 ). However, when wave swash is included, wave runup (i.e., wave setup + wave swash) provides the dominant contribution (66% on average) to the total water level (see Extended Data Fig. 8 ). Next, GEV distributions are fitted to the top three ( r = 3) annual maxima ( n = 63) of the 21-year time series of total water level at each grid point to obtain spatially-varying estimates of the parameters μ , σ , and k . This approach, called the r -largest order statistic model, is consistent with the GEV distribution for block maxima 26 . To avoid the case where the r -highest values were taken from successive hours, a minimum peak separation criterion of 12 hours was applied. This criterion ensures that the block maxima are independent as required by the r -largest order statistic model 26 . The spatial variability of the GEV parameters is smoothed using a penalized least-squares method 50 . Data on the GEV parameter estimates and confidence intervals are available online (see “GEV_data.xlsx”). The GEV parameters μ , σ , and k control the factor of increase f inc and the future 50-yr return period \(50\,f_{\rm inc}^{-1}\) based on Eq. ( 3 ), for different values of SLR and event level x . Here we set x to be the 50-yr water-level event; however, the behavior is consistent across a range of extreme values for x , particularly those exceeding the 10-yr water level as noted above. Finally, we calculate the sea-level rise, μ 2x , that doubles the exceedance of the former 50-yr water-level elevation based on Eq. ( 4 ). To account for the uncertainty in the GEV parameter estimates, a Monte Carlo simulation with 100,000 realizations is applied for each grid point. Each realization generates random values of μ , σ , and k based on the 95% confidence intervals arising from the maximum likelihood estimates and applies Eq. ( 4 ) to calculate μ 2x . Next, the upper bound of the doubling sea level (Fig. 5 ) is calculated as the 95% cumulative probability (5% exceedance probability) for the empirical distribution of μ 2x . Figure 5 shows the upper end of the 95% confidence level for the SLR that will double (or more than double) the frequency of the 50-yr water-level event.
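A sketch of this Monte Carlo step, under the assumption (ours, not stated in the paper) that parameters can be drawn from normal distributions matched to their 95% confidence intervals:

import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)

def mu2x_upper_bound(mu_ci, sigma_ci, k_ci, T_R=50.0, n=100000):
    # Draw GEV parameters consistent with their 95% confidence
    # intervals (normal approximation, half-width = 1.96 sd; the
    # paper does not specify the sampling scheme), evaluate Eq. (4)
    # for each draw, and report the 95th percentile of mu_2x.
    def draw(lo_hi):
        mean = 0.5 * (lo_hi[0] + lo_hi[1])
        sd = (lo_hi[1] - lo_hi[0]) / (2.0 * 1.96)
        return rng.normal(mean, sd, n)
    mu, k = draw(mu_ci), draw(k_ci)
    sigma = np.abs(draw(sigma_ci))    # keep the scale positive
    x = lambda T: genextreme.isf(1.0 / T, -k, loc=mu, scale=sigma)
    return np.percentile(x(T_R) - x(T_R / 2.0), 95)

# Hypothetical confidence intervals for one grid cell:
print(mu2x_upper_bound((0.95, 1.05), (0.04, 0.06), (-0.05, 0.05)))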
Rising sea levels driven by global warming are on track to dramatically boost the frequency of coastal flooding worldwide by mid-century, especially in tropical regions, researchers said Thursday. A 10-to-20 centimetre (four-to-eight inch) jump in the global ocean watermark by 2050—a conservative forecast—would double flood risk in high-latitude regions, they reported in the journal Scientific Reports. Major cities along the North American seaboard such as Vancouver, Seattle, San Francisco and Los Angeles, along with the European Atlantic coast, would be highly exposed, they found. But it would only take half as big a jump in ocean levels to double the number of serious flooding incidents in the tropics, including along highly populated river deltas in Asia and Africa. Even at the low end of this sea rise spectrum, Mumbai, Kochi and Abidjan and many other cities would be significantly affected. "We are 95 percent confident that an added 5-to-10 centimetres will more than double the frequency of flooding in the tropics," lead author Sean Vitousek, a climate scientist at the University of Illinois at Chicago, told AFP. Small island states, already vulnerable to flooding, would fare even worse, he added. "An increase in flooding frequency with climate change will challenge the very existence and sustainability of these coastal communities across the globe." Coastal flooding is caused by severe storms, and is made worse when large waves, storm surge and high tides converge. Hurricane Sandy in the United States (2012), which caused tens of billions of dollars in damage, and Typhoon Haiyan in the Philippines (2013), which left more than 7,000 dead or missing, both saw devastating flooding. Rising seas—caused by the expansion of warming ocean water and runoff from melting ice sheets and glaciers—are also a contributing factor. Sea level 'wild card' But up to now, global estimates of future coastal flooding have not adequately taken into account the role of waves, Vitousek said. "Most of the data used in earlier studies comes from tidal gauge stations, which are in harbours and protected areas," he explained. "They record extreme tide and storm surges, but not waves." To make up for the lack of observational data, Vitousek and his colleagues used computer modelling and a statistical method called extreme value theory. "We asked the question: with waves factored in, how much sea level rise will it take to double the frequency of flooding?" Not much, it turned out. Sea levels are currently rising by three to four millimetres (0.10 to 0.15 inches) a year, but the pace has picked up by about 30 percent over the last decade. It could accelerate even more as continent-sized ice blocs near the poles continue to shed mass, especially in Antarctica, which Vitousek described as the sea level "wild card." If oceans go up 25 centimetres by mid-century, "flood levels that occur every 50 years in the tropics would be happening every year or more," he said.
The US National Oceanic and Atmospheric Administration (NOAA) predicts global average sea level will rise by as much as 2.5 metres (98 inches) by 2100. Global average temperatures have increased by one degree Celsius (1.8 degrees Fahrenheit) since the mid-19th century, with most of that happening in the last 70 years. The 196-nation Paris Agreement, inked in 2015, calls for capping global warming at well under 2C (3.6F), a goal described by climate scientists as extremely daunting.
nature.com/articles/doi:10.1038/s41598-017-01362-7
Medicine
Neuroimaging categorizes four depression subtypes
Andrew T Drysdale et al. Resting-state connectivity biomarkers define neurophysiological subtypes of depression, Nature Medicine (2016). DOI: 10.1038/nm.4246 Journal information: Nature Medicine
http://dx.doi.org/10.1038/nm.4246
https://medicalxpress.com/news/2016-12-neuroimaging-categorizes-depression-subtypes.html
Abstract Biomarkers have transformed modern medicine but remain largely elusive in psychiatry, partly because there is a weak correspondence between diagnostic labels and their neurobiological substrates. Like other neuropsychiatric disorders, depression is not a unitary disease, but rather a heterogeneous syndrome that encompasses varied, co-occurring symptoms and divergent responses to treatment. By using functional magnetic resonance imaging (fMRI) in a large multisite sample ( n = 1,188), we show here that patients with depression can be subdivided into four neurophysiological subtypes ('biotypes') defined by distinct patterns of dysfunctional connectivity in limbic and frontostriatal networks. Clustering patients on this basis enabled the development of diagnostic classifiers (biomarkers) with high (82–93%) sensitivity and specificity for depression subtypes in multisite validation ( n = 711) and out-of-sample replication ( n = 477) data sets. These biotypes cannot be differentiated solely on the basis of clinical features, but they are associated with differing clinical-symptom profiles. They also predict responsiveness to transcranial magnetic stimulation therapy ( n = 154). Our results define novel subtypes of depression that transcend current diagnostic boundaries and may be useful for identifying the individuals who are most likely to benefit from targeted neurostimulation therapies. Main Depression is a heterogeneous clinical syndrome that is diagnosed when a patient reports at least five of nine symptoms. This allows for several hundred unique combinations of changes in mood, appetite, sleep, energy, cognition and motor activity. Such remarkable heterogeneity reflects the consensus view that there are multiple forms of depression, but their neurobiological basis remains poorly understood 1 , 2 . So far, most efforts to characterize depression subtypes and develop diagnostic biomarkers have begun by identifying clusters of symptoms that tend to co-occur, and by then testing for neurophysiological correlates. These pioneering studies have defined atypical, melancholic, seasonal and agitated subtypes of depression associated with characteristic changes in neuroendocrine activity, circadian rhythms and other potential biomarkers 3 , 4 , 5 . Still, the association between clinical subtypes and their biological substrates is inconsistent and variable at the individual level, and unlike diagnostic biomarkers in other areas of medicine, they have not yet proven useful for differentiating individual patients from healthy controls or for reliably predicting treatment response at the individual level. An alternative to subtyping patients on the basis of co-occurring clinical symptoms is to identify neurophysiological subtypes, or biotypes, by clustering subjects according to shared signatures of brain dysfunction 6 . This type of approach has already begun to yield insights into how differing biological mechanisms may give rise to overlapping, heterogeneous clinical presentations of psychotic disorders 6 , 7 . Neuroimaging biomarkers of abnormal brain function have proven utility in the assessment of pain 8 and have also shown promise for depression, for both the prediction of treatment response 9 , 10 , 11 , 12 , 13 and treatment selection 14 . Resting-state fMRI (rsfMRI) is an especially useful modality because it can be used easily in diverse patient populations to quantify functional network connectivity in terms of correlated, spontaneous MR signal fluctuations. 
Depression is associated with dysfunction and abnormal functional connectivity in frontostriatal and limbic brain networks 15 , 16 , 17 , 18 , 19 , 20 , in accordance with morphological and synaptic changes in chronic stress models in rodents 21 , 22 , 23 , 24 . These studies raise the intriguing possibility that fMRI measures of connectivity could be leveraged to identify novel subtypes of depression with stronger neurobiological correlates that predict treatment responsiveness. To this end, we developed a method for defining depression subtypes by clustering subjects according to distinct, whole-brain patterns of abnormal functional connectivity in resting-state networks, unbiased by assumptions about the involvement of particular brain regions, and tested it in a large, multisite data set. Our analyses revealed four biotypes that were defined by homogeneous patterns of dysfunctional connectivity in frontostriatal and limbic networks, and that could be diagnosed with high sensitivity and specificity in individual subjects. Importantly, these biotypes were also prognostically informative, predicting which patients responded to repetitive transcranial magnetic stimulation (TMS), a targeted neurostimulation therapy. Results Frontostriatal and limbic connectivity define four depression biotypes We began by designing and implementing a preprocessing procedure (Online Methods ) to control for motion-, scanner- and age-related effects in a multisite data set that comprised rsfMRI scans for 711 subjects (the 'training data set', n = 333 patients with depression; n = 378 healthy controls). No subjects had comorbid substance-abuse disorders, and patients and controls were matched for age and sex. Data that support our approach to controlling for motion-related blood-oxygen-level-dependent (BOLD) signal effects, a particularly important source of rsfMRI artifact 25 , 26 , 27 , are presented in Supplementary Figure 1. After co-registering the functional volumes to a common (Montreal Neurological Institute (MNI)) space, we applied an extensively validated parcellation system 28 to delineate 258 functional network nodes that spanned the whole brain and had stable signals across all sites and scans in this data set ( Fig. 1a ). Next, we extracted BOLD signal residual time series and calculated correlations between each pair of nodes, which provided an unbiased estimate of the whole-brain architecture of functional connectivity in each subject ( Fig. 1b ). Figure 1: Canonical correlation analysis (CCA) and hierarchical clustering define four connectivity-based biotypes of depression. ( a ) Data analysis schematic and workflow. After preprocessing, BOLD signal time series were extracted from 258 spherical regions of interest (ROIs) distributed across the cortex and subcortical structures. The schematics (top) show lateral (left) and medial (right) views of right-hemisphere ROIs projected onto an inflated cortical surface and colored by functional network (lower left). Left-hemisphere ROIs (data not shown) were similar. For each subject, whole-brain functional-connectivity matrices were generated by calculating pairwise BOLD signal correlations between all ROIs, as in this example of correlated signals ( r 2 = 0.88) for DLPFC (solid line) and PPC (dashed line) nodes of the FPTC network in a representative subject. ( b ) Whole-brain, 258 × 258 functional-connectivity matrix averaged across all healthy controls ( n = 378 subjects). z = Fisher-transformed correlation coefficient.
( c , d ) CCA was used to define a low-dimensional representation of depression-related connectivity features and identified an “anhedonia-related” component (canonical variate; c ) and an “anxiety-related” component ( d ), represented by linear combinations of connectivity features that were correlated with linear combinations of symptoms. The scatterplots in c and d illustrate the correlation between low-dimensional connectivity scores and low-dimensional clinical scores for the anhedonia-related ( r 2 = 0.91) and anxiety-related components ( r 2 = 0.95), respectively ( P < 0.00001, n = 220 patients with depression). To the left of each scatterplot, clinical score loadings (i.e., the Pearson correlation coefficients between specific symptoms and the anhedonia- or anxiety-related clinical score (canonical variate)) are depicted for those symptoms with the strongest loadings (HAMD item numbers indicated in superscript; for all loadings on all symptoms, see Supplementary Fig. 2 ). Below each scatterplot, connectivity score loadings are summarized by depicting the neuroanatomical distribution of the 25 ROIs (top 10%) that were most highly correlated with each component (summed across all significantly correlated connectivity features for a given ROI), colored by network, as in a . Projections to the medial wall map are for both left- and right-hemisphere ROIs. ( e ) Hierarchical clustering analysis. The height of each linkage in the dendrogram represents the distance between the clusters joined by that link. For reference, the dashed line denotes 20 times the mean distance between pairs of subjects within a cluster. For analyses of additional cluster solutions and further discussion, see Supplementary Figure 3 . ( f ) Scatterplot for four clusters of subjects along dimensions of anhedonia- and anxiety-related connectivity. Gray data points indicate subjects with ambiguous cluster identities (edge cases, cluster silhouette values < 0; n = 15, or 6.8% of all subjects). ACC, anterior cingulate cortex; amyg, amygdala; antPFC, anterior prefrontal cortex; a.u., arbitrary units; AV, auditory/visual networks; CBL, cerebellum; COTC, cingulo-opercular task-control network; D/VAN, dorsal/ventral attention network; DLPFC, dorsolateral prefrontal cortex; DMN, default-mode network; DMPFC, dorsomedial prefrontal cortex; FPTC, frontoparietal task-control network; GP, globus pallidus; LIMB, limbic; MR, memory retrieval network; NAcc, nucleus accumbens; OFC, orbitofrontal cortex; PPC, posterior parietal cortex; precun, precuneus; sgACC, subgenual anterior cingulate cortex; SS1, primary somatosensory cortex; SN, salience network; SSM, somatosensory/motor networks; subC, subcortical; thal, thalamus; vHC, ventral hippocampus; VLPFC, ventrolateral prefrontal cortex; VMPFC, ventromedial prefrontal cortex; vStr, ventral striatum; n.s., not significant. See Supplementary Table 4 for MNI coordinates for ROIs in b and c. Each correlation matrix comprised 33,154 unique connectivity features, which thus necessitated a protocol for selecting a subset of relevant, nonredundant connectivity features for use in clustering. We reasoned that biologically meaningful depression subtypes would be best characterized by a subset of connectivity features that were significantly correlated with depressive symptoms.
Therefore, to select connectivity features for use in clustering, we used canonical correlation analysis (Online Methods ) to define a low-dimensional representation of connectivity features that were associated with weighted combinations of clinical symptoms, as quantified by the 17-item Hamilton Depression Rating Scale (HAMD), a commonly used, clinician-rated assessment. To ensure that cluster discovery was not confounded by site-related differences in subject recruitment criteria or by other unidentified variables, the cluster-discovery analysis was restricted to a subset of patients (the 'cluster-discovery subset', n = 220 of the 333 patients with depression) from two sites with identical inclusion and exclusion criteria and statistically equivalent depression-symptom scores (see Supplementary Tables 1–3 for details). This analysis identified linear combinations of connectivity features (analogous to principal components) that predicted two distinct sets of depressive symptoms ( Fig. 1c,d ). The first connectivity component (canonical variate) defined a combination of predominantly frontostriatal and orbitofrontal connectivity features that were correlated with anhedonia and psychomotor retardation ( Fig. 1c , Supplementary Fig. 2 and Supplementary Table 4 ). The second component defined a distinct set of predominantly limbic connectivity features involving the amygdala, ventral hippocampus, ventral striatum, subgenual cingulate and lateral prefrontal control areas, and that was correlated with anxiety and insomnia ( Fig. 1d ). Thus, this empirical, data-driven approach to feature selection and dimensionality reduction identified two sets of functional connectivity features that were correlated with distinct clinical-symptom combinations. We then tested whether abnormalities in these connectivity feature sets tended to cluster in patient subgroups. Multiple statistical learning approaches are available for discovering notable structure in large data sets ('unsupervised learning'). Here we chose to use hierarchical clustering—a standard approach that has been used extensively in the biological sciences 29 , 30 —to discover clusters of patients, by assigning them to nested subgroups with similar patterns of connectivity (Online Methods ). This analysis revealed four patient clusters defined by distinct and relatively homogeneous patterns of connectivity along these two dimensions ( Fig. 1e,f ) and comprising 23.6%, 22.7%, 20.0% and 33.6% of the 220 patients with depression, respectively. This four-cluster solution was optimal for defining relatively homogeneous subgroups that were maximally dissimilar from each other (maximizing the ratio of between-cluster to within-cluster variance), while ensuring individual cluster sample sizes that provided sufficient statistical power to detect biologically meaningful differences ( Supplementary Fig. 3 ). Therefore, we focused our subsequent analyses on characterizing and validating these four putative subtypes of depression. Biotype-specific clinical profiles predicted by frontostriatal and limbic network dysfunction To understand the neurobiological basis of these biotypes, we began by testing for differences in the whole-brain architecture of functional connectivity between patients ( n = 220) and age-, sex- and site-matched healthy controls ( n = 378) and for connectivity features that differed between patient subgroups. 
We observed a common neuroanatomical core of pathology underlying all four biotypes, encompassing the insula, orbitofrontal cortex, ventromedial prefrontal cortex and multiple subcortical areas ( Fig. 2a,b and Supplementary Table 5 )—all of which have been implicated in depression previously 15 , 16 , 17 , 18 , 19 , 20 . This led us to ask whether these connectivity features predicted the severity of 'core' symptoms that were present in almost all patients, regardless of biotype. We found that, of the 17 symptoms quantified by the HAMD, three were present in almost all patients with depression (>90%): mood (“feelings of sadness, hopelessness, helplessness,” 97.1%), anhedonia (96.7%) and anergia or fatigue (93.9%). Across subjects, regardless of biotype, abnormal connectivity in this shared neuroanatomical core (as indexed by the first principal component in a principal-component analysis (PCA)) was correlated with severity scores on these three symptoms ( Fig. 2c ; r = 0.72–0.82). Figure 2: Connectivity biomarkers define depression biotypes with distinct clinical profiles. ( a ) Neuroanatomical distribution of the 25 ROIs (top 10%) with the most abnormal connectivity features shared by all four biotypes (summed across all connectivity features for a given ROI), identified using Wilcoxon rank–sum tests to test for connectivity features that were significantly abnormal in all four biotypes relative to healthy controls in data set 1 ( n = 378). ROIs are colored by network, as in Figure 1a . ( b ) Heat maps depicting a pattern of abnormal connectivity ( P < 0.05, false-discovery rate (FDR) corrected) shared by all four biotypes for the top 50 most abnormal ROIs, colored on the basis of Wilcoxon rank–sum tests comparing patients and controls, as in a . Warm colors represent increases and cool colors decreases in depression as compared to controls. ( c ) Correlations ( r = 0.72–0.82, *** P < 0.001, Spearman) between shared abnormal connectivity features (as indexed by the first principal component (PC) of the features depicted in b ) and the severity of the core depressive symptoms. Insets depict the prevalence of each symptom. Symptom severity measures are z -scored with respect to controls and plotted as the mean for each quartile, ± s.e.m. ( d ) Neuroanatomical distribution of dysfunctional connectivity features that differed by biotype, as identified by Kruskal–Wallis analysis of variance (ANOVA) ( P < 0.05, FDR corrected), summarized for the 50 ROIs (top ∼ 20%) with the most biotype-specific connectivity features (i.e., the 50 ROIs with the largest test statistic summed across all connectivity features, showing cluster specificity at a threshold of P < 0.05, FDR corrected). Nodes (ROIs) are colored to indicate the biotype with the most abnormal connectivity features and scaled to indicate how many connectivity features exhibited significant effects of biotype. ( e ) Heat maps depicting biotype-specific patterns of abnormal connectivity for the functional nodes illustrated in d , plus selected limbic areas, colored as in b . Green boxes highlight corresponding areas in each matrix discussed in the main text. ( f ) Biotype-specific clinical profiles for the six depressive symptoms that varied most significantly by cluster ( P < 0.005, Kruskal–Wallis ANOVA). Symptom severities (HAMD) are z -scored with respect to the mean for all patients in the cluster-discovery set.
See Supplementary Figure 4 for all 17 HAMD items and for replication in data from subjects left out of the cluster-discovery set. ( g ) Boxplot of biotype differences in overall depression severity (total HAMD score), in which boxes denote the median and interquartile range (IQR) and whiskers the minimum and maximum values. In f and g , asterisk (*) indicates significant difference from mean symptom severity rating for all patients ( z = 0) at P < 0.05; error bars depict s.e.m.; n.s., not significant. Aud, auditory cortex; HC, hippocampus; lat PFC, lateral prefrontal cortex; lat OFC, lateral orbitofrontal cortex; MTG, middle temporal gyrus; PHC, parahippocampal cortex; PCC, posterior cingulate cortex; SSM, primary sensorimotor cortex (M1 or S1); STG, superior temporal gyrus; vis, visual cortex. Other abbreviations are as in Figure 1 . See Supplementary Table 5 for Montreal Neurological Institute coordinates for ROIs in a and d . In addition, we found that, superimposed on this shared pathological core, distinct patterns of abnormal functional connectivity differentiated the four biotypes ( Fig. 2d,e ) and were associated with specific clinical-symptom profiles ( Fig. 2f ). For example, as compared to controls, reduced connectivity in frontoamygdala networks, which regulate fear-related behavior and reappraisal of negative emotional stimuli 31 , 32 , 33 , was most severe in biotypes 1 and 4, which were characterized in part by increased anxiety. By contrast, hyperconnectivity in thalamic and frontostriatal networks, which support reward processing, adaptive motor control and action initiation 20 , 34 , 35 , 36 , 37 , was especially pronounced in biotypes 3 and 4 and was associated with increased anhedonia and psychomotor retardation. And reduced connectivity in anterior cingulate and orbitofrontal areas supporting motivation and incentive-salience evaluation 38 , 39 , 40 was most severe in biotypes 1 and 2, which were characterized partly by increased anergia and fatigue. Importantly, although the connectivity-based biotypes revealed in our analysis were associated with differences in clinical symptoms, they did not simply reflect differences in overall depression severity. Although overall depression severity scores were modestly but significantly decreased in biotype 2 as compared to the other three groups (by 15–16%), there were no significant differences in severity between biotypes 1, 3 and 4 ( Fig. 2g ; see Supplementary Fig. 4 for convergent findings in independent data acquired from subjects not included in the cluster-discovery analysis). Furthermore, they did not simply recapitulate subtypes derived strictly from clinical-symptom measures; whereas clustering according to functional connectivity features in random patient subsamples yielded stable clustering outcomes, clustering according to clinical symptoms yielded unstable outcomes with relatively low longitudinal stability over time ( Supplementary Fig. 5 ). Functional connectivity biomarkers for diagnosing depression biotypes We reasoned that, by reducing diagnostic heterogeneity, clustering could be leveraged to develop classifiers for the diagnosis of depression biotypes solely on the basis of fMRI measures of functional connectivity, which have shown promise in smaller-scale, single-site studies of depression 41 , 42 , 43 and other neuropsychiatric disorders 44 , 45 , but that have not performed as well when tested in multisite data sets 44 .
To this end, we developed classifiers for each depression biotype, testing and optimizing standard, extensively used methods for brain parcellation, subject clustering, feature selection and classification to identify empirically the most successful approach to clustering and classification ( Fig. 3a and Online Methods ). Throughout, clustering analysis was performed in the same cluster-discovery sample ( n = 220), whereas classification of patients versus controls was optimized in the full training data set ( n = 333 patients; n = 378 controls), and leave-one-out cross-validation and permutation testing were used to assess performance and significance ( Supplementary Fig. 6 ; for additional analysis confirming the stability of cluster assignments, see Supplementary Fig. 3d–f ). The optimization process yielded progressive improvements in classifier performance ( Fig. 3b ). Support-vector machine (SVM) classifiers (using linear kernel functions) performed best, yielding overall accuracy rates of up to 89.2% for the clusters characterized above, on the basis of connectivity features associated with the neuroanatomical areas summarized in Figure 3c–f . In cross-validation (leave-one-out), individual patients and healthy controls were diagnosed correctly with sensitivities of 84.1–90.9% and specificities of 84.1–92.5% ( Fig. 3g ). Figure 3: Functional connectivity biomarkers for diagnosing neurophysiological biotypes of depression. ( a ) Data analysis schematic and workflow (see Online Methods for additional details). ( b ) Optimization of diagnostic-classifier performance (accuracy) across the indicated combinations of methods for parcellation, clustering and classification. * P < 0.005, as estimated by permutation testing (Online Methods ). Double asterisk (**) indicates the best-performing protocol for parcellation, clustering and classification, and the focus of all subsequent analyses. ( c – f ) The neuroanatomical locations of the nodes with the most discriminating connectivity features are illustrated for each biotype for the four-cluster solution denoted by the double asterisk in b , colored and scaled by summing the results of Wilcoxon rank–sum tests of patients as compared to controls across all connectivity features associated with that node. Red represents increased and blue decreased functional connectivity in depression. ( g ) Sensitivity and specificity by biotype for the most successful classifiers identified in b (**). Error bars depict 95% confidence interval for the mean accuracy across all iterations of leave-one-out cross-validation. ( h ) Reproducibility of cluster assignments in a second fMRI scan ( n = 50) obtained 4–5 weeks after the initial scan (χ 2 = 112.7, P < 0.00001). ( i ) Classifier performance in an independent, out-of-sample replication data set ( n = 125 patients, 352 healthy controls). Cross-hatched bars depict classifier accuracy with more stringent data quality controls (Online Methods ) and excluding equivocal classification outcomes (the 10% of subjects with the lowest absolute SVM classification scores). Error bars depict 95% confidence intervals. To further validate the biotypes, we asked whether biotype diagnosis (cluster membership) was stable over time by testing these classifiers on a subset of patients ( n = 50) who received a second fMRI scan while they were actively experiencing depression, 4–6 weeks after the first scanning session. We found that, overall, 90.0% of subjects were assigned to the same biotype in both scans ( Fig.
3h ; χ 2 = 84.6, P < 0.0001). There were no significant between-group differences in age, medication usage or head motion during scanning, variables that may affect rsfMRI connectivity measures ( Supplementary Fig. 7 ). It is well established in the machine-learning literature that iterative training and cross-validation on the same data overestimate classifier performance 46 , and other studies have raised questions about the capacity for classifiers trained on one data set at a single site to generalize to data collected at multiple sites 44 . Therefore, we tested the most successful classifier for each depression biotype in an independent replication data set that consisted of 125 patients and 352 healthy controls acquired from 13 sites, including five sites that were not included in the original training data set ( Supplementary Table 3 ). To avoid overestimating diagnostic sensitivity, only one classifier—the classifier for the best-fitting biotype—was tested on each subject (Online Methods ). Overall, 86.2% of subjects in this independent, out-of-sample replication data set were correctly diagnosed, including >90% of patients in biotypes 3 and 4 ( Fig. 3i ; Supplementary Table 6 ). By implementing stricter data quality controls and by treating subjects with ambiguous classification outcomes (the lowest absolute SVM classification scores; Online Methods ) as equivocal test results, as is common practice for biomarkers in other areas of medicine, these accuracy rates exceeded 95%. Connectivity biomarkers predict responsiveness to rTMS Treatment-response prediction is an important element of validating biomarkers and establishing potential for clinical actionability, and neuroimaging measures have already shown promise for predicting treatment response in depression 9 , 10 , 11 , 12 , 13 , 14 . Repetitive transcranial magnetic stimulation (rTMS) is a noninvasive neurostimulation treatment for medication-resistant depression that modulates functional connectivity in cortical networks 47 , 48 , 49 . Although the left dorsolateral prefrontal cortex is the most common target for stimulation 48 , recent studies have demonstrated efficacy for a dorsomedial prefrontal (DMPFC) target 13 , which raises the intriguing possibility that biotype differences in dysfunctional connectivity at the DMPFC target site ( Fig. 2d ) may give rise to differing treatment outcomes. To test this, we asked first whether the four depression biotypes were differentially responsive to rTMS in 124 subjects who received repetitive high-frequency stimulation of the dorsomedial prefrontal cortex for 5 weeks, beginning shortly after their fMRI scan (Online Methods ). Treatment response varied significantly with cluster membership (χ 2 = 25.7, P = 1.1 × 10 –5 ). rTMS was most effective for patients in biotype 1, 82.5% of whom ( n = 33/40) improved significantly (>25% HAMD reduction), as compared to 61.0% for biotype 3 ( n = 25/41) and only 25.0% and 29.6% for biotypes 2 ( n = 4/16) and 4 ( n = 8/27), respectively (see Fig. 4a,b for full response rates (>50% reduction) and percentage change in depression severity by total HAMD score). Figure 4: Connectivity biomarkers predict differential antidepressant response to rTMS. ( a ) Differing response rates to repetitive transcranial magnetic stimulation (rTMS) of the dorsomedial prefrontal cortex across patient biotypes (clusters) in n = 124 subjects.
Response rate indicates percentage of subjects showing at least a partial clinical response to rTMS (χ 2 = 25.7, P = 1.1 × 10 −5 ), defined conventionally as >25% reduction in symptom severity by HAMD. Full response rates (>50% reduction by HAMD, cross-hatched bars) also varied by biotype (χ 2 = 22.9, P = 4.3 × 10 –5 ). ( b ) Boxplot of percent improvement in depression severity by biotype ( P = 1.79 × 10 –6 , Kruskal–Wallis ANOVA), in which boxes denote the median and interquartile range and whiskers the minimum and maximum up to 1.5 × the IQR, beyond which outliers are plotted individually. Percent improvement = (total HAMD score before treatment – total HAMD score after treatment)/total HAMD score before treatment. ** P = 0.00001–0.002 (Mann–Whitney), indicating significantly increased versus biotypes 2–4; * P = 0.007 (Mann–Whitney), indicating significantly increased versus biotype 4. ( c ) Functional connectivity differences in the DMPFC stimulation target in treatment responders versus nonresponders (Wilcoxon rank–sum tests, thresholded at P < 0.005). Warm colors represent increased and cool colors decreased functional connectivity in treatment responders as compared to nonresponders. The 12 ROIs depicted here were located within 3 cm of the putative DMPFC target site, estimated in a previously published report to be located at Talairach coordinates, x = 0, y = +30, z = +30 (ref. 13 ). ( d ) The neuroanatomical distribution of the most discriminating connectivity features for the comparison of rTMS responders versus non-responders, summarized by illustrating the locations of the 25 (top 10%) most discriminating ROIs indexed by summing across all significantly discriminating connectivity features and colored by functional network as in Figure 1a . The red arrows denote the rTMS target site in the two (lower) medial panels. ( e ) Heat maps depicting differences in functional connectivity in patients who subsequently improved after receiving rTMS ( n = 70), as compared to those who did not ( n = 54). ( f – i ) Confusion matrices depicting the performance of classifiers trained to identify subsequent treatment responders on the basis of the most discriminating connectivity features ( f ), connectivity features plus biotype diagnosis ( g ), clinical symptoms alone ( h ) or connectivity features plus biotype diagnosis in an independent replication set ( i , n = 30 patients with depression). NR, nonresponder; R, responder. ( j ) Summary of performance (overall accuracy) for classifiers in f – i . **significantly greater than clinical features alone ( P < 0.001) and connectivity features alone ( P = 0.003) by permutation testing; * P = 0.04 (significantly greater than clinical features alone by permutation testing). Cross-hatched bars depict classifier accuracy with more stringent data quality controls (Online Methods ) and excluding equivocal classification outcomes (the 10% of subjects with the lowest absolute SVM classification scores). Error bars depict s.e.m. in a and 95% confidence intervals in j . All abbreviations as in Figures 1 and 2 . See Supplementary Table 7 for MNI coordinates for ROIs in d . Next, we tested whether connectivity-based biotypes could be used to predict treatment response more effectively than clinical symptoms alone. To this end, we trained classifiers to differentiate responders and nonresponders using the same approach to feature selection, training and leave-one-out cross-validation.
The most discriminating connectivity features involved the dorsomedial prefrontal stimulation target and the left amygdala, left dorsolateral prefrontal cortex, bilateral orbitofrontal cortex and posterior cingulate cortex ( Fig. 4c ; Supplementary Table 7 ). Connectivity between other neuroanatomical areas that were not directly stimulated by the rTMS protocol—including the ventromedial prefrontal cortex, thalamus, nucleus accumbens and globus pallidus—also predicted treatment response ( Fig. 4d,e ). Connectivity features predicted individual differences in rTMS responsiveness with 78.3% accuracy in leave-one-out cross-validation ( Fig. 4f,j ). Classification according to connectivity features plus biotype diagnosis yielded the highest predictive accuracy (89.6%; Fig. 4g,j ). By contrast, clinical symptoms alone were not strong predictors of rTMS treatment responsiveness at an individual level. To test this, we trained classifiers to differentiate responders and nonresponders solely on the basis of clinical data. We found that clinical features (insomnia, anhedonia and psychomotor retardation by HAMD) were only modestly (62.6%) predictive of treatment responsiveness ( Fig. 4h,j ). Overall, classifiers based on connectivity features and biotype diagnosis significantly outperformed those based on clinical features alone ( Fig. 4j ; P < 0.005). Furthermore, just as we observed for diagnostic classifiers in Figure 3 , accuracy rates could be improved further (>94%, Fig. 4j ) by implementing stricter data quality controls and treating subjects with ambiguous classification outcomes as equivocal test results (Online Methods ). Finally, to further evaluate predictive validity, we tested the best-performing classifier, which used a combination of connectivity features and biotype diagnosis, in an independent replication set ( n = 30 subjects) and obtained comparable accuracy rates (87.5–92.6%; Fig. 4i,j ). By contrast, subtyping subjects on the basis of clinical symptoms yielded highly variable, longitudinally unstable clustering outcomes that failed to predict treatment response ( Supplementary Fig. 5 ). Depression biotypes transcend conventional diagnostic boundaries Collectively, these findings show that our current diagnostic system merges groups of patients with at least four distinct patterns of abnormal connectivity under a single diagnostic label—major depressive disorder. We concluded our study by testing whether the converse also occurs: that is, does our diagnostic system assign different diagnostic labels to patients who exhibit the same connectivity biotype? Motivated by studies identifying common neuroanatomical and functional changes that are shared across mood and anxiety disorders 50 , 51 , 52 , 53 , we first asked whether patients diagnosed with generalized anxiety disorder (GAD; n = 39) shared similar patterns of abnormal connectivity with one or more of the depression biotypes identified above. GAD was associated with widespread connectivity differences in resting-state networks ( Fig. 5a–c ) that overlapped significantly with those in depression (χ 2 = 5,457; P < 0.0001; Fig. 5a–c ). Next, to test whether subsets of patients with GAD resemble one or more depression biotypes, we applied the optimized classifiers developed above to the GAD cohort (Online Methods ).
Although none of the patients with GAD in this analysis met clinical criteria for a diagnosis of depression, 69.2% of them were nevertheless classified as belonging to one of the depression biotypes, and a majority of these (59.3%) were assigned to the anxiety-associated biotype 4 ( Fig. 5d ). Figure 5: Connectivity biomarkers of depression biotypes transcend diagnostic boundaries. ( a ) Abnormal connectivity features in patients with generalized anxiety disorder (GAD, n = 39) relative to healthy controls ( n = 378). In this matrix depicting the 50 neuroanatomical nodes with the most significantly different connectivity features (Wilcoxon rank–sum tests, summed across all 258 features), elements in warm and cool colors depict connectivity features that are significantly increased or decreased in GAD, respectively. ( b ) 30.2% of connectivity features that were significantly abnormal in GAD (threshold of P < 0.001 versus controls, Wilcoxon) were also abnormal in depression (χ 2 = 5,457, P < 0.0001). ( c ) The neuroanatomical distribution of the most discriminating connectivity features for the comparison of GAD patients versus controls. The nodes are colored and scaled by summing across all significantly abnormal connectivity features associated with that node. Red represents increased and blue decreased functional connectivity in GAD. ( d ) Distribution of biotype diagnoses in patients with GAD. ( e ) No significant biotype differences in anxiety symptom severity ( P = 0.692; Kruskal–Wallis ANOVA). BAI, Beck anxiety inventory. ( f , g ) Significantly ( P < 0.005, Kruskal–Wallis) elevated total depressive-symptom severity ( f ; BDI, Beck depression inventory) and anhedonia severity ( g ; BDI item 12) in GAD patients who tested positive for a depression biotype as compared to those who did not. * P < 0.01, † P = 0.064 in post hoc Mann–Whitney tests relative to “not depressed” group. ( h ) Distribution of biotype diagnoses in patients with schizophrenia ( n = 41). Error bars depict s.e.m. throughout. All abbreviations as in Figures 1 and 2 . Although anxiety symptom severity did not vary significantly by biotype classification ( Fig. 5e ), depressive symptom severity ( Fig. 5f ) and anhedonia ( Fig. 5g ) were significantly increased in patients with GAD who tested positive for one of the depression biotypes, as compared to patients with GAD who did not test positive. Furthermore, just as anhedonia was increased in patients with depression in biotypes 3 and 4, patients with GAD showed a similar trend ( Fig. 5g ; P < 0.05). Finally, to understand whether these classifiers were detecting pathological connectivity related specifically to mood and anxiety as opposed to nonspecific differences associated with psychiatric illness in general, we tested them on patients with schizophrenia ( n = 41), a disorder that is not thought to be closely related to unipolar depression. Just 9.8% of patients with schizophrenia tested positive for a depression biotype ( Fig. 5h ). Discussion Increasingly, diagnostic heterogeneity has emerged as a major obstacle to understanding the pathophysiology of mental illnesses and, in particular, depression. Although major depressive disorder—especially highly recurrent depression—is up to 45% heritable 54 , identifying genetic risk factors has proven challenging, even in extremely large genome-wide association studies 55 .
Likewise, efforts to develop new treatments have slowed, owing in part to a lack of physiological targets for the assessment of treatment efficacy and the selection of individuals who are most likely to benefit 56 . All of these challenges have been attributed in part to the fact that our diagnostic system assigns a single label to a syndrome that is not unitary and that might be caused by distinct pathological processes, which would thus require different treatments. Here we have defined four subtypes of depression associated with differing patterns of abnormal functional connectivity and distinct clinical-symptom profiles that transcend conventional diagnostic boundaries, and we have shown how neuroimaging biomarkers can be used to diagnose them. Our sample size, cross-validation in strictly independent samples and replication in independent data sets support these results. However, this is to our knowledge the first effort to apply this type of statistical clustering for the purpose of defining depression subtypes and diagnosing them in individual patients, so caution is warranted. Replication of our findings in additional, independent, prospectively acquired data sets will be crucial for addressing some of the limitations inherent in our retrospective, multisite sample. We designed a preprocessing scheme specifically to control for site- and scanner-related artifacts, and we performed our clustering analysis on data from just two sites with nearly identical acquisition protocols and recruitment criteria. Still, it will be essential to replicate these findings in an equally large sample acquired from a single site. Furthermore, more extensive and uniform clinical phenotyping—especially within the relatively broad domains of anhedonia and anxiety—will be crucial for further understanding how connectivity-based biotypes relate to distinct symptoms and behaviors. Importantly, we regard the four biotypes identified here as just one, initial solution to the problem of diagnostic heterogeneity in a system that relies primarily on the reporting of clinical symptoms. This solution is capable of predicting treatment response in a controlled, laboratory setting and advances our understanding of how heterogeneous symptom profiles in depression might be related to clustered patterns of dysfunctional connectivity. But alternative solutions to the problem of depression subtyping also exist, even in our 220-subject hierarchical clustering analysis, which was suggestive of additional subtypes nested within these four clusters. It is likely that relatively restrictive patient-recruitment criteria, the size of our cluster-discovery data set, and the ordinal nature of our clinical-symptom assessments were also limiting factors. For these reasons, clinical and neuroimaging data acquired from much larger populations will be useful for characterizing more complex associations between connectivity features and symptoms; for defining robust low-dimensional representations of this connectivity feature space; and for optimizing the mapping between diagnostic subtypes and their underlying neurobiology. It will also be crucial to evaluate how these biomarkers perform in real-world, clinical settings, in which clinical assessments and treatments might be administered with varying fidelity, which could potentially diminish diagnostic and prognostic performance. These caveats notwithstanding, our results have several potential applications. 
They may inform recent initiatives to rethink our system for diagnosing psychiatric disorders and investigating their neurophysiological and genetic basis, by stratifying subjects into subgroups defined by shared neurobiological substrates 1 . They might also guide optogenetic and other circuit neuroscience approaches to investigating how dysfunction in specific circuits contributes to depression- and anxiety-related behaviors in experimentally tractable animal models 57 , 58 , 59 . Finally, these biomarkers also have prognostic potential. Patients in biotype 1 were approximately three times more likely to benefit from TMS of the dorsomedial prefrontal cortex than those in biotypes 2 or 4, and together, biotype diagnosis and functional connectivity features could be leveraged to accurately differentiate treatment responders from nonresponders on an individual basis. Validating and adapting them for use in naturalistic clinical settings will be a key challenge, but our data are also consistent with other recent reports that highlight the potential of neuroimaging tools to predict treatment response 9 , 10 , 11 , 12 , 13 , 14 , a major priority for a condition in which most treatments are effective only after several months. Biomarkers have already transformed the diagnosis and management of cancer, diabetes, heart disease and even pain syndromes 8 , but they have proven more elusive for psychiatry. Our results define one approach for using neuroimaging biomarkers to delineate and diagnose novel subtypes of mental illness characterized by uniform neurobiological substrates. Methods Subjects. All analyses were conducted in one of two data sets, unless otherwise noted (see also 'Statistical analysis' section below for subject details for each analysis, organized by figure panel). Data set 1 ( n = 711 subjects, 333 patients and 378 controls) was used for all analyses, except those depicted in Figures 3i , 4i and 5 . That is, data set 1 was used to identify clusters (biotypes) of patients with distinct patterns of dysfunctional connectivity in resting-state networks, testing for neurobiological and clinical correlates of these biotypes, and for training and testing classifiers to diagnose them. To ensure that cluster discovery was not confounded by site-related differences in subject recruitment criteria or other unidentified variables, the cluster-discovery analysis ( Fig. 1 ) was restricted to a subset of patients in data set 1, the 'cluster-discovery set' ( n = 220 of the 333 patients), who were recruited and scanned from just two sites with identical inclusion and exclusion criteria. Subjects in the cluster-discovery set were adult patients meeting Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) criteria for (unipolar) major depressive disorder and seeking treatment for a currently active, nonpsychotic major depressive episode. They had a history of failure to respond to at least two antidepressant medication trials at adequate doses, including at least one during the current episode. Patients in the cluster-discovery set were excluded from enrollment if they had a currently active substance-use disorder, a psychotic disorder, bipolar depression, a history of seizures, unstable medical conditions, current pregnancy or other contraindications to MRI (for example, implanted devices, claustrophobia or head injury with loss of consciousness). 
As described in Supplementary Table 1 , subjects from the two sites included in the cluster-discovery set were matched for age, sex and depression severity (HAMD-17 total score). Supplementary Table 1 also describes medication status, co-morbid diagnoses and additional details about the scanning protocols for data acquired at these two sites. Classifier training, cross-validation and optimization were performed in the full data set 1, i.e., the 'training data set,' which included patients diagnosed with unipolar major depressive disorder and a currently active major depressive episode ( n = 333, 59.2% female, mean age = 40.6 years) and healthy control subjects without any history of a psychiatric condition ( n = 378, 57.7% female, mean age = 38.0 years). The patient and control groups did not differ significantly in age ( P = 0.189, Mann–Whitney) or sex (χ 2 = 0.61, P = 0.688). The patient scans were acquired at separate sites by five principal investigators (the two sites from the cluster-discovery set plus three additional sites). The control scans were acquired at these same five sites, as well as from seven additional sites that have provided unrestricted public access to their data through the 1000 Functional Connectomes Project. Inclusion and exclusion criteria were generally similar to those described above for the two sites in the cluster-discovery set, except that a history of treatment resistance was not a requirement. Exclusion criteria common to all sites were contraindications for MRI and a recent history of substance abuse or dependence. Other inclusion and exclusion criteria—and consequently, the presence of psychiatric co-morbidities and use of psychiatric medications—varied by site and are detailed in Supplementary Table 2. Clustering into connectivity biotypes was not related to medication history, age or head motion ( Supplementary Fig. 7 ). Additional demographic information for all sites in data set 1 is reported in Supplementary Table 3 . Data set 2 ( n = 477)—the 'replication data set'—was used to test the most successful classifier of each depression biotype in patients with active depression ( n = 125 from seven sites) and healthy controls ( n = 352 from 13 sites). Scans in data set 2 were acquired in separate studies, at a later date or were not initially available to us, and they were not used in any step of the cluster identification or classifier training procedure. Furthermore, five sites were unique to data set 2. Patients with depression at all sites in both data sets met DSM-IV criteria for a current major depressive episode (data set 2: n = 109 unipolar; n = 16 bipolar II), and healthy controls were subjects without any current or past history of a psychiatric or neurological condition. To test whether patterns of abnormal connectivity that were evident in clusters of patients with depression were also present in subsets of patients with other psychiatric disorders ( Fig. 5 ), we tested the same classifiers on patients meeting DSM-IV criteria for a diagnosis of generalized anxiety disorder (GAD, n = 39, 69.2% female, mean age = 32.4 years) or schizophrenia ( n = 41, 78.0% male, mean age = 38.2 years; no co-morbid mood disorders and no schizoaffective disorder). Data for the GAD subjects were acquired by one of the co-authors of this report (A.E.), and inclusion and exclusion criteria are described in Supplementary Table 2 (site: Stanford 1; PI: A. Etkin).
Data for the schizophrenia subjects were obtained through the 1000 Functional Connectomes Project, made publicly available by the Center of Biomedical Research Excellence in Brain Function and Mental Illness (PIs: J. Sui, J. Liu, C. Harenski, R. Thoma and C. Abbott). Inclusion criteria were a diagnosis of schizophrenia (but not schizoaffective disorder), as confirmed by the Structured Clinical Interview for DSM Disorders (SCID), and exclusion criteria were a history of neurological disorder, mental retardation, head trauma with loss of consciousness or substance abuse or dependence within the past 12 months. All subjects in all data sets provided informed consent, and all recruitment procedures and experimental protocols were approved by the Institutional Review Boards of the principal investigators' respective institutions (Weill Cornell Medical College, Stanford University, Toronto Western Hospital, Emory University and Harvard Medical School). Clinical measures. At all sites, initial screening interviews were conducted to determine eligibility to participate, and a trained clinician conducted a structured clinical interview (MINI or SCID) to confirm all psychiatric diagnoses and rule out exclusionary co-morbid conditions as defined in Supplementary Table 2 . In addition, specific clinical symptoms were evaluated using the Hamilton Rating Scale for Depression (HAMD; n = 312 patients; n = 65 healthy controls), the Beck depression inventory (BDI, n = 39 patients with GAD) and the Beck anxiety inventory (BAI; n = 39 patients with GAD). These assessments were used to test whether the depression biotypes were associated with specific clinical-symptom profiles. For details, see 'Clinical data analysis' section below. Magnetic resonance imaging (MRI) data acquisition. A resting-state functional MRI scan was obtained by using a T2*-weighted gradient echo spiral in–out sequence or a Z-SAGA sequence, yielding whole-brain coverage in all subjects. A high-resolution T1-weighted anatomical scan (MP–RAGE or SPGR) was obtained for brain parcellation and co-registration purposes. Specific scanning parameters varied by site. Most used a TR of ∼ 2 s, in-plane resolution of ∼ 3.5 mm, and obtained 150–180 volumes in ∼ 5–6 min. Detailed scanning parameters for each site are reported in Supplementary Table 1 and Supplementary Table 3. fMRI data analysis: preprocessing. All data sets were preprocessed using the Analysis of Functional Neuroimages (AFNI) software package. Prior to other preprocessing steps, framewise motion parameters were calculated by using AFNI's 3dvolreg function, owing to concerns that slice-time correction might lead to systematic underestimates of motion when this step is performed first. After estimating framewise motion parameters, preprocessing included standard procedures for slice-timing correction, spatial smoothing (with a 4-mm full-width at half-maximum Gaussian kernel), temporal bandpass filtering (0.01–0.1 Hz), linear and quadratic detrending and removal of nuisance signals related to head motion, physiological variables and local and global hardware artifacts. Functional data sets were co-registered to the corresponding high-resolution T1 anatomical images, and T1 anatomicals were transformed into the Montreal Neurological Institute (MNI) common space by using AFNI's 3dQwarp function to calculate and optimize a nonlinear transformation.
To reduce the number of interpolations performed on resting-state data, we combined the motion-correction, functional-to-anatomical and anatomical-to-MNI template alignments and applied them to functional scans in a single step. Motion correction was achieved using AFNI's 3dvolreg function. Motion artifact is increasingly recognized as an important potential confound in resting-state fMRI studies, especially those involving clinical populations, and can introduce systematic shifts in signal correlations that vary as a function of the distance separating two brain regions 25 , 26 , 27 . To balance the demands of noise reduction and data preservation, we censored volumes preceding or following any movement (framewise displacement (FD)) greater than 0.3 mm. These volumes were excluded from all further analysis steps, including nuisance regression. A small number of subjects (8.9%) were excluded from further analysis if the number of remaining volumes was insufficient for performing simultaneous nuisance signal regression and band-pass filtering as described below. (Note that descriptions of the number of subjects comprising each data set in the 'Subjects' section above and in the main text refer to subjects that were actually used in each analysis, after excluding scans because of motion contamination or poor signal quality, as defined below.) Next, nuisance signal regression and band-pass filtering were performed simultaneously, using only the volumes that survived motion censoring. This is because noise from high-motion volumes has been shown to contaminate other volumes, even if they are eventually omitted from final analyses 60 , 61 . Accordingly, the regression step included 12 motion parameters (roll, pitch, yaw, translation in three dimensions and their first derivatives); non-neuronal signals from eroded white matter and CSF masks; and regressors for temporal filtering. Finally, we used AFNI's ANATICOR function to eliminate local and global hardware artifacts 62 , 63 . After preprocessing, the residual time series files, co-registered to MNI space, were used for all subsequent analyses. A note on motion artifact. We selected a censoring threshold (FD > 0.3 mm) empirically based on analyses showing that it was sufficient to exclude the majority of excursions from so-called floor values in single-subject FD traces ( Supplementary Fig. 1 ), which have been associated with significant motion artifact, while preserving enough data to allow for stable estimates of signal correlations 25 , 26 , 27 . It is also worth noting that this threshold resembles commonly used thresholds (0.2–0.5 mm) in recently published reports (reviewed in ref. 64 ). However, we found that a small number of RSFC features (just 0.7% of the connectivity features that differentiated patients and controls, at a liberal threshold of P < 0.005, uncorrected) were significantly different in low- versus high-motion subjects after ANATICOR regression and censoring at 0.3 mm ( Supplementary Fig. 1d ). To further evaluate whether motion artifact affected cluster discovery and biotype diagnoses, we repeated the hierarchical clustering analysis depicted in Figure 1 after excluding the 0.7% of RSFC features that varied with motion at this liberal threshold ( P < 0.005). Overall, 99.1% of all subjects were assigned to the same cluster ( Supplementary Fig. 1h ).
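As a concrete illustration of the censoring rule described above, the sketch below (Python with NumPy) computes a Power-style framewise displacement — an assumed formula, since the exact FD computation is not restated here — flags volumes exceeding 0.3 mm together with their immediate neighbors, and fits nuisance regressors on the surviving volumes only:

```python
import numpy as np

def framewise_displacement(motion, head_radius=50.0):
    # Assumed Power-style FD: sum of absolute frame-to-frame changes in the
    # six rigid-body parameters, with rotations (columns 0-2, in radians)
    # converted to mm of arc on a sphere of the given radius.
    d = np.abs(np.diff(motion, axis=0))
    d[:, :3] *= head_radius
    return np.concatenate([[0.0], d.sum(axis=1)])  # first volume: FD = 0

def keep_mask(fd, threshold=0.3):
    # Censor every volume with FD > threshold, plus the volumes immediately
    # preceding and following it, per the Methods.
    bad = fd > threshold
    spread = bad.copy()
    spread[:-1] |= bad[1:]   # volume preceding a high-motion frame
    spread[1:] |= bad[:-1]   # volume following a high-motion frame
    return ~spread

def clean_timeseries(bold, nuisance, keep):
    # Fit nuisance regressors (12 motion parameters, white-matter/CSF signals
    # and band-pass regressors) on uncensored volumes only, so high-motion
    # frames cannot contaminate the fit, then return the residuals.
    X = np.column_stack([np.ones(keep.sum()), nuisance[keep]])
    beta, *_ = np.linalg.lstsq(X, bold[keep], rcond=None)
    return bold[keep] - X @ beta

# Example with synthetic inputs: 180 volumes, 6 motion parameters, 258 ROIs.
rng = np.random.default_rng(0)
fd = framewise_displacement(rng.normal(0.0, 0.001, size=(180, 6)))
keep = keep_mask(fd)
residuals = clean_timeseries(rng.standard_normal((180, 258)),
                             rng.standard_normal((180, 20)), keep)
```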
To rule out the possibility that multivariate classifiers may have been influenced by the aggregation of subtle between-group differences in motion artifact that were undetectable by the mass univariate approach implemented in ref. 64 , we conducted additional analyses reported in Supplementary Figure 1i,j. The results indicate that our clustering and classification results were not biased substantially by motion. fMRI data analysis: parcellation and whole-brain connectivity estimation. The objective of this analysis was to extend conventional seed-based approaches to generate a whole-brain correlation matrix for each subject, quantifying functional connectivity in regions of interest spanning the entire brain in terms of correlated, spontaneous fluctuations in the resting-state BOLD signal. Most data sets were acquired in a native grid space of ∼ 3.5 × 3.5 × 5 mm, yielding ∼ 30,000 brain voxels and up to ∼ 4.5 × 10 8 unique, potential pairwise correlations. To increase computational tractability and biological interpretability, all analyses reported in the main text used an established and extensively validated functional parcellation system 28 to delineate functional network nodes (10-mm diameter spheres) spanning most cortical, subcortical and cerebellar areas. The originally published parcellation identified 264 nodes (ROIs). Here 13 ROIs that have hypothesized roles in depression-related pathology, but that are not represented in this 264-node parcellation, were added, including the left and right nucleus accumbens, subgenual anterior cingulate, head of the caudate nucleus, amygdala, ventral hippocampus, locus coeruleus, ventral tegmental area and raphe nucleus, for a total of 264 + 13 = 277 nodes. However, 19 of the 277 nodes—mostly cerebellar and inferior temporal areas—were excluded from further analyses owing to incomplete MRI volume coverage or because of inadequate signal (SNR < 100), as discussed in more detail below. Thus, the primary parcellation used in all analyses included 264 + 13 – 19 = 258 functional nodes. In addition, when optimizing the biomarkers developed in Figure 3 , we tested four strategies for parcellation: (i) the primary functional parcellation of Power and colleagues that is described above and is the focus of the analyses in the main text 28 ; (ii) a 'coarse voxelwise' parcellation strategy, in which a standard anatomical template brain (1 × 1 × 1–mm resolution in MNI space) was resampled to a 10 × 10 × 15–mm grid space. After excluding voxels (or portions of voxels) corresponding to white matter or CSF using masks derived from a segmentation of the original template brain into tissue classes (via AFNI's 3dSeg function), we were left with 945 ROIs spanning all cortical, subcortical and cerebellar gray matter; (iii) an anatomical parcellation that used the Freesurfer atlas developed by Desikan, Killiany and colleagues, which segments the brain into 68 gyral-based cortical ROIs and an additional 22 subcortical and cerebellar areas for a total of 90 anatomical regions of interest 65 ; (iv) finally, a second functional parcellation (in addition to the primary one), which used 90 cortical and subcortical ROIs defined by Shirer, Greicius and colleagues using independent-components analysis to identify brain voxels that exhibit correlated activity in association with one or more cognitive states (rest, episodic-memory retrieval, serial calculations or singing lyrics; see ref. 66 for details).
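As an aside on the signal-quality criterion mentioned above, the ROI-level screening lends itself to a brief sketch. Assuming temporal SNR is computed per voxel as the time-series mean divided by its standard deviation (the definition given in the next paragraph), the exclusion logic might look like this in Python:

```python
import numpy as np

def temporal_snr(roi_voxels):
    # Temporal SNR per voxel: mean of the MR signal over time divided by the
    # s.d. of the time series. roi_voxels: (volumes, voxels) for one ROI.
    return roi_voxels.mean(axis=0) / roi_voxels.std(axis=0)

def retained_rois(snr, snr_min=100.0, max_low_fraction=0.05):
    # Retain an ROI only if its SNR falls below snr_min in no more than 5%
    # of subjects (the criterion described in the following paragraph).
    # snr: (subjects, rois) array of per-subject, per-ROI mean SNR values.
    low = snr < snr_min
    return low.mean(axis=0) <= max_low_fraction  # boolean mask over ROIs
```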
The best results were obtained from the primary functional parcellation devised by Power and colleagues 28 , which was the focus of all other analyses. After preprocessing the resting-state fMRI data and parcellating the brain as described above, BOLD signal time series were extracted from each ROI by averaging across all voxels in that ROI, and a correlation matrix was calculated for each subject by using AFNI's 3dNetCorr function. However, before doing so, we took additional steps to control for scanner- and site-related differences that could potentially confound analyses of data pooled across multiple sites. First, we controlled for site-related differences in signal quality or scan coverage by excluding ROIs if the signal-to-noise ratio (SNR, the voxelwise mean of the magnetic resonance signal over time divided by the s.d. of the time series) was less than 100 in >5% of subjects. On this basis, we excluded 19 of the 277 ROIs in the primary functional parcellation, leaving the 258 ROIs described above for further analysis. Most excluded ROIs were located in the inferior cerebellum, which did not have consistent coverage across all sites, or on the ventral surface of the temporal lobe or the orbital surface of the frontal lobe, which tended to have lower SNR in some scans, likely owing to artifact at the interface with air sinuses. Second, for each subject, only voxels with SNR > 100 were used to calculate the mean BOLD signal time series for each ROI, to further control for local differences in signal quality on a per subject basis. And third, a small number of subjects (2.9%) was excluded from further analysis if the signal quality was low (SNR < 100) in any of the remaining 258 ROIs. Thus, after excluding these 19 ROIs and a small number of subjects with excessive head motion (8.9%) or poor signal quality (2.9%), we calculated 258 × 258–element correlation matrices for each of the remaining subjects ( n = 711 for data set 1; n = 477 for data set 2; see 'Subjects' above). To enable us to test hypotheses about functional connectivity differences in the depressed and control populations, we applied the Fisher z -transformation to each correlation coefficient. Next, we used multiple linear regression to further control for site- and age-related effects on functional connectivity by regressing the Fisher z -transformed correlation coefficients for each matrix element on subjects' ages and dummy variables for each site. The resulting residuals—comprising a 258 × 258–element matrix for each subject—were an estimate of the functional connectivity between each ROI and every other ROI, controlling for age effects and relative to other subjects whose data were acquired on the same scanner. Henceforth, we refer to these matrices of residuals as functional connectivity matrices. fMRI data analysis: canonical correlation analysis and clustering. To ensure that cluster discovery was not confounded by site-related differences in subject recruitment criteria or other unidentified variables, the cluster-discovery analysis was restricted to a subset of patients (the 'cluster-discovery set,' n = 220 of the 333 patients) from two sites with identical inclusion and exclusion criteria (see Supplementary Tables 1–3 for details). Each subject's 258 × 258–element correlation matrix contained 33,154 unique functional connectivity features, necessitating a protocol for selecting a subset of relevant, nonredundant connectivity features for use in clustering.
We reasoned that biologically meaningful depression subtypes would be best characterized by a low-dimensional representation of a subset of those 33,154 connectivity features that were significantly correlated with depressive symptoms. Therefore, to select a set of connectivity features for use in clustering, we (i) used Spearman's rank correlation coefficients to identify connectivity features that were significantly correlated ( P < 0.005) with severity scores for one or more of the 17 depressive symptoms, as indexed by individual item responses on the Hamilton Depression Rating Scale (HAMD-17), and then (ii) used canonical correlation analysis to define a low-dimensional representation of those connectivity features, in terms of linear combinations of connectivity features that were correlated with linear combinations of clinical symptoms. This empirical, data-driven approach to feature selection and dimensionality reduction identified two linear combinations of functional connectivity features (canonical variates) that were correlated with distinct clinical-symptom combinations, which we term “anhedonia-related connectivity features” and “anxiety-related connectivity features.” The results are depicted in Figure 1 , with additional details in Supplementary Figure 2 . Next, to assess whether these abnormalities were evenly distributed across patients or tended to cluster in subgroups, we used hierarchical clustering to assign subjects to nested subgroups with similar patterns of abnormal connectivity along these two dimensions. We calculated a dissimilarity matrix describing the Euclidean distance between every pair of subjects in this two-dimensional feature space, and then used Ward's minimum variance method to iteratively link pairs of subjects in closest proximity, forming progressively larger clusters in a hierarchical tree. These methods were implemented by using MATLAB's pdist , linkage , cluster and clusterdata functions. The height of each link in the resulting dendrogram ( Fig. 1d ) represents the distance between the clusters being linked. On this basis, we conservatively identified at least four clusters for which the distance between cluster centroids was at least 20 times the mean distance between pairs of subjects within a cluster. Additional potential clustering solutions were also evident, nested within these subgroups. However, this four-cluster solution was optimal for defining relatively homogeneous subgroups that were maximally dissimilar from each other (maximizing the ratio of between-cluster to within-cluster variance), while ensuring individual cluster sample sizes that provided sufficient statistical power to detect biologically meaningful differences between biotypes ( Supplementary Fig. 3 ). To construct the heat maps depicted in Figure 2 , we used Wilcoxon rank–sum tests to identify connectivity features that were significantly different in patients with depression from each cluster, as compared to all controls, and Kruskal–Wallis ANOVA to identify connectivity features that differed most between clusters. As described in the following section, we also investigated whether abnormal resting-state connectivity features could be used to diagnose these putative depression subtypes in individual subjects by training classifiers to detect them ( Fig. 3 ). 
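A compact sketch of this selection-then-clustering sequence is given below. It is a sketch under assumptions, not the original analysis code (which used MATLAB): resid and hamd stand for the residualized connectivity features and the 17 HAMD item scores of the 220 discovery-set patients.

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cross_decomposition import CCA

# (i) Keep features whose Spearman correlation with at least one of the
# 17 HAMD items reaches P < 0.005 (explicit loop; slow but transparent).
keep = np.zeros(resid.shape[1], dtype=bool)
for j in range(resid.shape[1]):
    for k in range(hamd.shape[1]):
        if spearmanr(resid[:, j], hamd[:, k]).pvalue < 0.005:
            keep[j] = True
            break

# (ii) Two canonical variates linking connectivity to symptom combinations.
cca = CCA(n_components=2).fit(resid[:, keep], hamd)
scores = cca.transform(resid[:, keep])        # (n_subjects, 2)

# Hierarchical clustering with Ward's minimum variance method,
# cut at the four-cluster solution.
Z = linkage(scores, method="ward")
labels = fcluster(Z, t=4, criterion="maxclust")
```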
In our efforts to optimize classifier performance, we compared the hierarchical clustering method described above with k-means clustering, as implemented by MATLAB's kmeans function, which assigns each subject to exactly one of k clusters on the basis of their squared Euclidean distance from the centroid of each cluster, iteratively assigning and reassigning subjects to clusters to minimize the within-cluster sum of squared subject-to-centroid distances. Classification: training and cross-validation of diagnostic classifiers for depression biotypes. In analyses depicted in Figure 3, we developed classifiers for diagnosing depression in subgroups of patients with similar patterns of abnormal functional connectivity in resting-state networks, testing and optimizing methods for brain parcellation and feature extraction, subject clustering, feature selection and classification to identify empirically the most successful approach. This optimization process was conducted exclusively in subjects from data set 1 (n = 711). As depicted in Figure 3a and in greater detail in Supplementary Figure 6, each optimization trial tested a combination of one of four methods for parcellation and feature extraction (coarse voxelwise parcellation, anatomical parcellation and two functional parcellations; see 'Parcellation' above); one of three methods for clustering (no clustering, k-means clustering or hierarchical clustering; see 'Clustering' above); and one of three methods for classification: logistic regression, support vector machine (SVM) classification or linear discriminant analysis (LDA). On each optimization trial, a given combination of methods was evaluated by iteratively training classifiers on a subset (the 'training subset') of the subjects in data set 1 and then testing them on the remaining subjects (the 'test subset') through leave-one-out cross-validation (LOOCV). As above, only the 220 patients in the two-site cluster-discovery set were used in the clustering analysis, whereas all 333 patients and 378 controls in data set 1 were eligible to be used in classification. Assigning left-out subjects to clusters. The 133 patients (n = 333 − 220 = 133) left out of the cluster-discovery set were assigned to one of the four clusters in a two-step process. First, the canonical coefficients estimated in the cluster-discovery set were used to calculate canonical variate (component) scores for the left-out subjects. Second, LDA classifiers trained on the cluster-discovery sample were used to assign left-out subjects to one of the four clusters. The same two-step process was used to assign test subjects to the best-fitting cluster for the leave-one-out cross-validation analyses described below. Classifier training. Classifier training was performed using the libsvm classification package 67, the SPSS Statistics package (IBM) or MATLAB classification functions (see schematic in Supplementary Fig. 6). Classifiers were trained to discriminate between patients with depression and healthy controls on the basis of a set of the most abnormal connectivity features, which were selected from the full set of all possible connectivity features (33,154 for the primary functional parcellation used in all other figures; 337,431 for the voxelwise parcellation; ∼4,000 for the anatomical and second functional parcellations). In preliminary analyses (data not shown), we found that the optimal number of features depended on the parcellation strategy and classifier method (a sketch of this selection step follows).
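One plausible reading of this feature-selection step, sketched under assumptions: abnormality is ranked by a two-sided rank-sum test within the training subset only, and train_X, train_y, test_X are hypothetical arrays for one cross-validation fold.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.svm import SVC

# Rank features by patient-vs-control abnormality within the training
# subset (Mann-Whitney U, equivalent to the Wilcoxon rank-sum test),
# then train a linear SVM on the top ~2,000 features.
pvals = np.array([mannwhitneyu(train_X[train_y == 1, j],
                               train_X[train_y == 0, j],
                               alternative="two-sided").pvalue
                  for j in range(train_X.shape[1])])
top = np.argsort(pvals)[:2000]
clf = SVC(kernel="linear").fit(train_X[:, top], train_y)
predicted = clf.predict(test_X[:, top])   # applied to held-out subjects only
```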
Simple logistic-regression classifiers could be trained only on a small set of features constrained by the number of subjects in each group; optimal performance was obtained in most cases with the top 20 features. SVM and LDA classifiers performed best when trained on the top ∼5–10% of the most abnormal features for the primary functional and voxelwise parcellations (∼1,500–3,000 and 10,000–25,000 features, respectively) and the top 25% for the coarser anatomical and functional parcellations (∼1,000 features). Thus, in Figure 3b, simple logistic-regression classifiers were trained on the top 20 features, whereas LDA and SVM classifiers were trained on the top ∼2,000 features for the primary functional parcellation, ∼1,000 features for the anatomical and secondary functional parcellations or ∼10,000 features for voxelwise parcellation. After being trained on subjects in the training subset, the resulting classifiers were tested on subjects in the test subset. Importantly, subjects in the test subset were left out of all aspects of the optimization procedure, including dimension reduction by canonical correlation analysis, clustering, feature selection and classifier training. This is crucial, because including members of the test subset in the clustering or feature-selection procedures will yield biased, inflated estimates of classifier accuracy. Trials that did not use clustering yielded one classifier on each iteration, which was then applied to subjects in the test subset, and the accuracy rates in Figure 3b represent the percentage of patients and healthy controls correctly classified as patients and healthy controls, respectively, averaged over all iterations. Trials that used clustering yielded three, four or five classifiers, as indicated in Figure 3b. Testing each of them on every subject would tend to overestimate accuracy for patients and underestimate accuracy for healthy controls. Therefore, we tested only one of the biotype classifiers on each subject, on the basis of proximity to the cluster centroid or (in the case of the best-performing classifiers depicted in Fig. 3g) by using the LDA classifiers for cluster assignment described above. For the purposes of defining a cluster's centroid in order to make new cluster assignments, we excluded a small number of subjects (n = 15, or 6.8% of all subjects in the cluster-discovery set) with ambiguous cluster identities. These 'edge cases' were defined as cases with cluster silhouette values <0, indicating a case that was poorly matched to its own cluster and possibly better matched to a neighboring cluster. (We found that for small clusters, these edge cases could distort the calculation of the cluster's centroid location, resulting in unstable cluster assignments across iterations.) In Figure 3c–f, the neuroanatomical locations of the most discriminating nodes were plotted by selecting connectivity features that were significantly different from controls (by Wilcoxon rank–sum tests) across each round of training and cross-validation. The nodes were colored and scaled by summing across all connectivity features associated with that node, as described in ref. 68.
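The edge-case rule lends itself to a short sketch, assuming the scores and labels arrays from the clustering sketch above, with per-subject silhouette values computed via scikit-learn:

```python
import numpy as np
from sklearn.metrics import silhouette_samples

sil = silhouette_samples(scores, labels)   # per-subject silhouette values
core = sil >= 0                            # silhouette < 0 marks an 'edge case'

# Centroids computed from unambiguous subjects only
centroids = np.vstack([scores[core & (labels == c)].mean(axis=0)
                       for c in np.unique(labels)])
```

Permutation testing.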
By systematically testing various combinations of methods for parcellation, clustering and classification, we found that the most successful classifier used our primary functional parcellation 28, hierarchical clustering and SVM classification with linear kernel functions, and correctly identified patients and healthy controls with sensitivities of 84.1–90.9% and specificities of 84.1–92.5% (Fig. 3g). The statistical significance of these results was estimated by permutation testing: we randomly permuted the diagnostic labels for each subject, applied exactly the same procedure for clustering, feature selection and classifier training, and repeated this procedure 200 times. Permutation testing was used to assess the statistical significance of the most successful classifier derived from each of the three classification methods (logistic regression, SVM and LDA). For all three methods, the reported accuracy rates exceeded those obtained on all 200 permutation tests, indicating a statistical significance of P < 0.005. Classification: testing classifiers in an independent replication data set. It is well established in the machine-learning literature that iterative training and cross-validation on the same data overestimate classifier performance, and other studies have raised questions about the capacity for classifiers trained on one data set at a single site to generalize to data collected at multiple sites 44, 46. To address these issues, we tested the most successful classifier for each depression biotype (primary functional parcellation, hierarchical clustering and SVM classification) in an independent replication data set (data set 2; n = 477 subjects), comprising 125 patients and 352 healthy controls acquired from 13 sites, including five sites that were not included in the original training data set. This analysis was essentially identical to the analysis of test subjects in cross-validation described above. After preprocessing, parcellation and BOLD signal time-series extraction, we calculated correlation matrices, and the Fisher z-transformed correlation coefficients were corrected for age and site effects. For subjects in data set 2 who were scanned at a site that was included in data set 1, we corrected for age and site effects by using the beta weights calculated for subjects in data set 1 to calculate residuals as described above. For subjects in data set 2 who were scanned at new sites that were not included in data set 1 (all healthy controls), we used multiple linear regression to estimate beta weights for these new sites. Next, the classifier for one depression biotype was tested on each subject by using the two-step procedure for cluster/biotype assignment described above ('Assigning left-out subjects to clusters'). The overall accuracy rates and accuracies by cluster are reported in Figure 3i. To better understand the potential for further improvements in classifier performance in future, prospective data sets, we also calculated accuracy rates separately after implementing stricter data quality controls and by treating subjects with ambiguous classification outcomes as equivocal test results, as is common practice for biomarkers in other areas of medicine.
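Returning to the permutation scheme above: a minimal sketch, assuming a hypothetical run_pipeline wrapper that performs clustering, feature selection and classifier training/cross-validation on given labels and returns an accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

observed = run_pipeline(features, diagnosis)           # true diagnostic labels
null = np.array([run_pipeline(features, rng.permutation(diagnosis))
                 for _ in range(200)])                 # shuffled labels

# If the observed accuracy exceeds all 200 null accuracies,
# P < 1/201, i.e., P < 0.005, as reported above.
p_value = (1 + np.sum(null >= observed)) / (1 + len(null))
```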
The stricter quality-control calculations mentioned above excluded the following: subjects with <300 s of data after censoring, motivated by reports that the stability of low-frequency BOLD signal-correlation estimates is higher for longer-duration scans 69; subjects with FD motion estimates exceeding 0.18 mm (the 95th percentile in our training set), motivated by our finding in Supplementary Figure 1 that classification rates in cross-validation (i.e., in data set 1) were slightly lower in the 5% of subjects with the highest levels of motion (χ² = 5.096, P = 0.024); and the 10% of subjects with the lowest absolute SVM classification scores, i.e., equivocal classification outcomes. The results of these analyses are depicted in the cross-hatched bars in Figures 3i and 4j. We also tested whether cluster assignments were stable over time, reasoning that if these clusters represent biologically meaningful depression subtypes, then a patient diagnosed with one of these subtypes should be diagnosed with the same subtype when re-tested at a later date. To assess this, we tested for reproducibility in a subset of subjects (n = 48) who were re-scanned 4–6 weeks after the initial scan and remained actively depressed (meeting DSM-IV criteria for a major depressive episode). As above, each subject was assigned to a cluster by using the two-step procedure for biotype assignment described above ('Assigning left-out subjects to clusters'), and we assessed the stability of cluster assignments across scans (Fig. 3h). A chi-squared test was used to assess the statistical significance of the longitudinal-stability results. Clinical-data analysis. To assess whether biotypes of depression defined by unique patterns of resting-state functional connectivity were associated with specific clinical profiles (Fig. 2f), we used Kruskal–Wallis analysis of variance to test for biotype differences in the severity of depressive symptoms in the cluster-discovery set (n = 220), as indexed by the HAMD. The six symptoms reported in Figure 2f showed the largest main effects of biotype (see Supplementary Fig. 4a for results for all 17 HAMD items). In Supplementary Figure 4c, we also tested for differences in these same six measures in clinical data acquired from subjects who were not included in the clustering analysis (n = 92). In Figure 2c, we tested whether abnormal connectivity features that were shared across all four biotypes predicted the severity of 'core' symptoms that were present in almost all patients, regardless of biotype. We found that of the 17 symptoms quantified by the HAMD, three were present in almost all patients with depression (>90%); these included depressed mood ('feelings of sadness, hopelessness, helplessness', 97.1%), anhedonia (96.7%) and anergia or fatigue (93.9%). We used principal-components analysis to define a low-dimensional representation of these shared, abnormal connectivity features and correlated the first component with severity scores for these three symptoms. The results are depicted in quartile plots in Figure 2c.
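A hedged sketch of this last step: shared is a hypothetical array holding the residualized values of the connectivity features abnormal in all four biotypes, and core_severity holds one core symptom's item scores.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import spearmanr

# First principal component summarizing the shared abnormal features
pc1 = PCA(n_components=1).fit_transform(shared)[:, 0]

# Rank correlation of the component with core-symptom severity
rho, p = spearmanr(pc1, core_severity)
```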
Repetitive transcranial magnetic stimulation and related analyses. In Figure 4, we tested whether depression biotypes defined by unique patterns of abnormal functional connectivity were differentially responsive to rTMS in a subset of subjects (n = 154 in total) who received a course of excitatory repetitive TMS (10 Hz or intermittent theta burst stimulation) targeting the dorsomedial prefrontal cortex, beginning the week after their fMRI scan. The left dorsolateral prefrontal cortex is the most common target for stimulation in rTMS clinical trials 48, but recent studies have demonstrated efficacy for the dorsomedial prefrontal cortex (DMPFC) target used here 13, 70. Of note, the DMPFC was among the most important neuroanatomical areas differentiating the four biotypes in Figure 2d, which suggested to us that biotype differences in dysfunctional connectivity at the DMPFC target site may give rise to differing treatment outcomes. The treatment parameters and scanning parameters for this sample have been described in detail elsewhere 13, 71. To summarize, all subjects received five sessions of TMS per week for 4–6 weeks (20–30 sessions total), delivered using a MagPro R30 rTMS device (MagVenture, Farum, Denmark) and a Cool-DB80 stimulation coil. For subjects who received 10-Hz stimulation (n = 86), stimulation was delivered to the dorsomedial prefrontal cortex at 120% of resting motor threshold at a frequency of 10 Hz, with a duty cycle of 5 s on and 10 s off, for a total of 3,000 pulses in 60 trains per hemisphere per session (6,000 pulses total). For subjects who received intermittent theta burst stimulation (n = 68), stimulation was delivered to the dorsomedial prefrontal cortex at 120% of resting motor threshold in 50-Hz triplet bursts, five bursts per second, with a duty cycle of 2 s on and 8 s off, for a total of 600 pulses in 20 trains per hemisphere per session (1,200 pulses total). To increase the tolerability of the DMPFC stimulation protocol, which has been associated with discomfort in some reports, all subjects also underwent a scalp-pain acclimatization protocol, as detailed in refs. 13, 71. Depression severity was assessed using the 17-item HAMD before and after the course of treatment, and clinical improvements were measured in terms of changes in the total HAMD score. To assess whether treatment response varied with depression biotype, subjects were classified as 'treatment responders' or 'treatment nonresponders'. Treatment responders were subjects who showed either a partial or full response to treatment, conventionally defined as a 25–50% or >50% reduction in HAMD scores, respectively, and treatment nonresponders were subjects who showed a <25% reduction in HAMD scores. A chi-squared test was used to assess whether treatment response rates varied with depression biotype, and Kruskal–Wallis analysis of variance was used to test whether change in HAMD varied with depression biotype (Fig. 4a,b). In addition, we tested whether functional connectivity features and biotype diagnosis were predictive of treatment response in a training and cross-validation sample (∼80%, or n = 124 of the 154 patients; Fig. 4c–g) and then tested the best-performing classifier in an independent replication sample (∼20%, or n = 30 of the 154 patients). Using a procedure identical to the one described above, we used the primary functional parcellation, feature selection and SVM classification methods to iteratively train classifiers to prospectively identify TMS responders and nonresponders on the basis of connectivity features assessed before treatment, with leave-one-out cross-validation (Fig. 4f). As above, the test subjects were left out of all aspects of feature selection and classifier training. We repeated this process using both connectivity features and biotype diagnosis, coded as four binary dummy variables (Fig. 4g).
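The response criteria and the biotype-by-response test described above can be sketched as follows; hamd_pre, hamd_post and biotype are illustrative per-patient arrays, not names from the original analysis.

```python
import numpy as np
from scipy.stats import chi2_contingency

pct_reduction = 100 * (hamd_pre - hamd_post) / hamd_pre
responder = pct_reduction >= 25      # partial (25-50%) or full (>50%) response

# 4 x 2 contingency table of biotype by response status, then chi-squared test
table = np.array([[np.sum((biotype == b) & responder),
                   np.sum((biotype == b) & ~responder)]
                  for b in np.unique(biotype)])
chi2, p, dof, expected = chi2_contingency(table)
```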
To understand whether clinical profiles were sufficient to predict treatment response without resting-state connectivity measures, we trained classifiers to differentiate responders and nonresponders solely on the basis of clinical data, using an identical approach (Fig. 4h). Finally, we tested the best-performing classifier, which used both functional connectivity features and biotype diagnosis, in the independent replication sample (Fig. 4i). Statistics. In Figure 1, canonical correlation analysis was used to define a low-dimensional representation of connectivity features (n = 220 patients from the 'Toronto' and 'Cornell 1' sites, Supplementary Table 1) that were predictive of two specific combinations of clinical symptoms (see above), and hierarchical clustering analysis (Fig. 1e,f) was used to delineate clusters of subjects in a two-dimensional space defined by these two canonical variates. In Figure 2a–c, Wilcoxon rank–sum tests were used to test for differences in functional connectivity between all patients in the cluster-discovery set (n = 220) and all healthy controls (n = 378, Supplementary Table 3, training data set), and Spearman rank correlations were used to test for associations with three clinical symptoms that were present in at least 90% of patients (n = 220). In Figure 2d,e, Kruskal–Wallis ANOVA (n = 220) was used to test for connectivity features that varied by biotype, and Wilcoxon rank–sum tests were used to assess whether these connectivity features were increased or decreased in depression (n = 220) as compared to controls (n = 378). In Figure 2f,g, Kruskal–Wallis ANOVA (n = 220) was used to test for differences in clinical-symptom severity by biotype. In Figure 3b,g, classifier accuracy was assessed by leave-one-out cross-validation in the full training data set (n = 333 patients, n = 378 healthy controls; Supplementary Table 3, training data set), with the test subject strictly excluded from all aspects of the clustering and classification optimization process, and statistical significance was assessed against a null-hypothesis distribution generated by randomly permuting diagnostic labels 200 times (see 'Classification' and 'Permutation testing' sections above). In Figure 3h, the longitudinal stability of biotype assignments was assessed in a subset of subjects from the cluster-discovery set (n = 50 patients with depression from the 'Cornell 1' site) who received a second fMRI scan 4–5 weeks after the initial scan, and a chi-squared test (n = 50) was used to assess the statistical dependence between biotype assignments on scans 1 and 2. In Figure 3i, the most successful classifier identified in Figure 3b was tested in an independent replication data set (n = 125 patients, n = 352 healthy controls; Supplementary Table 3, replication data set). In Figures 3h and 3i, the scans used for testing longitudinal stability and for replicating classifier performance were not used in any aspect of the cluster-discovery process or classifier optimization. In Figure 4a,b, chi-squared tests (a) and Kruskal–Wallis ANOVA (b) were used to test for biotype differences in response rates and improvements in depression severity (change in total HAMD), respectively, in patients after treatment with TMS (n = 124 patients with depression from the training data set, 'Toronto' site). In Figure 4c–e, Wilcoxon rank–sum tests were used to test for functional connectivity differences in TMS partial responders (n = 70) versus nonresponders (n = 54).
In Figure 4f–i, classifier accuracy for differentiating responders (n = 70) and nonresponders (n = 54) was assessed by using leave-one-out cross-validation and permutation testing, as in Figure 3, and the best-performing classifier was tested in an independent replication set (n = 30 patients with depression from the replication data set, 'Toronto' site) in Figure 4j. In Figure 5a–c, Wilcoxon rank–sum tests were used to test for functional connectivity differences in patients with generalized anxiety disorder (n = 39 patients with GAD from the 'Cornell 1' and 'Stanford 1' sites) versus healthy controls (n = 378, training data set; Supplementary Table 3), and a chi-squared test was used to test for significant overlap in depression- and GAD-related connectivity features (Fig. 5b). In Figure 5d,h, we applied the biotype classifiers developed in Figure 3 to the patients with GAD (n = 39) and to a separate cohort of patients diagnosed with schizophrenia (n = 41 patients with rsfMRI scans shared through the 1000 Functional Connectomes Project and the Center of Biomedical Research Excellence in Brain Function and Mental Illness (COBRE)). In Figure 5e–g, Kruskal–Wallis ANOVA was used to test for biotype differences in clinical-symptom severity in the same patients with GAD (n = 39). Throughout, all P values are two-tailed, and all error bars are either s.e.m. or 95% confidence intervals, as defined in the corresponding figure legends. Data availability. Data from the following sites (Supplementary Tables 2 and 3) are publicly available for download through the 1000 Functional Connectomes Project International Data Sharing Initiative: NKI, Atlanta, Cambridge, Cleveland, ICBM, New York, COBRE, Beijing, Milwaukee and Leipzig. Data from the remaining sites are available at the discretion of the respective principal investigators, listed in Supplementary Table 2.
Patients with depression can be categorized into four unique subtypes defined by distinct patterns of abnormal connectivity in the brain, according to new research from Weill Cornell Medicine. In a collaborative study published Dec. 5 in Nature Medicine, Dr. Conor Liston, an assistant professor of neuroscience in the Feil Family Brain and Mind Institute and an assistant professor of psychiatry at Weill Cornell Medicine, has identified biomarkers in depression by analyzing more than 1,100 functional magnetic resonance imaging (fMRI) brain scans of patients with clinical depression and of healthy controls, gathered from across the country. These biomarkers may help doctors to better diagnose depression subtypes and determine which patients would most likely benefit from a targeted neuro-stimulation therapy called transcranial magnetic stimulation, which uses magnetic fields to create electrical impulses in the brain. "The four subtypes of depression that we discovered vary in terms of their clinical symptoms but, more importantly, they differ in their responses to treatment," Liston said. "We can now predict with high accuracy whether or not a patient will respond to transcranial magnetic stimulation therapy, which is significant because it takes five weeks to know if this type of treatment works." Approximately 10 percent of Americans are diagnosed with clinical depression each year. It is, by some estimates, the leading cause of disability in many developed countries. Historically, efforts to characterize depression involved looking at groups of symptoms that tend to co-occur, then testing neurophysiological links. And while past pioneering studies have defined different forms of depression, the association between the various types and the underlying biology has been inconsistent. Further, diagnostic biomarkers have yet to prove useful in distinguishing depressed patients from healthy controls or in reliably predicting treatment response among individuals. "Depression is typically diagnosed based on things that we are experiencing, but as in election polling, the results you get depend a lot on the way you ask the question," Liston said. "Brain scans are objective." Researchers from Weill Cornell Medicine and seven other institutions derived the biomarkers by assigning statistical weights to abnormal connections in the brain, then predicting the probability that they belonged to one subtype versus another. The study found that distinct patterns of abnormal functional connectivity in the brain differentiated the four biotypes and were linked with specific symptoms. For example, reduced connectivity in the part of the brain that regulates fear-related behavior and reappraisal of negative emotional stimuli was most severe in biotypes one and four, which exhibited increased anxiety. Liston will seek to replicate and confirm the results of this research and discover if it is broadly applicable to studying the biology of depression and other forms of mental illness. "Subtyping is a major problem in psychiatry," Liston said. "It's not just an issue for depression, and it would be really valuable to have objective biological tests that can help diagnose subtypes of other mental illnesses, such as psychotic disorders, autism and substance abuse syndromes."
10.1038/nm.4246
Medicine
Patients set to benefit from new guidelines on artificial intelligence health solutions
Reporting guidelines for clinical-trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Nature Medicine. doi.org/10.1038/s41591-020-1034-x (2020) Guidelines for clinical-trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Nature Medicine. doi.org/10.1038/s41591-020-1037-7 Journal information: Nature Medicine , British Medical Journal (BMJ)
https://doi.org/10.1038/s41591-020-1034-x
https://medicalxpress.com/news/2020-09-patients-benefit-guidelines-artificial-intelligence.html
Abstract The CONSORT 2010 statement provides minimum guidelines for reporting randomized trials. Its widespread use has been instrumental in ensuring transparency in the evaluation of new interventions. More recently, there has been a growing recognition that interventions involving artificial intelligence (AI) need to undergo rigorous, prospective evaluation to demonstrate impact on health outcomes. The CONSORT-AI (Consolidated Standards of Reporting Trials–Artificial Intelligence) extension is a new reporting guideline for clinical trials evaluating interventions with an AI component. It was developed in parallel with its companion statement for clinical trial protocols: SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials–Artificial Intelligence). Both guidelines were developed through a staged consensus process involving literature review and expert consultation to generate 29 candidate items, which were assessed by an international multi-stakeholder group in a two-stage Delphi survey (103 stakeholders), agreed upon in a two-day consensus meeting (31 stakeholders) and refined through a checklist pilot (34 participants). The CONSORT-AI extension includes 14 new items that were considered sufficiently important for AI interventions that they should be routinely reported in addition to the core CONSORT 2010 items. CONSORT-AI recommends that investigators provide clear descriptions of the AI intervention, including instructions and skills required for use, the setting in which the AI intervention is integrated, the handling of inputs and outputs of the AI intervention, the human–AI interaction and provision of an analysis of error cases. CONSORT-AI will help promote transparency and completeness in reporting clinical trials for AI interventions. It will assist editors and peer reviewers, as well as the general readership, to understand, interpret and critically appraise the quality of clinical trial design and risk of bias in the reported outcomes. Main Randomized controlled trials (RCTs) are considered the gold-standard experimental design for providing evidence of the safety and efficacy of an intervention 1 , 2 . Trial results, if adequately reported, have the potential to inform regulatory decisions, clinical guidelines and health policy. It is therefore crucial that RCTs are reported with transparency and completeness so that readers can critically appraise the trial methods and findings and assess the presence of bias in the results 3 , 4 , 5 . The CONSORT statement provides evidence-based recommendations to improve the completeness of the reporting of RCTs. The statement was first introduced in 1996 and has since been widely endorsed by medical journals internationally 5 . Over the past two decades, it has undergone two updates and has demonstrated a substantial positive impact on the quality of RCT reports 6 , 7 . The most recent CONSORT 2010 statement provides a 25-item checklist of the minimum reporting content applicable to all RCTs, but it recognizes that certain interventions may require extension or elaboration of these items. Several such extensions exist 8 , 9 , 10 , 11 , 12 , 13 . AI is an area of enormous interest with strong drivers to accelerate new interventions through to publication, implementation and market 14 . While AI systems have been researched for some time, recent advances in deep learning and neural networks have gained considerable interest for their potential in health applications. 
Examples of such applications are wide-ranging and include AI systems for screening and triage 15, 16, diagnosis 17, 18, 19, 20, prognostication 21, 22, decision support 23 and treatment recommendation 24. In most cases to date, however, published evidence has consisted of in silico, early-phase validation. It has been recognized that most recent AI studies are inadequately reported and existing reporting guidelines do not fully cover potential sources of bias specific to AI systems 25. The welcome emergence of RCTs seeking to evaluate newer interventions based on, or including, an AI component (called 'AI interventions' here) 23, 26, 27, 28, 29, 30, 31 has similarly been met with concerns about their design and reporting 25, 32, 33, 34. This has highlighted the need to provide reporting guidance that is 'fit for purpose' in this domain. CONSORT-AI (as part of the SPIRIT-AI and CONSORT-AI initiative) is an international initiative supported by CONSORT and the EQUATOR (Enhancing the Quality and Transparency of Health Research) Network to evaluate the existing CONSORT 2010 statement and to extend or elaborate this guidance where necessary, to support the reporting of clinical trials for AI interventions 35, 36. It is complementary to the SPIRIT-AI statement, which aims to promote high-quality protocol reporting for AI trials. This Consensus Statement describes the methods used to identify and evaluate candidate items and gain consensus. It also provides the CONSORT-AI checklist, which includes the new extension items and their accompanying explanations. Methods The SPIRIT-AI and CONSORT-AI extensions were developed simultaneously for clinical trial protocols and trial reports. An announcement for the SPIRIT-AI and CONSORT-AI initiative was published in October 2019 (ref. 35), and the two guidelines were registered as reporting guidelines under development in the EQUATOR library of reporting guidelines in May 2019. Both guidelines were developed in accordance with the EQUATOR Network's methodological framework 37. The SPIRIT-AI and CONSORT-AI Steering Group, consisting of 15 international experts, was formed to oversee the conduct and methodology of the study. Definitions of key terms are provided in the glossary (Box 1). Box 1 Glossary Artificial Intelligence The science of developing computer systems that can perform tasks normally requiring human intelligence. AI intervention A health intervention that relies upon an AI/ML component to serve its purpose. CONSORT Consolidated Standards of Reporting Trials. CONSORT-AI extension item An additional checklist item to address AI-specific content that is not adequately covered by CONSORT 2010. Class-activation map Class-activation maps are particularly relevant to image-classification AI interventions. They are visualizations of the pixels that had the greatest influence on the predicted class, displaying the gradient of the predicted outcome from the model with respect to the input. They are also referred to as 'saliency maps' or 'heat maps'. Health outcome Measured variables in the trial that are used to assess the effects of an intervention. Human–AI interaction The process of how users (humans) interact with the AI intervention, for the AI intervention to function as intended. Clinical outcome Measured variables in the trial that are used to assess the effects of an intervention.
Delphi study A research method that derives the collective opinions of a group through a staged consultation of surveys, questionnaires or interviews, with the aim of reaching consensus at the end. Development environment The clinical and operational settings from which the data used for training the model are generated. This includes all aspects of the physical setting (such as geographical location, physical environment), operational setting (such as integration with an electronic record system, installation on a physical device) and clinical setting (such as primary, secondary and/or tertiary care, patient disease spectrum). Fine-tuning Modifications or additional training performed on the AI intervention model, done with the intention of improving its performance. Input data The data that need to be presented to the AI intervention to allow it to serve its purpose. Machine learning A field of computer science concerned with the development of models/algorithms that can solve specific tasks by learning patterns from data, rather than by following explicit rules. It is seen as an approach within the field of AI. Operational environment The environment in which the AI intervention will be deployed, including the infrastructure required to enable the AI intervention to function. Output data The predicted outcome given by the AI intervention based on modeling of the input data. The output data can be presented in different forms, including a classification (including diagnosis, disease severity or stage, or a recommendation such as referability), a probability, a class-activation map, etc. The output data typically provide additional clinical information and/or trigger a clinical decision. Performance error Instances in which the AI intervention fails to perform as expected. This term can describe different types of failures, and it is up to the investigator to specify what should be considered a performance error, preferably based on prior evidence. This can range from small decreases in accuracy (compared to expected accuracy) to erroneous predictions or the inability to produce an output in certain cases. SPIRIT Standard Protocol Items: Recommendations for Interventional Trials. SPIRIT-AI extension item An additional checklist item to address AI-specific content that is not adequately covered by SPIRIT 2013. SPIRIT-AI elaboration item Additional considerations to an existing SPIRIT 2013 item when applied to AI interventions. Ethical approval This study was approved by the ethical review committee at the University of Birmingham, UK (ERN_19-1100). Participant information was provided to Delphi participants electronically before survey completion and before the consensus meeting. Delphi participants provided electronic informed consent, and written consent was obtained from consensus-meeting participants. Literature review and candidate item generation An initial list of candidate items for the SPIRIT-AI and CONSORT-AI checklists was generated through review of the published literature and consultation with the Steering Group and known international experts. A search was performed on 13 May 2019 using the terms 'artificial intelligence', 'machine learning' and 'deep learning' to identify existing clinical trials for AI interventions listed within the US National Library of Medicine's clinical trial registry (ClinicalTrials.gov). There were 316 registered trials, of which 62 were completed and 7 had published results 30, 38, 39, 40, 41, 42, 43.
Two studies were reported with reference to the CONSORT statement 30 , 42 , and one study provided an unpublished trial protocol 42 . The Operations Team (X.L., S.C.R., M.J.C. and A.K.D.) identified AI-specific considerations from these studies and reframed them as candidate reporting items. The candidate items were also informed by findings from a previous systematic review that evaluated the diagnostic accuracy of deep-learning systems for medical imaging 25 . After consultation with the Steering Group and additional international experts ( n = 19), 29 candidate items were generated, 26 of which were relevant for both SPIRIT-AI and CONSORT-AI and 3 of which were relevant only for CONSORT-AI. The Operations Team mapped these items to the corresponding SPIRIT and CONSORT items, revising the wording and providing explanatory text as required to contextualize the items. These items were included in subsequent Delphi surveys. Delphi consensus process In September 2019, 169 key international experts were invited to participate in the online Delphi survey to vote upon the candidate items and suggest additional items. Experts were identified and contacted via the Steering Group and were allowed one round of ‘snowball’ recruitment in which contacted experts could suggest additional experts. In addition, individuals who made contact following publication of the announcement were included 35 . The Steering Group agreed that individuals with expertise in clinical trials and AI and machine learning (ML), as well as key users of the technology, should be well represented in the consultation. Stakeholders included healthcare professionals, methodologists, statisticians, computer scientists, industry representatives, journal editors, policy makers, health ‘informaticists’, experts in law and ethics, regulators, patients and funders. Participant characteristics are described in Supplementary Table 1 . Two online Delphi surveys were conducted. DelphiManager software (version 4.0), developed and maintained by the COMET (Core Outcome Measures in Effectiveness Trials) initiative, was used to undertake the e-Delphi survey. Participants were given written information about the study and were asked to provide their level of expertise within the fields of (i) AI/ML, and (ii) clinical trials. Each item was presented for consideration (26 for SPIRIT-AI and 29 for CONSORT-AI). Participants were asked to vote on each item using a 9-point scale, as follows: 1–3, not important; 4–6, important but not critical; and 7–9, important and critical. Respondents provided separate ratings for SPIRIT-AI and CONSORT-AI. There was an option to opt out of voting for each item, and each item included space for free text comments. At the end of the Delphi survey, participants had the opportunity to suggest new items. 103 responses were received for the first Delphi round, and 91 responses (88% of participants from round one) were received for the second round. The results of the Delphi survey informed the subsequent international consensus meeting. 12 new items were proposed by the Delphi study participants and were added for discussion at the consensus meeting. Data collected during the Delphi survey were anonymized, and item-level results were presented at the consensus meeting for discussion and voting. The two-day consensus meeting took place in January 2020 and was hosted by the University of Birmingham, UK, to seek consensus on the content of SPIRIT-AI and CONSORT-AI. 
31 international stakeholders from among the Delphi survey participants were invited to discuss the items and vote on their inclusion. Participants were selected to achieve adequate representation from all the stakeholder groups. 41 items were discussed in turn, comprising the 29 items generated in the initial literature review and item-generation phase (26 items relevant to both SPIRIT-AI and CONSORT-AI; 3 items relevant only to CONSORT-AI) and the 12 new items proposed by participants during the Delphi surveys. Each item was presented to the consensus group, alongside its score from the Delphi exercise (median and interquartile ranges) and any comments made by Delphi participants related to that item. Consensus-meeting participants were invited to comment on the importance of each item and whether the item should be included in the AI extension. In addition, participants were invited to comment on the wording of the explanatory text accompanying each item and the position of each item relative to the SPIRIT 2013 and CONSORT 2010 checklists. After open discussion of each item and the option to adjust wording, an electronic vote took place, with the option to include or exclude the item. An 80% threshold for inclusion was pre-specified and deemed reasonable by the Steering Group to demonstrate majority consensus. Each stakeholder voted anonymously using Turning Point voting pads (Turning Technologies, version 8.7.2.14). Checklist pilot Following the consensus meeting, attendees were given the opportunity to make final comments on the wording and agree that the updated SPIRIT-AI and CONSORT-AI items reflected discussions from the meeting. The Operations Team assigned each item as an extension or elaboration item on the basis of a decision tree and produced a penultimate draft of the SPIRIT-AI and CONSORT-AI checklists (Supplementary Fig. 1). A pilot of the penultimate checklists was conducted with 34 participants to ensure clarity of wording. Experts participating in the pilot included the following: (a) Delphi participants who did not attend the consensus meeting, and (b) external experts who had not taken part in the development process but who had reached out to the Steering Group after the Delphi study commenced. Final changes were made to wording only, by the Operations Team, to improve clarity for readers (Supplementary Fig. 2). Recommendations CONSORT-AI checklist items and explanation The CONSORT-AI extension recommends that 14 new checklist items be added to the existing CONSORT 2010 statement (11 extensions and 3 elaborations). These items were considered sufficiently important for clinical-trial reports for AI interventions that they should be routinely reported in addition to the core CONSORT 2010 checklist items. Table 1 lists the CONSORT-AI items. Table 1 CONSORT-AI checklist. The 14 items below passed the threshold of 80% for inclusion at the consensus meeting. CONSORT-AI 2a, CONSORT-AI 5 (ii) and CONSORT-AI 19 each resulted from the merging of two items after discussion with the consensus group. CONSORT-AI 4a was split into two items, 4a (i) and 4a (ii), for clarity, and these were voted upon separately. CONSORT-AI 5 (iii) did not fulfill the criteria for inclusion on the basis of its initial wording (77% voted to include); however, after extensive discussion and rewording, the consensus group unanimously supported a re-vote, at which point it passed the inclusion threshold (97% voted to include).
The Delphi and voting results for each included and excluded item are described in Supplementary Table 2. Title and abstract CONSORT-AI 1a,b (i) Elaboration: Indicate that the intervention involves artificial intelligence/machine learning in the title and/or abstract and specify the type of model Explanation Indicating in the title and/or abstract of the trial report that the intervention involves a form of AI is encouraged, as it immediately identifies the intervention as an AI/ML intervention and also serves to facilitate indexing and searching of the trial report. The title should be understandable by a wide audience; therefore, a broader umbrella term such as 'artificial intelligence' or 'machine learning' is encouraged. More-precise terms should be used in the abstract, rather than the title, unless they are broadly recognized as being a form of AI/ML. Specific terminology relating to the model type and architecture should be detailed in the abstract. CONSORT-AI 1a,b (ii) Elaboration: State the intended use of the AI intervention within the trial in the title and/or abstract Explanation Describe the intended use of the AI intervention in the trial report title and/or abstract. This should describe the purpose of the AI intervention and the disease context 26, 44. Some AI interventions may have multiple intended uses, or the intended use may evolve over time. Therefore, documenting this allows readers to understand the intended use of the algorithm at the time of the trial. Introduction CONSORT-AI 2a (i) Extension: Explain the intended use for the AI intervention in the context of the clinical pathway, including its purpose and its intended users (for example, healthcare professionals, patients, public) Explanation In order to clarify how the AI intervention is intended to fit into a clinical pathway, a detailed description of its role should be included in the background of the trial report. AI interventions may be designed to interact with different users, including healthcare professionals, patients and the public, and their roles can be wide-ranging (for example, the same AI intervention could theoretically be replacing, augmenting or adjudicating components of clinical decision-making). Clarifying the intended use of the AI intervention and its intended user helps readers understand the purpose for which the AI intervention was evaluated in the trial. Methods CONSORT-AI 4a (i) Elaboration: State the inclusion and exclusion criteria at the level of participants Explanation The inclusion and exclusion criteria should be defined at the participant level as per usual practice in non-AI interventional trial reports (Fig. 1). This is distinct from the inclusion and exclusion criteria applied at the input-data level, which are addressed in item 4a (ii). Fig. 1: CONSORT 2010 flow diagram — adapted for AI clinical trials. CONSORT-AI 4a (i): State the inclusion and exclusion criteria at the level of participants. CONSORT-AI 4a (ii): State the inclusion and exclusion criteria at the level of the input data. CONSORT 13b (core CONSORT item): For each group, losses and exclusions after randomization, together with reasons.
CONSORT-AI 4a (ii) Extension: State the inclusion and exclusion criteria at the level of the input data Explanation 'Input data' refers to the data required by the AI intervention to serve its purpose (for example, for a breast-cancer diagnostic system, the input data could be the unprocessed or vendor-specific post-processing mammography scan upon which a diagnosis is being made; for an early-warning system, the input data could be physiological measurements or laboratory results from the electronic health record). The trial report should pre-specify if there were minimum requirements for the input data (such as image resolution, quality metrics or data format) that determined pre-randomization eligibility. It should specify when, how and by whom this was assessed. For example, if a participant met the eligibility criteria for lying flat for a CT scan as per item 4a (i), but the scan quality was compromised (for any given reason) to such a level that it was deemed unfit for use by the AI system, this should be reported as an exclusion criterion at the input-data level. Note that where input data are acquired after randomization, any exclusion is considered to be from the analysis, not from enrollment (CONSORT item 13b and Fig. 1). CONSORT-AI 4b Extension: Describe how the AI intervention was integrated into the trial setting, including any onsite or offsite requirements Explanation There are limitations to the generalizability of AI algorithms, one of which arises when they are used outside of their development environment 45, 46. AI systems are dependent on their operational environment, and the report should provide details of the hardware and software requirements to allow technical integration of the AI intervention at each study site. For example, it should be stated if the AI intervention required vendor-specific devices, if there was specialized computing hardware at each site, or if the site had to support cloud integration, particularly if this was vendor-specific. If any changes to the algorithm were required at each study site as part of the implementation procedure (such as fine-tuning the algorithm on local data), then this process should also be clearly described. CONSORT-AI 5 (i) Extension: State which version of the AI algorithm was used Explanation Similar to other forms of software as a medical device, AI systems are likely to undergo multiple iterations and updates during their lifespan. It is therefore important to specify which version of the AI system was used in the clinical trial, whether this is the same as the version evaluated in previous studies that have been used to justify the study rationale, and whether the version changed during the conduct of the trial. If applicable, the report should describe what changed between the relevant versions and the rationales for the changes. Where available, the report should include a regulatory marking reference, such as a unique device identifier, which requires a new identifier for updated versions of the device 47. CONSORT-AI 5 (ii) Extension: Describe how the input data were acquired and selected for the AI intervention Explanation The measured performance of any AI system may be critically dependent on the nature and quality of the input data 48. A description of the input-data handling, including acquisition, selection and pre-processing before analysis by the AI system, should be provided.
Completeness and transparency of this description is integral to the replicability of the intervention beyond the clinical trial in real-world settings. It also helps readers identify whether input-data-handling procedures were standardized across trial sites. CONSORT-AI 5 (iii) Extension: Describe how poor-quality or unavailable input data were assessed and handled Explanation As with CONSORT-AI 4a (ii), 'input data' refers to the data required by the AI intervention to serve its purpose. As discussed in item 4a (ii), the performance of AI systems may be compromised as a result of poor-quality or missing input data 49 (for example, excessive movement artifact on an electrocardiogram). The trial report should state the amount of missing data, as well as how this was identified and handled. The report should also specify if there was a minimum standard required for the input data and, where this standard was not achieved, how this was handled (including the impact on, or any changes to, the participant care pathway). Poor-quality or unavailable data can also affect non-AI interventions. For example, sub-optimal quality of a scan could affect a radiologist's ability to interpret it and make a diagnosis. It is therefore important that this information is reported equally for the control intervention, where relevant. If this minimum quality standard was different from the inclusion criteria for input data used to assess eligibility pre-randomization, this should be stated. CONSORT-AI 5 (iv) Extension: Specify whether there was human–AI interaction in the handling of the input data, and what level of expertise was required of users Explanation A description of the human–AI interface and the requirements for successful interaction when input data are handled should be provided — for example, clinician-led selection of regions of interest from a histology slide that is then interpreted by an AI diagnostic system 50, or an endoscopist's selection of colonoscopy video clips as input data for an algorithm designed to detect polyps 28. A description of any user training provided and instructions for how users should handle the input data provides transparency and replicability of trial procedures. Poor clarity on the human–AI interface may lead to a lack of a standardized approach and may carry ethical implications, particularly in the event of harm 51, 52. For example, it may become unclear whether an error case occurred owing to human deviation from the instructed procedure, or whether it was an error made by the AI system. CONSORT-AI 5 (v) Extension: Specify the output of the AI intervention Explanation The output of the AI intervention should be clearly specified in the trial report. For example, an AI system may output a diagnostic classification or probability, a recommended action, an alarm alerting to an event, an instigated action in a closed-loop system (such as titration of drug infusions) or another output. The nature of the AI intervention's output has direct implications for its usability and how it may lead to downstream actions and outcomes. CONSORT-AI 5 (vi) Extension: Explain how the AI intervention's outputs contributed to decision-making or other elements of clinical practice Explanation Since health outcomes may also critically depend on how humans interact with the AI intervention, the report should explain how the outputs of the AI system were used to contribute to decision-making or other elements of clinical practice.
This should include adequate description of downstream interventions that can affect outcomes. As with CONSORT-AI 5 (iv), any effects of human–AI interaction on the outputs should be described in detail, including the level of expertise required to understand the outputs and any training and/or instructions provided for this purpose. For example, a skin cancer detection system that produced a percentage likelihood as its output should be accompanied by an explanation of how this output was interpreted and acted upon by the user, specifying both the intended pathways (for example, skin lesion excision if the diagnosis is positive) and the thresholds for entry to these pathways (for example, skin lesion excision if the diagnosis is positive and the probability is >80%). The information produced by comparator interventions should be similarly described, alongside an explanation of how such information was used to arrive at clinical decisions on patient management, where relevant. Any discrepancy in how decision-making occurred versus how it was intended to occur (that is, as specified in the trial protocol) should be reported. Results CONSORT-AI 19 Extension: Describe results of any analysis of performance errors and how errors were identified, where applicable. If no such analysis was planned or done, explain why not Explanation Reporting performance errors and failure case analysis is especially important for AI interventions. AI systems can make errors that may be hard to foresee but that, if allowed to be deployed at scale, could have catastrophic consequences 53 . Therefore, reporting cases of error and defining risk-mitigation strategies are important for informing when, and for which populations, the intervention can be safely implemented. The results of any performance-error analysis should be reported and the implications of the results should be discussed. Other information CONSORT-AI 25 Extension: State whether and how the AI intervention and/or its code can be accessed, including any restrictions to access or re-use Explanation The trial report should make it clear whether and how the AI intervention and/or its code can be accessed or re-used. This should include details about the license and any restrictions to access. Discussion CONSORT-AI is a new reporting-guideline extension developed through international multi-stakeholder consensus. It aims to promote transparent reporting of AI intervention trials and is intended to facilitate critical appraisal and evidence synthesis. The extension items added in CONSORT-AI address a number of issues specific to the implementation and evaluation of AI interventions, which should be considered alongside the core CONSORT 2010 checklist and other CONSORT extensions 54 . It is important to note that these are minimum requirements and there may be value in including additional items not included in the checklists in the report or in supplementary materials (Supplementary Table 2 ). In both CONSORT-AI and its companion project SPIRIT-AI, a major emphasis was the addition of several new items related to the intervention itself and its application in the clinical context. Items 5 (i)–5 (vi) were added to address AI-specific considerations in descriptions of the intervention. Specific recommendations were made pertinent to AI systems related to algorithm version, input and output data, integration into trial settings, expertise of the users and protocol for acting upon the AI system’s recommendations. 
It was agreed that these details are critical for independent evaluation or replication of the trial. Journal editors reported that despite the importance of these items, they are currently often missing from trial reports at the time of submission for publication, which provides further weight for their inclusion as specifically listed extension items. A recurrent focus of the Delphi comments and consensus group discussion was the safety of AI systems. This was in recognition that AI systems, unlike other health interventions, can unpredictably yield errors that are not easily detectable or explainable by human judgement. For example, changes to medical imaging that are invisible, or appear random, to the human eye may change the likelihood of the diagnostic output entirely 55 , 56 . The concern is that given the theoretical ease with which AI systems could be deployed at scale, any unintended harmful consequences could be catastrophic. CONSORT-AI item 19, which requires specification of any plans to analyze performance errors, was added to emphasize the importance of anticipating systematic errors made by the algorithm and their consequences. Beyond this, investigators should also be encouraged to explore differences in performance and error rates across population subgroups. It has been shown that AI systems may be systematically biased toward different outputs, which may lead to different or even unfair treatment, on the basis of extant features 53 , 57 , 58 , 59 . The topic of ‘continuously evolving’ AI systems (also known as ‘continuously adapting’ or ‘continuously learning’ AI systems) was discussed at length during the consensus meeting, but it was agreed that this be excluded from CONSORT-AI. These are AI systems with the ability to continuously train on new data, which may cause changes in performance over time. The group noted that, while of interest, this field is relatively early in its development without tangible examples in healthcare applications, and that it would not be appropriate for it to be included in CONSORT-AI at this stage 60 . This topic will be monitored and will be revisited in future iterations of CONSORT-AI. It is worth noting that incremental software changes, whether continuous or iterative, intentional or unintentional, could have serious consequences on safety performance after deployment. It is therefore of vital importance that such changes be documented and identified by software version and that a robust post-deployment surveillance plan is in place. This study is set in the current context of AI in health; therefore, several limitations should be noted. First, there are relatively few published interventional trials in the field of AI for healthcare; therefore, the discussions and decisions made during this study were not always supported by existing examples of completed trials. This arises from our stated aim of addressing the issues of poor reporting in this field as early as possible, recognizing the strong drivers in the field and the specific challenges of study design and reporting for AI. As the science and study of AI evolves, we welcome collaboration with investigators to co-evolve these reporting standards to ensure their continued relevance. 
Second, the literature search of AI RCTs used terminology such as ‘artificial intelligence’, ‘machine learning’ and ‘deep learning’, but not terms such as ‘clinical decision support systems’ or ‘expert systems’, which were more commonly used in the 1990s for technologies underpinned by AI systems and share risks similar to those of recent examples 61 . It is likely that such systems, if published today, would be indexed under ‘artificial intelligence’ or ‘machine learning’; however, clinical decision support systems were not actively discussed during this consensus process. Third, the initial candidate items list was generated by a relatively small group of experts consisting of Steering Group members and additional international experts; however, additional items from the wider Delphi group were taken forward for consideration by the consensus group, and no new items were suggested during the consensus meeting or post-meeting evaluation. As with the CONSORT statement, the CONSORT-AI extension is intended as minimum reporting guidance, and there are additional AI-specific considerations for trial reports that may warrant consideration (Supplementary Table 2 ). This extension is aimed particularly at investigators and readers reporting or appraising clinical trials; however, it may also serve as useful guidance for developers of AI interventions in earlier validation stages of an AI system. Investigators seeking to report studies developing and validating the diagnostic and predictive properties of AI models should refer to TRIPOD-ML (Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis–Machine Learning) and STARD-AI (Standards for Reporting Diagnostic Accuracy Studies–Artificial Intelligence), both of which are currently under development 32 , 62 . Other potentially relevant guidelines, which are agnostic to study design, are registered with the EQUATOR Network 63 . The CONSORT-AI extension is expected to encourage careful early planning of AI interventions for clinical trials, and this, in conjunction with SPIRIT-AI, should help to improve the quality of trials for AI interventions. The CONSORT-AI guidance does not add items to the discussion section of trial reports; the guidance provided by CONSORT 2010 on trial limitations, generalizability and interpretation was deemed translatable to trials for AI interventions. There is also recognition that AI is a rapidly evolving field, and there will be a need to update CONSORT-AI as the technology, and newer applications for it, develop. Currently, most applications of AI involve disease detection, diagnosis and triage, and this is likely to have influenced the nature and prioritization of items within CONSORT-AI. As wider applications that utilize ‘AI as therapy’ emerge, it will be important to continue to evaluate CONSORT-AI in the light of such studies. Additionally, advances in computational techniques and the ability to integrate them into clinical workflows will bring new opportunities for innovation that benefits patients. However, they may be accompanied by new challenges around study design and reporting. In order to ensure transparency, minimize potential biases and promote the trustworthiness of the results and the extent to which they may be generalizable, the SPIRIT-AI and CONSORT-AI Steering Group will continue to monitor the need for updates.
Data availability Data requests should be made to the corresponding author and release will be subject to consideration by the SPIRIT-AI and CONSORT-AI Steering Group.
Patients could benefit from faster and more effective introduction of artificial intelligence (AI) innovations to diagnose and treat disease—thanks to the first international standards for reporting of clinical trials for AI. As evaluation of health interventions involving machine learning or other AI systems moves into clinical trials, an international group has developed guidelines aiming to improve the quality of these studies and ensure that they are reported transparently. The use of these international guidelines will enable patients, health care professionals and policy-makers to be more confident about whether an AI intervention is safe and effective. This is a key step towards trustworthy AI in health. Development of new reporting guidelines which expand on the current SPIRIT 2013 and CONSORT 2010 reporting frameworks will boost transparency and robustness for clinical trials evaluating AI health solutions. Future clinical trials evaluating an AI intervention will be expected—and often required—to report their results in line with the new standards. The guidelines will also help medical professionals, regulators, funders and other decision-makers assess the quality of planned clinical trials and assess whether the algorithm is safe and likely to bring about patient benefit. Researchers from the University of Birmingham and University Hospitals Birmingham NHS Foundation Trust (UHB) worked with leading institutions from across the world—including the United States and Canada—and have published their findings and the new guidelines today in Nature Medicine, the BMJ and The Lancet Digital Health. Researchers developed the additional guidance to tackle concerns that many studies of AI are of insufficient quality and are not transparent. This was highlighted in research published last September, led by several of the same researchers, which found that less than one percent of 20,500 analyzed studies relating to health AI were of sufficient quality for independent reviewers to have confidence in their results. Professor Alastair Denniston, Lead for AI at Birmingham Health Partners Center for Regulatory Science and Innovation, and Consultant Ophthalmologist at UHB, commented: "Patients could benefit hugely from the use of AI in medical settings, but before we introduce these technologies into everyday practice we need to know that they have been robustly evaluated and proven to be effective and safe. Our previous work showed just how big a problem this can be and that we needed a way to cut through the hype surrounding AI in healthcare. These new reporting guidelines—SPIRIT-AI and CONSORT-AI—provide a solution to the 'hype' problem. They provide a clear, transparent framework to support the design and reporting of AI trials that will help to improve quality and transparency. These extended guidelines will help to reduce wasted effort and deliver effective AI-led medical interventions to patients quicker." The SPIRIT-AI extension is a new guideline for clinical trial protocols, and the CONSORT-AI extension is a new reporting guideline for clinical trial reports, both for evaluating interventions with AI components. Professor Melanie Calvert, NIHR Senior Investigator and Director of Birmingham Health Partners Center for Regulatory Science and Innovation, commented: "There is growing recognition that interventions involving AI need rigorous evaluation to demonstrate their impact on health outcomes.
Without this, we risk not generating sufficiently robust evidence to decide whether AI interventions should be commissioned in the real world. These new guidelines will help to identify and overcome research challenges associated with AI-led health innovation, but we could not have got to this exciting point without the help of patients involved in research." Elaine Manna, from London, has been living with age-related macular degeneration for 20 years and was one of a number of patient partners who helped to develop the new guidelines. She was asked to provide a patient perspective on developing the guidelines after taking part in an AI research study involving Moorfields Eye Hospital NHS Foundation Trust, in London, and British technology company DeepMind. Elaine commented: "A super-fast algorithm was tested on my eye—diagnosing my condition as well as an expert ophthalmologist or optometrist. This was a development with significant implications for saving sight and reducing waiting times for appointments. It's vital for patients to be equally involved in their healthcare—understanding how decisions are made, being informed and involved in decision making. Helping to develop the SPIRIT-AI and CONSORT-AI guidelines, I went from thinking of myself as someone with a degenerative eye disease to someone who felt empowered." The SPIRIT-AI extension includes 15 new items and the CONSORT-AI extension includes 14 new items—all considered sufficiently important for clinical trial protocols of AI interventions to be routinely reported in addition to core items.
doi.org/10.1038/s41591-020-1034-x
Biology
Remains of 17th century bishop support neolithic emergence of tuberculosis
Susanna Sabin et al, A seventeenth-century Mycobacterium tuberculosis genome supports a Neolithic emergence of the Mycobacterium tuberculosis complex, Genome Biology (2020). DOI: 10.1186/s13059-020-02112-1 Journal information: Genome Biology
http://dx.doi.org/10.1186/s13059-020-02112-1
https://phys.org/news/2020-08-17th-century-bishop-neolithic-emergence.html
Abstract Background Although tuberculosis accounts for the highest mortality from a bacterial infection on a global scale, questions persist regarding its origin. One hypothesis based on modern Mycobacterium tuberculosis complex (MTBC) genomes suggests their most recent common ancestor followed human migrations out of Africa approximately 70,000 years before present. However, studies using ancient genomes as calibration points have yielded much younger dates of less than 6000 years. Here, we aim to address this discrepancy through the analysis of the highest-coverage and highest-quality ancient MTBC genome available to date, reconstructed from a calcified lung nodule of Bishop Peder Winstrup of Lund (b. 1605–d. 1679). Results A metagenomic approach for taxonomic classification of whole DNA content permitted the identification of abundant DNA belonging to the human host and the MTBC, with few non-TB bacterial taxa comprising the background. Genomic enrichment enabled the reconstruction of a 141-fold coverage M . tuberculosis genome. In utilizing this high-quality, high-coverage seventeenth-century genome as a calibration point for dating the MTBC, we employed multiple Bayesian tree models, including birth-death models, which allowed us to model pathogen population dynamics and data sampling strategies more realistically than those based on the coalescent. Conclusions The results of our metagenomic analysis demonstrate the unique preservation environment calcified nodules provide for DNA. Importantly, we estimate a most recent common ancestor date for the MTBC of between 2190 and 4501 before present and for Lineage 4 of between 929 and 2084 before present using multiple models, confirming a Neolithic emergence for the MTBC. Background Tuberculosis, caused by organisms in the Mycobacterium tuberculosis complex (MTBC), has taken on renewed relevance and urgency in the twenty-first century due to its global distribution, its high morbidity, and the rise of antibiotic-resistant strains [ 1 ]. The difficulty in disease management and treatment, combined with the massive reservoir the pathogen maintains in human populations through latent infection [ 2 ], makes tuberculosis a pressing public health challenge. Despite this, controversy exists regarding the history of the relationship between members of the MTBC and their human hosts. Existing literature suggests two estimates for the time to the most recent common ancestor (tMRCA) for the MTBC based on the application of Bayesian molecular dating to genome-wide Mycobacterium tuberculosis data. One estimate suggests the extant MTBC emerged through a bottleneck approximately 70,000 years ago, coincident with major migrations of humans out of Africa [ 3 ]. This estimate was reached using a large global dataset of exclusively modern M . tuberculosis genomes, with internal nodes of the MTBC calibrated by extrapolated dates for major human migrations [ 3 ]. This estimate relied on congruence between the topology of the MTBC and human mitochondrial phylogenies, but this congruence does not extend to human Y chromosome phylogeographic structure [ 4 ]. As an alternative approach, the first publication of ancient MTBC genomes utilized radiocarbon dates as direct calibration points to infer mutation rates and yielded an MRCA date for the complex of less than 6000 years [ 5 ]. This younger emergence was later supported by mutation rates estimated within the pervasive Lineage 4 (L4) of the MTBC, using four M . 
tuberculosis genomes from the late eighteenth and early nineteenth centuries [ 6 ]. Despite the agreement in studies that have relied on ancient DNA calibration so far, dating of the MTBC emergence remains controversial. The young age suggested by these works cannot account for purported detection of MTBC DNA in archeological material that predates the tMRCA estimate (e.g., Baker et al. [ 7 ]; Hershkovitz et al. [ 8 ]; Masson et al. [ 9 ]; Rothschild et al. [ 10 ]), the authenticity of which has been challenged [ 11 ]. Furthermore, constancy in mutation rates of the MTBC has been questioned on account of observed rate variation in modern lineages, combined with the unquantified effects of latency [ 12 ]. The ancient genomes presented by Bos and colleagues, though isolated from human remains, were most closely related to Mycobacterium pinnipedii , a lineage of the MTBC currently associated with infections in seals and sea lions [ 5 ]. Given our unfamiliarity with the demographic history of tuberculosis in sea mammal populations [ 13 ], identical substitution rates between the pinniped lineage and human-adapted lineages of the MTBC cannot be assumed. Additionally, estimates of genetic diversity in MTBC strains from archeological specimens can be difficult given their similarities to environmental mycobacterial DNA from the depositional context, which increase the risk of false positive genetic characterization [ 14 ]. Though the ancient genomes published by Kay and colleagues belonged to human-adapted lineages of the MTBC, and the confounding environmental signals were significantly reduced by their funerary context in crypts, two of the four genomes used for molecular dating were derived from mixed-strain infections [ 6 ]. By necessity, diversity derived in each genome would have to be ignored for them to be computationally distinguished [ 6 ]. Though ancient DNA is a valuable tool for answering the question of when the MTBC emerged, the available ancient data remains sparse and subject to case-by-case challenges. Here, we offer a higher resolution temporal estimate for the emergence of the MTBC and L4 using multiple Bayesian models of varying complexity through the analysis of a high-coverage seventeenth-century M . tuberculosis genome extracted from a calcified lung nodule. Removed from naturally mummified remains, the nodule provided an excellent preservation environment for the pathogen, and exhibited minimal infiltration by exogenous bacteria. The nodule and surrounding lung tissue also showed exceptional preservation of host DNA, thus showing promise for this tissue type in ancient DNA investigations. Results Pathogen identification Computed tomography (CT) scans of the mummified remains of Bishop Peder Winstrup of Lund, Sweden revealed a calcified granuloma a few millimeters (mm) in size in the collapsed right lung together with two ~ 5 mm calcifications in the right hilum (Fig. 1 ). Primary tuberculosis causes parenchymal changes and ipsilateral hilar lymphadenopathy that is more common on the right side [ 15 ]. Upon resolution, it can leave a parenchymal scar, a small calcified granuloma (Ghon focus), and calcified hilar nodes, which are together called a Ranke complex. In imaging, this complex is suggestive of previous tuberculosis infection, although histoplasmosis can have the same appearance [ 16 ]. Histoplasmosis, however, is very rare in Scandinavia and is more often seen in other parts of the world (e.g., the Americas) [ 17 ]. 
The imaging findings were therefore considered to result from previous primary tuberculosis. One of the calcified hilar nodes was extracted from the remains during video-assisted thoracoscopic surgery, guided by fluoroscopy. The extracted material was further subsampled for genetic analysis. DNA was extracted from the nodule and accompanying lung tissue using protocols optimized for the recovery of ancient, chemically degraded, fragmentary genetic material [ 18 ]. The library (LUND1) was shotgun sequenced to a depth of approximately 3.7 million reads. Fig. 1 CT image of Ranke complex. CT image of Peder Winstrup’s chest in a slightly angled axial plane with the short arrow showing a small calcified granuloma in the probable upper lobe of the collapsed right lung, and two approximately 5 mm calcifications in the right hilum together suggesting a Ranke complex and previous primary tuberculosis. The more lateral of the two hilar calcifications was extracted for further analysis. In addition, there are calcifications in the descending aorta suggesting atherosclerosis (arrowhead) Adapter-clipped and base quality-filtered reads were taxonomically binned with MALT [ 19 ] against the full NCBI Nucleotide database (“nt,” April 2016). In this process, 3,515,715 reads, or 95% of the metagenomic reads, could be assigned to taxa contained within the database. Visual analysis of the metagenomic profile in MEGAN6 [ 20 ] revealed the majority of these reads, 2,833,403 or 81%, were assigned to Homo sapiens . A further 1724 reads were assigned to the Mycobacterium tuberculosis complex (MTBC) node. Importantly, no other taxa in the genus Mycobacterium were identified, and the only other identified bacterial taxon was Ralstonia solanacearum (Fig. 2 a), a soil-dwelling plant pathogen frequently identified in metagenomic profiles of archeological samples [ 22 , 23 ] (Additional File 1 ). Fig. 2 Screening of sequencing data from LUND1 shows preservation of host and pathogen DNA. a Krona plots reflecting the metagenomic composition of the lung nodule. The majority of sequencing reads were aligned to Homo sapiens ( n = 2,833,403), demonstrating extensive preservation of host DNA. A small portion of reads aligned to bacterial organisms, and 80% of these reads were assigned to the MTBC node ( n = 1724). b Damage plots generated from sequencing reads mapped directly to a reconstructed MTBC ancestor genome [ 21 ], demonstrating a pattern characteristic of ancient DNA Pre-processed reads were mapped to both the hg19 human reference genome and a reconstructed MTBC ancestor (TB ancestor) [ 21 ] using BWA as implemented in the Efficient Ancient Genome Reconstruction (EAGER) pipeline [ 24 ]. Reads aligned to hg19 with direct mapping constituted an impressive 88% of the total sequencing data (Additional File 2 ). Human mitochondrial contamination was extremely low, estimated at only 1–3% using Schmutzi [ 25 ] (Additional File 3 ). Reads were also mapped to the TB ancestor (Table 1 ). After map quality filtering and read de-duplication, 1458 reads, or 0.045% of the total sequencing data, aligned to the reference (Table 1 ) and exhibited cytosine-to-thymine damage patterns indicative of authentic ancient DNA (Fig. 2 b) [ 26 , 27 ]. Qualitative preservation of the tuberculosis DNA was slightly better than that of the human DNA, as damage was greater in the latter (Additional File 2 ).
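The cytosine-to-thymine damage check used here for authentication can be approximated directly from a BAM file. The following is a minimal sketch, not the paper's pipeline (the paper used DamageProfiler); it assumes an indexed BAM with MD tags, and the file name is illustrative:

```python
# Sketch: approximate a 5' C->T deamination profile from a BAM file.
# Assumes reads were mapped with MD tags present (e.g., via samtools calmd).
# This mimics what DamageProfiler/mapDamage report; the file name is illustrative.
import pysam

N = 25             # positions from the 5' end to profile
ct = [0] * N       # C->T mismatches at each read position
total_c = [0] * N  # reference-C observations at each read position

with pysam.AlignmentFile("LUND1_vs_TBancestor.bam", "rb") as bam:
    for read in bam.fetch():
        if read.is_unmapped or read.is_reverse:
            continue  # forward strand only, for simplicity
        seq = read.query_sequence
        # with_seq=True yields (read_pos, ref_pos, ref_base) triples; the
        # reference base is lowercase at mismatches. Requires the MD tag.
        for qpos, rpos, ref in read.get_aligned_pairs(with_seq=True):
            if qpos is None or rpos is None or qpos >= N:
                continue
            if ref.upper() == "C":
                total_c[qpos] += 1
                if seq[qpos] == "T":
                    ct[qpos] += 1

for i in range(N):
    freq = ct[i] / total_c[i] if total_c[i] else 0.0
    print(f"read position {i + 1}: C->T frequency {freq:.3f}")
```

A genuine ancient library typically shows C-to-T frequencies that are elevated at the first few 5′ positions and decay toward the read interior, as in Fig. 2b.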
Laboratory-based contamination, as monitored by negative controls during the extraction and library preparation processes, could be ruled out as the source of this DNA (Additional File 4 ). Table 1 Mapping statistics for LUND1 libraries Genomic enrichment and reconstruction Due to the clear but low-abundance MTBC signal, a uracil DNA glycosylase (UDG) library was constructed to remove DNA lesions caused by hydrolytic deamination of cytosine residues [ 28 ] and enriched with an in-solution capture [ 29 , 30 ] designed to target genome-wide data representing the full diversity of the MTBC (see the “ Methods ” section). The capture probes are based on a reconstructed TB ancestor genome [ 21 ]. The enriched library was sequenced using a paired-end, 150-cycle Illumina sequencing kit to obtain a full fragment-length distribution (Fig. S1 in Additional File 3 ). The resulting sequencing data was then aligned to the hypothetical TB ancestor genome [ 21 ], and the mapping statistics were compared with those from the screening data to assess enrichment (Table 1 ). Enrichment increased the proportion of endogenous MTBC DNA content by three orders of magnitude, from 0.045% to 45.652%, and deep sequencing yielded genome-wide data at an average coverage of approximately 141.5-fold. The mapped reads have an average fragment length of ~ 66 base pairs (Table 1 ). We further evaluated the quality of the reconstructed genome by quantifying the amount of heterozygous positions (see the “ Methods ” section). Derived alleles represented by 10–90% of the reads covering a given position with five or more reads of coverage were counted. Only 24 heterozygous sites were counted across all variant positions in LUND1. As a comparison, the other high-coverage (~ 125-fold) ancient genome included here—body92 from Kay et al. [ 6 ]—contained 70 heterozygous positions. Phylogeny and dating Preliminary phylogenetic analysis using neighbor joining (Figs. S2 and S3 in Additional File 3 ), maximum likelihood (Figs. S4 and S5 in Additional File 3 ), and maximum parsimony trees (Figs. S6 and S7 in Additional File 3 ) indicated that LUND1 groups within the L4 strain diversity of the MTBC, and more specifically, within the L4.10/PGG3 sublineage. This sublineage was recently defined by Stucki and colleagues as the clade containing L4.7, L4.8, and L4.9 [ 31 ] according to the widely accepted Coll nomenclature [ 32 ]. Following this, we constructed two datasets to support molecular dating of the full MTBC (Additional File 5 ) and L4 of the MTBC (Additional File 6 ). The dataset reflecting extant diversity of the MTBC was compiled as reported elsewhere [ 5 ], with six ancient genomes as calibration points. These included LUND1; two additional ancient genomes, body80 and body92, extracted from late eighteenth and early nineteenth century Hungarian mummies [ 6 ]; and three human-isolated Mycobacterium pinnipedii strains from Peru [ 5 ], encompassing all available ancient M . tuberculosis genomes with sufficient coverage to call SNPs confidently after stringent mapping with BWA [ 33 ] (see the “ Methods ” section; Additional File 5 ). Mycobacterium canettii was used as an outgroup. In generating an alignment of variant positions in this dataset, we excluded repetitive regions and regions at risk of cross-mapping with other organisms as done previously [ 5 ], as well as potentially imported sites from recombination events, which were identified using ClonalFrameML [ 34 ] (Additional File 7 ).
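As an aside on the heterozygosity screen described above (minor alleles at 10–90% frequency at sites covered by at least five reads): given per-site allele counts, the tally is a few lines of code. A minimal sketch, assuming a single-sample VCF that records allelic depths in an AD field; the file name and the use of a VCF are assumptions, not the study's actual workflow:

```python
# Sketch: count 'heterozygous' sites as defined in the text -- positions with
# >=5x coverage where a non-reference allele is carried by 10-90% of reads.
# Assumes a single-sample VCF whose FORMAT includes AD (allelic depths);
# the file name is illustrative.
def count_het_sites(vcf_path, min_depth=5, lo=0.10, hi=0.90):
    het = 0
    with open(vcf_path) as vcf:
        for line in vcf:
            if line.startswith("#"):
                continue
            fields = line.rstrip("\n").split("\t")
            fmt = fields[8].split(":")
            sample = fields[9].split(":")
            if "AD" not in fmt:
                continue
            depths = [int(d) for d in sample[fmt.index("AD")].split(",")]
            total = sum(depths)
            if total < min_depth:
                continue
            # depths[0] is the reference allele; the rest are ALT alleles
            for alt_depth in depths[1:]:
                if lo <= alt_depth / total <= hi:
                    het += 1
                    break
    return het

print(count_het_sites("LUND1.vcf"))  # the paper reports 24 such sites for LUND1
```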
We chose to exclude the potential recombinant sites identified by ClonalFrameML despite M . tuberculosis being generally recognized as a largely clonal organism with no recombination or horizontal gene transfer, as these phenomena have been found to occur in M . canettii [ 35 , 36 ]. Only twenty-three variant sites were lost from the full MTBC alignment as potential imports. We called a total of 42,856 variable positions in the dataset as aligned to the TB ancestor genome. After incompletely represented sites were excluded, 11,716 were carried forward for downstream analysis. Prior to performing the Bayesian molecular dating analysis, we assessed the dataset for clock-like structure with TempEst ( R 2 = 0.273; see the “ Methods ” section; Fig. S8 in Additional File 3 ). To explore the impact of the selected tree prior and clock model, we ran multiple model variants available in BEAST2 [ 37 ]. We first used both a strict and a relaxed clock model together with a constant coalescent model (CC+strict, CC+UCLD). We found minimal difference between the rates inferred by the two models. This finding, in addition to the low rate variance estimated in all models, suggests there is little rate variation between known branches of the MTBC. Nevertheless, the relaxed clock appeared to perform slightly better (Table 2 ). To explore models that allow for dynamic populations, we applied a Bayesian skyline coalescent (SKY+UCLD) and birth-death skyline prior (BDSKY+UCLD) combined with a relaxed clock model. In the BDSKY+UCLD model, the tree was conditioned on the root. In a prior study, Kühnert and colleagues used birth-death tree priors to investigate two modern tuberculosis outbreaks [ 38 ]. To our knowledge, this study is the first to use a birth-death tree prior to infer evolutionary dynamics of the MTBC while using ancient data for tip calibration. The BDSKY+UCLD model had the highest marginal likelihood value of all models applied to this dataset (Table 2 ). Table 2 Model comparison for full MTBC dataset A calibrated maximum clade credibility (MCC) tree was generated for the BDSKY+UCLD model, with 3258 years before present (BP) (95% highest posterior density [95% HPD] interval, 2190–4501 BP) as an estimated date of emergence for the MTBC (Fig. 3 a). Tree topology agrees with previously presented phylogenetic analyses of the full MTBC [ 3 , 5 , 39 ]. To test the meaningfulness of our ancient tip calibrations, we performed a date randomization test of this model in which we randomly shuffled the tip dates among the genomes in the dataset ten times and compared the clock rate estimates from the randomized models to that of the “true” BDSKY+UCLD model for the MTBC dataset [ 40 , 41 ]. For this dataset, the tip shuffling caused extremely slow convergence. Though only four out of ten randomized models reached an ESS of over 200 for the clock rate parameter, all randomizations reached an ESS of 100 or greater with combined chain lengths of over 1,000,000,000 (Additional File 10 ). Date randomizations are evaluated based on two criteria of differing stringency: (i) the mean rate estimate of the randomization does not fall within the 95% HPD interval of the original model, or (ii) the 95% HPD interval of the randomization does not overlap with that of the original model [ 40 ].
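Both criteria can be checked mechanically once each run's posterior mean and 95% HPD interval are in hand. A minimal sketch with placeholder numbers (none of the rate values below are the study's estimates):

```python
# Sketch: evaluate a date-randomization test against the two criteria in the
# text. Each model is summarized by its posterior mean clock rate and 95% HPD
# interval. All numbers below are placeholders for illustration.
original = {"mean": 2.5e-8, "hpd": (1.5e-8, 4.0e-8)}

randomized = [
    {"mean": 9.0e-9, "hpd": (4.0e-9, 1.4e-8)},
    {"mean": 1.2e-8, "hpd": (6.0e-9, 2.0e-8)},
    # ... one entry per tip-date shuffle
]

for i, run in enumerate(randomized, 1):
    # Criterion i: the randomized mean falls outside the original 95% HPD.
    crit_i = not (original["hpd"][0] <= run["mean"] <= original["hpd"][1])
    # Criterion ii (more stringent): the two HPD intervals do not overlap.
    crit_ii = run["hpd"][1] < original["hpd"][0] or run["hpd"][0] > original["hpd"][1]
    print(f"randomization {i}: criterion i {'met' if crit_i else 'not met'}, "
          f"criterion ii {'met' if crit_ii else 'not met'}")
```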
All randomizations for the MTBC dataset fulfilled the more stringent criterion ii, indicating the tip calibrations from the ancient genomes firmly informed our results (Additional File 10 ; Fig. S9 in Additional File 3 ). Fig. 3 MTBC maximum clade credibility tree. This MCC tree of mean heights was generated from the BDSKY+UCLD model as applied to the full MTBC dataset. Lineages are labeled on the right side. The ancient genomes are indicated by red asterisks and labeled on the side with their sample names. The outgroup is labeled as “ M . canettii .” The 95% HPD intervals of the heights of nodes ancestral to each lineage are indicated as (lower boundary–upper boundary) in years before present. Ancestral nodes are highlighted by a circle colored to match the lineage label. The time scale is expressed as years before present, with the most recent time as 2010. The accompanying skyline plot can be found in Fig. S10 in Additional File 3 The L4 dataset includes LUND1 and the two Hungarian mummies described above [ 6 ] as calibration points. We selected 149 modern genomes representative of the known diversity of L4 from previously published datasets (Additional File 3 ) [ 3 , 21 , 31 ]. A modern Lineage 2 (L2) genome was used as an outgroup. After the exclusion of sites as discussed above (Additional File 8 ), a SNP alignment of these genomes in reference to the reconstructed TB ancestor genome [ 21 ] included a total of 17,333 variant positions, excluding positions unique to the L2 outgroup. Only fifteen variant sites were lost from the L4 dataset alignment. After sites missing from any alignment in the dataset were excluded from downstream analysis, 10,009 SNPs remained for phylogenetic inference. A total of 810 SNPs were identified in LUND1, of which 126 were unique to this genome. A SNP effect analysis [ 42 ] was subsequently performed on these derived positions (Additional File 3 ; Additional File 9 ). We also assessed the L4 dataset for clock-like structure with TempEst ( R 2 = 0.113; see the “ Methods ” section; Fig. S9 in Additional File 3 ). We applied the same models as described above for the full MTBC dataset, with the addition of a birth-death skyline model conditioned on the origin of the root (BDSKY+UCLD+origin). All mean tree heights are within 250 years of each other and the 95% HPD intervals largely overlap. The BDSKY+UCLD and BDSKY+UCLD+origin models show the highest marginal likelihood values after stepping stone sampling. We employed the BDSKY+UCLD+origin model to determine if the estimated origin of the L4 dataset agreed with the tree height estimates for the full MTBC dataset. Intriguingly, the estimated origin parameter (Table 3 ), or the ancestor of the tree root, largely overlaps with the 95% HPD range for MTBC tree height as seen in Table 2 . Table 3 Model comparison for L4 dataset A calibrated MCC tree (Fig. 4 ) was generated based on the BDSKY+UCLD model for the L4 dataset. This model yielded an estimated date of emergence for L4 of 1445 BP (95% HPD, 929–2084 BP). The tree reflects the ten-sublineage topology presented by Stucki and colleagues [ 31 ], with LUND1 grouping with the L4.10/PGG3 sublineage. Due to the relatively low R 2 value for the relationship between sampling time and root-to-tip distance as calculated using TempEst, we also performed a date randomization test of the L4 BDSKY+UCLD model, in which we shuffled the sampling dates randomly among all genomes [ 40 , 41 ].
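The shuffling step itself simply permutes sampling dates among taxa before the BEAST2 input is regenerated. A minimal sketch; the taxon names and years are illustrative (only LUND1's 1679 date of death comes from the text):

```python
# Sketch: permute tip dates among taxa for a date-randomization replicate.
# Taxon names and most sampling years below are invented for illustration;
# in practice the shuffled dates are written back into the BEAST2 XML.
import random

tip_dates = {
    "LUND1": 1679,      # historically recorded year of death
    "body92": 1800,
    "modern_A": 2005,
    "modern_B": 2010,
}

def shuffle_tip_dates(dates, seed=None):
    rng = random.Random(seed)
    taxa = list(dates)
    years = list(dates.values())
    rng.shuffle(years)  # reassign the same set of dates to random taxa
    return dict(zip(taxa, years))

for replicate in range(10):
    print(shuffle_tip_dates(tip_dates, seed=replicate))
```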
We performed ten randomizations and compared the resulting clock rate estimates with that of the BDSKY+UCLD model with the true sampling dates (Table 3 ). Nine out of ten randomizations fulfilled the more stringent criterion ii, exhibiting no overlap between their 95% HPD intervals and that of the original (Additional File 11 ; Fig. S12 in Additional File 3 ). All ten randomizations satisfied criterion i (i.e., yielded a mean rate estimate that fell outside the 95% HPD interval of the rate from the model using true temporal values). Fig. 4 L4 maximum clade credibility tree. This MCC tree of mean heights was generated from the BDSKY+UCLD model as applied to the L4 dataset. Sublineages are labeled on the right side. The ancient genomes are indicated by red asterisks and labeled with their sample name. The Lineage 2 outgroup, represented by L2_N0020, is labeled on the side. The 95% HPD interval for node height is displayed for ancestral nodes of each sublineage as (lower boundary–upper boundary) in years before present. Ancestral nodes are highlighted by a circle colored to match the sublineage label. The time scale is expressed as years before present, with the most recent time as 2010. The accompanying skyline plot can be found in Fig. S13 in Additional File 3 Discussion The increasing number of ancient Mycobacterium tuberculosis genomes is steadily reducing the uncertainty of molecular dating estimates for the emergence of the MTBC. Here, using the ancient data available to date, we directly calibrate the MTBC time tree and confirm that known diversity within the complex is derived from a common ancestor that existed ~ 2000–6000 years before present (Fig. 3 ; Table 2 ) [ 5 , 6 ]. Our results support the hypothesis that the MTBC emerged during the Neolithic, and not before. The Neolithic revolution generally refers to the worldwide transition in lifestyle and subsistence from more mobile, foraging economies to more sedentary, agricultural economies made possible by the domestication of plants and animals. The period during which it occurred varies between regions. In Africa, where the MTBC is thought to have originated [ 3 , 43 , 44 , 45 ], the onset of these cultural changes, and animal domestication in particular, appears to have its focus around ~ 3000 BCE, or 5000 BP, across multiple regions [ 46 ]. The estimates presented here place the emergence of tuberculosis amidst the suite of human health impacts that took place as a consequence of the Neolithic lifestyle changes often referred to collectively as the first epidemiological transition [ 47 , 48 ]. Tuberculosis has left testaments to its history as a human pathogen in the archeological record [ 49 ], where some skeletal analyses have been interpreted to suggest tuberculosis in human and animal remains pre-dating the upper 95% HPD boundary for the MTBC tMRCA presented here [ 7 , 8 , 10 , 50 , 51 , 52 , 53 , 54 ]. However, it is important to explore the evolutionary history of the MTBC through molecular data. Furthermore, it is crucial to base molecular dating estimates on datasets that include ancient genomes, which expand the temporal sampling window and provide data from the pre-antibiotic era. Numerous studies have found long-term nucleotide substitution rate estimates in eukaryotes and viruses to be dependent on the temporal breadth of the sampling window, and it is reasonable to assume the same principle applies to bacteria [ 55 , 56 , 57 , 58 , 59 , 60 ].
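The clock-signal assessments reported above (the TempEst R 2 values) come from a root-to-tip regression: sampling dates are regressed against root-to-tip distances from a rooted tree, with the slope approximating the substitution rate. A minimal sketch with invented values:

```python
# Sketch: TempEst-style root-to-tip regression. Sampling dates are regressed
# against root-to-tip distances from a rooted tree; the slope approximates the
# substitution rate and R^2 summarizes temporal signal. All values are invented.
import numpy as np

dates = np.array([1679.0, 1800.0, 1820.0, 1995.0, 2005.0, 2010.0])
root_to_tip = np.array([3.1e-5, 3.4e-5, 3.3e-5, 4.0e-5, 4.1e-5, 4.2e-5])

slope, intercept = np.polyfit(dates, root_to_tip, 1)
predicted = slope * dates + intercept
ss_res = np.sum((root_to_tip - predicted) ** 2)
ss_tot = np.sum((root_to_tip - root_to_tip.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"rate ~ {slope:.2e} substitutions/site/year, R^2 = {r_squared:.3f}")
```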
Additionally, rate variation over time and between lineages, which may arise due to changing evolutionary dynamics such as climate and host biology, can impact the constancy of the molecular clock [ 58 , 59 ]. Though models have been developed to accommodate uncertainty regarding these dynamics [ 61 ], temporally structured populations can provide evidence and context for these phenomena over time and can aid researchers in refining models appropriate for the taxon in question [ 60 ]. Though we did not identify substantial rate variation within either the MTBC or L4 trees (Figs. S14 and S15 in Additional File 3 ), it is important that we draw these observations from temporally structured datasets and continue to do so in the future. In addition to our tMRCA estimate for the MTBC, we present one for L4, which is among the most globally dominant lineages in the complex [ 31 , 62 ]. Our analyses yielded tMRCA dates between ~ 900 and 2500 years before present, as extrapolated from the 95% HPD intervals of all models (Table 3 ), with mean dates spanning from 320 to 691 CE. These results are strikingly similar to those found in two prior publications and support the idea proposed by Kay and colleagues that L4 may have emerged during the late Roman period [ 5 , 6 ]. However, there exist discrepancies between different estimates for the age of this lineage in available literature that overlap with the upper [ 63 ] and lower [ 62 ] edges of the 95% HPD intervals reported here. In addition, recent phylogeographic analyses of the MTBC and its lineages had ambiguous results for L4, with the internal nodes being assigned to either African or European origins depending on the study or different dataset structures used within the same study [ 62 , 63 ]. This finding indicates a close relationship between ancestral L4 strains in Europe and Africa [ 62 , 63 ]. Stucki and colleagues delineated L4 into two groups based on the extent of their geographic distribution: globally distributed “generalist” sublineages and highly local “specialist” sublineages that do not appear outside a restricted geographical niche [ 31 ]. Thus far, the “specialist” sublineages are found regionally on the African continent. A clear phylogenetic relationship explaining the distinction between geographically expansive and limited strains has not been established. Specifically, LUND1 falls within the globally distributed, “generalist” L4.10/PGG3 sublineage that shares a clade with two “specialist” sublineages: L4.6.1/Uganda and L4.6.2/Cameroon (Fig. 4 ) [ 31 ]. Elucidating the phenomenon that separated L4.10/PGG3 and the L4.6 lineages could offer relevant clues about the evolutionary relationship between specific populations of MTBC organisms and specific populations of humans, whether by selection or genetic drift, as discussed elsewhere [ 44 , 64 ]. Assuming modern L4 diversity in Africa was driven by exchanges between Europe and Africa [ 62 , 63 ], why do we not see the L4.6 lineages more frequently in European populations as we do their sister clade? The current discrepancies over the age and geographic origin of L4 make interpretations of existing data unreliable for questions of such specificity and complexity at this time. These discrepancies could be due to differences in genome selection, SNP selection, and/or model selection and parameterization. It is unlikely we will gain clarity until more diverse, high-quality ancient L4 genomes are generated, creating a more temporally and geographically structured dataset.
Comparing the results presented here with those from prior studies, mutation rate estimates in the L4 and full MTBC analyses were lower than previous estimates for comparable datasets, but within the same order of magnitude, with all mean and median estimates ranging between 1E−8 and 5E−8 [ 5 , 6 ] (Table 2 ). Nucleotide substitution rates inferred based on modern tuberculosis data are close to but slightly higher than those based on ancient calibration, with multiple studies finding rates of approximately 1E−7 substitutions per site per year [ 4 , 65 ]. Despite a strict clock model having been rejected by the MEGA-CC molecular clock test [ 66 ] for both the L4 and full MTBC datasets, the clock rate variation estimates do not surpass 9E−17 in any model. Additionally, there is little difference between the clock rates estimated in the L4 and full MTBC datasets, suggesting the rate of evolution in L4 does not meaningfully differ from that of the full complex (Tables 2 and 3 ; Fig. 5 ). Fig. 5 Substitution rate comparison across models and studies. Mean substitution rate per site per year for all models is expressed by a filled circle, with extended lines indicating the 95% HPD interval for that parameter. The Bos et al. [ 5 ] and Kay et al. [ 6 ] ranges are based on the reported rate values in each study. The Bos et al. [ 5 ] range is based on a full MTBC dataset, while the Kay et al. [ 6 ] range is based on an L4 dataset. All values presented here fall within one order of magnitude Importantly, we explored our data through multiple models, including birth-death tree priors. In our opinion, these models offer more robust parameterization options for heterochronous datasets that are unevenly distributed over time, such as those presented here, by allowing for uneven sampling proportions across different time intervals of the tree [ 67 ]. Recent studies have demonstrated the importance of selecting appropriate tree priors for the population under investigation, as well as the differences between birth-death and coalescent tree priors [ 68 , 69 ]. It is notable that the estimates reported here roughly agree across multiple demographic and clock models implemented in BEAST2. The estimate of the origin height for the L4 dataset as calculated with the birth-death skyline model overlaps with the 95% HPD intervals for the tree height estimates across models in the full MTBC dataset. In addition to confirming the findings of prior publications, this study contributes a high-coverage, contamination-free, and securely dated ancient M . tuberculosis genome for future dating efforts, which may include more ancient data or more realistic models. Much of this quality likely comes from the unique preservation environment of the calcified nodule. In the case of tuberculosis, such nodules form from host immunological responses in the waning period of an active pulmonary infection [ 70 ] and remain in lung tissue, characterizing the latent form of the disease. Host immune cells were likely responsible for the dominant signal of human DNA in the LUND1 metagenomic screening library (Fig. 1 , Supplementary Table 2 in Additional File 1 ). Similar levels of preservation have been observed through analyses of ancient nodules yielding Brucella [ 71 ] and urogenital bacterial infections [ 72 ], with pathogen preservation surpassing what we report here.
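To make the rates in Fig. 5 concrete, a back-of-envelope conversion shows why rates of 1E−8 to 5E−8 substitutions per site per year imply that even modest SNP distances correspond to centuries of divergence. The genome size and SNP distance below are illustrative round numbers, not values from the study:

```python
# Sketch: convert a per-site substitution rate into expected SNPs per genome
# per year, and a pairwise SNP distance into an approximate divergence time.
# Genome size and SNP distance are illustrative round numbers.
genome_size = 4.4e6        # sites in an M. tuberculosis genome (~4.4 Mb)
snp_distance = 500         # SNPs separating two hypothetical strains

for rate in (1e-8, 5e-8):  # substitutions/site/year, the range in Fig. 5
    snps_per_year = rate * genome_size
    # Divergence accumulates along both branches from the common ancestor.
    tmrca_years = snp_distance / (2 * snps_per_year)
    print(f"rate {rate:.0e}: {snps_per_year:.3f} SNPs/genome/year, "
          f"~{tmrca_years:,.0f} years to a common ancestor")
```

At these rates a genome accumulates roughly 0.04–0.22 SNPs per year, so a 500-SNP pairwise distance would correspond to a common ancestor on the order of one to several thousand years ago.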
LUND1 avoided multiple quality-related problems often encountered in the identification and reconstruction of ancient genetic data from the MTBC. The genome is of high quality both in terms of its high coverage and low heterozygosity. Despite the low quantity of MTBC DNA detected in the preliminary screening data, in-solution capture enriched the proportion of endogenous DNA by three orders of magnitude (Table 1 ). The resultant genomic coverage left few ambiguous positions at which multiple alleles were represented by greater than 10% of the aligned reads. This extremely low level of heterozygosity indicated that LUND1 contained a dominant signal of only one MTBC strain. This circumvented analytical complications that can arise from the simultaneous presence of multiple MTBC strains associated with mixed infections or from the presence of abundant non-MTBC mycobacteria stemming from the environment. The preservation conditions of Bishop Winstrup’s remains, mummified in a crypt far from soil, left the small MTBC signal unobscured by environmental mycobacteria or by the dominance of any other bacterial organisms (Fig. 2 a). The unprecedented quality of LUND1 and the precision of its calibration point (historically recorded year of death) made it ideal for Bayesian molecular dating applications. While the high quality and securely dated ancient genome presented here offered advantages in a molecular dating approach, there are caveats to the results of this study. First, this analysis excludes diversity within M . canettii —a bacterium that can cause pulmonary tuberculosis—from the MTBC dataset, and as such, our estimate does not preclude the possibility of a closely related ancestor having caused infections indistinguishable from tuberculosis in humans before 6000 BP. The inferred tMRCA could be restricted to a lineage that survived an evolutionary bottleneck or selective sweep, possibly connected to its virulence in humans as suggested elsewhere, albeit as a considerably more ancient event [ 45 , 73 , 74 ]. It is possible there were pathogenic sister lineages to the MTBC that existed prior to this reduction in diversity and are not represented by extant MTBC diversity. Additionally, despite the use of ancient data, our temporal sampling window is still narrow given the estimated age of the MTBC and L4. For the MTBC dataset no samples pre-date 1000 years before present, and for L4, no samples predate 350 years before present. It could be argued the ancient L4 genomes available to date represent samples taken in the midst of an epidemic—namely, the “White Plague” of tuberculosis, which afflicted Europe between the seventeenth and nineteenth centuries [ 75 ]. For a slow-evolving bacterial pathogen like tuberculosis, it is possible our sampling window of ancient genomes is subject to the very issue they are meant to alleviate: the time dependency of molecular clocks [ 55 , 57 , 58 , 59 ]. The genomes sampled from pre-contact Peruvian remains do not derive from a known epidemic period in history and add temporal spread to our MTBC dataset, but also belong to a clade of animal-associated strains ( M . pinnipedii ) that may have been subject to dramatically different evolutionary pressures compared to the human-associated lineages of the complex due to differing host biology and population dynamics. However, our use of a relaxed clock model allowed for the estimation and accommodation of variable rates across different branches of the complex. 
We do not see evidence for divergent substitution rates among the branches leading to the Peruvian M . pinnipedii strains (Fig. S14 in Additional File 3 ). On a related matter, we may be missing diversity for some lineages (e.g., L6, L7, animal lineages) for which whole genome data is sparse. The available ancient MTBC genomes also suffer from a lack of lineage diversity, with only pinniped strains and L4 represented. We furthermore qualify our BDSKY results by acknowledging our models required the specification of priors for the rho parameter (the sampling proportion of the total population at discrete time points). We chose rho priors (see Additional File 3 ) assuming that our modern genomes represented a greater sampling proportion of the total contemporaneous MTBC and L4 populations than our ancient genomes. This assumption alone made this parameterization less arbitrary than the assumptions inherent in the coalescent-based methods that have been utilized in the past for similar time-sampled analyses of the MTBC and other pathogens, which assume random sampling at uniform rates across all time periods. We also acknowledge that skyline models assume panmictic populations, and the datasets presented here do contain spatial subdivision, which may bias estimates regarding population dynamics. However, this aspect of our datasets is unlikely to bias our molecular clock estimates. As stated above, the agreement of multiple models to reach similar dates for the tMRCA of the MTBC and L4 reinforces our support of the hypothesis that the most recent common ancestor of the MTBC diversity we are aware of today emerged during the Neolithic. Filling the MTBC time tree with more ancient genomes from diverse time periods, locations, and lineages would have the potential to address the limitations listed above. The most informative data would (a) derive from an Old World context (i.e., Europe, Asia, or Africa) pre-dating the White Plague in Europe or (b) come from any geographical location or pre-modern time period, but belong to one of the MTBC lineages not yet represented by ancient data. An ideal data point, which would clarify many open questions and seeming contradictions related to the evolutionary history of the MTBC, would derive from Africa, the inferred home of the MTBC ancestor [ 3 , 43 , 44 , 45 ], and pre-date 2000 years before present. A genome of this age would test the lower boundaries of the 95% HPD tree height intervals estimated in the full MTBC models presented here. Until recently, it would have been considered unrealistic to expect such data to be generated from that time period and location. Innovations and improvements in ancient DNA retrieval and enrichment methods, however, have brought this expectation firmly into the realm of the possible [ 30 , 76 ]. Ancient bacterial pathogen genomes have now been retrieved from remains from up to 5000 years before present [ 77 , 78 , 79 ] and recent studies have reported the recovery of human genomes from up to 15,000-year-old remains from North Africa [ 80 , 81 ]. Conclusions Here, we offer confirmation that the extant MTBC, and all available ancient MTBC genomes, stem from a common ancestor that existed a maximum of 6000 years before present. Many open questions remain, however, regarding the evolutionary history of the MTBC and its constituent lineages, as well as the role of tuberculosis in human history. Elucidating these questions is an iterative process, and progress will include the generation of diverse ancient M . 
tuberculosis genomes, and the refinement and improved parameterization of Bayesian models that reflect the realities of MTBC (and other organisms’) population dynamics and sampling frequencies over time. To aid in future attempts to answer these questions, this study provides an ancient MTBC genome of impeccable quality and explores the first steps in applying birth-death population models to modern and ancient TB data. Methods Lung nodule identification The paleopathological investigation of the body of Winstrup is based on extensive CT scan examinations with imaging of the mummy and its bedding performed with a Siemens Somatom Definition Flash, 128 slice at the Imaging Department of Lund University Hospital. Ocular inspection of the body other than of the head and hands was not feasible, since Winstrup was buried in his episcopal robes and underneath the body was wrapped in linen strips. The velvet cap and the leather gloves were removed during the investigation. The body was naturally mummified and appeared to be well preserved, with several internal organs identified. The imaging was quite revealing. The intracranial content was lost, with remains of the brain in the posterior skull base. Further, the dental status was poor, with several teeth in the upper jaw affected by severe attrition, caries, and signs of tooth decay, as well as the absence of all teeth in the lower jaw. Most of the shed teeth were represented by closed alveoli, indicating antemortem tooth loss. Along with the investigation of the bedding, a small sack made of fabric was found behind the right elbow containing five teeth: two incisors, two premolars, and one molar. The teeth in the bag complemented the remaining teeth in the upper jaw. It is plausible that the teeth belonged to Winstrup and were shed several years before he died. A fetus approximately 5 months of age was also found in the bedding, underneath his feet. Both lungs were preserved but collapsed, with findings of a small parenchymal calcification and two ~ 5 mm calcifications in the right hilum (Fig. 1 ). The assessment was that these could constitute a Ranke complex, suggestive of previous primary tuberculosis [ 70 ]. A thoracoscopy was performed at the Lund University Hospital in a clinical environment whereby the nodules were retrieved. Furthermore, several calcifications were also found in the aorta and the coronary arteries, suggesting the presence of atherosclerosis. The stomach, liver, and gall bladder were preserved, and several small gallstones were observed. The spleen could be identified but not the kidneys. The intestines were present but collapsed, except for the rectum, which contained several large concrements. The bladder and the prostate could not be recognized. The skeleton showed several pathological changes. Findings on the vertebrae consistent with DISH (diffuse idiopathic skeletal hyperostosis) were present in the thoracic and the lumbar spine. Reduction of the joint space in both hip joints and the left knee joint indicates that Winstrup was affected by osteoarthritis. No signs of gout or osteological tuberculosis (i.e., Pott’s disease) were found. Neither written sources nor the modern examination of the body of Winstrup reveal the immediate cause of death. However, it is known that he was bedridden for at least 2 years preceding his death. Historical records indicate that gallstones caused him problems while traveling to his different parishes.
Additionally, he was known to have suffered from tuberculosis as a child, an infection that may have recurred in his old age. Sampling and extraction Sampling of the lung nodule, extraction, and library preparation were conducted in dedicated ancient DNA clean rooms at the Max Planck Institute for the Science of Human History in Jena, Germany. The nodule was broken using a hammer, and a 5.5 mg portion of the nodule was taken with lung tissue for extraction according to a previously described protocol with modifications [18]. The sample was first decalcified overnight at room temperature in 1 mL of 0.5 M EDTA. The sample was then spun down, and the EDTA supernatant was removed and frozen. The partially decalcified nodule was then immersed in 1 mL of a digestion buffer with final concentrations of 0.45 M EDTA and 0.25 mg/mL Proteinase K (Qiagen) and rotated at 37 °C overnight. After incubation, the sample was centrifuged. The supernatants from the digestion and initial decalcification steps were purified using a 5 M guanidine-hydrochloride binding buffer with a High Pure Viral Nucleic Acid Large Volume kit (Roche). The extract was eluted in 100 μl of a 10 mM Tris-hydrochloride, 1 mM EDTA (pH 8.0), and 0.05% Tween-20 buffer (TET). Two negative controls and one positive control sample of cave bear bone powder were processed alongside LUND1 to control for reagent/laboratory contamination and process efficiency, respectively. Library preparation and shotgun screening sequencing Double-stranded Illumina libraries were constructed according to an established protocol with some modifications [82]. Overhangs of DNA fragments were blunt-end repaired in a 50 μl reaction including 10 μl of the LUND1 extract, 21.6 μl of H2O, 5 μl of NEB Buffer 2 (New England Biolabs), 2 μl dNTP mix (2.5 mM), 4 μl BSA (10 mg/ml), 5 μl ATP (10 mM), 2 μl T4 polynucleotide kinase, and 0.4 μl T4 polymerase, then purified and eluted in 18 μl TET. Illumina adapters were ligated to the blunt-end fragments in a reaction with 20 μl Quick Ligase Buffer, 1 μl of adapter mix (0.25 μM), and 1 μl of Quick Ligase. Purification of the blunt-end repair and adapter ligation steps was performed using MinElute columns (Qiagen). Adapter fill-in was performed in a 40 μl reaction including 20 μl adapter ligation eluate, 12 μl H2O, 4 μl Thermopol buffer, 2 μl dNTP mix (2.5 mM), and 2 μl Bst polymerase. After the reaction was incubated at 37 °C for 20 min, the enzyme was heat deactivated with a 20-min incubation at 80 °C. Four library blanks were processed alongside LUND1 to control for reagent/laboratory contamination. The library was quantified using a real-time qPCR assay (LightCycler 480, Roche) with the universal Illumina adapter sequences IS7 and IS8 as targets. Following this step, the library was double indexed [83] with a unique pair of indices over two 100 μl reactions using 19 μl of template, 63.5 μl of H2O, 10 μl PfuTurbo buffer, 1 μl PfuTurbo (Agilent), 1 μl dNTP mix (25 mM), 1.5 μl BSA (10 mg/ml), and 2 μl of each indexing primer (10 μM). The master mix was prepared in a pre-PCR clean room and transported to a separate lab for amplification. The two reactions were purified and eluted in 25 μl of TET each over MinElute columns (Qiagen), then assessed for efficiency using a real-time qPCR assay targeting the IS5 and IS6 sequences in the indexing primers. The reactions were then pooled into one double-indexed library.
Approximately one third of the library was amplified over three 70 μl PCR reactions using 5 μl of template each and Herculase II Fusion DNA Polymerase (Agilent). The products were MinElute purified, pooled, and quantified using an Agilent TapeStation D1000 ScreenTape kit. LUND1 and the corresponding negative controls were sequenced separately on an Illumina NextSeq 500 using single-end, 75-cycle, high-output kits. Pathogen identification and authentication De-multiplexed sequencing reads belonging to LUND1 were processed in silico with the EAGER pipeline (v.1.92) [24]. ClipAndMerge was used for adapter removal, fragment length filtering (minimum sequence length, 30 bp), and base quality filtering (minimum base quality, 20). MALT v. 0.3.8 [19] was used to screen the metagenomic data for pathogens against the full NCBI Nucleotide database ("nt," April 2016) with a minimum percent identity of 85%, a minSupport threshold of 0.01, and a topPercent value of 1.0. The resulting metagenomic profile was visually assessed with MEGAN6 CE [20]. The adapter-clipped reads were additionally aligned to a reconstructed MTBC ancestor genome [21] with BWA [33] as implemented in EAGER (-l 1000, -n 0.01, -q 30). Damage was characterized with DamageProfiler in EAGER [84]. In-solution capture probe design Single-stranded probes for in-solution capture were designed using a computationally extrapolated ancestral genome of the MTBC [21]. The probes are 52 nucleotides in length with a tiling density of 5 nucleotides, yielding a set of 852,164 unique probes after the removal of duplicate and low-complexity probes. The number of probes was raised to 980,000 by random sampling among the generated probe sequences. A linker sequence (5′-CACTGCGG-3′) was attached to each probe sequence, resulting in probes 60 nucleotides in length, which were printed on a custom-design one-million-feature array (Agilent). The printed probes were cleaved off the array, biotinylated, and prepared for capture according to Fu et al. [30]. UDG library preparation and in-solution capture Fifty microliters of the original LUND1 extract were used to create a uracil-DNA glycosylase (UDG) treated library, in which the post-mortem cytosine-to-uracil modifications that cause characteristic damage patterns in ancient DNA are removed. The template DNA was treated in a buffer including 7 μl H2O, 10 μl NEB Buffer 2 (New England Biolabs), 12 μl dNTP mix (2.5 mM), 1 μl BSA (10 mg/ml), 10 μl ATP (10 mM), 4 μl T4 polynucleotide kinase, and 6 μl USER enzyme (New England Biolabs). The reaction was incubated at 37 °C for 3 h, and then 4 μl of T4 polymerase was added to the library to complete the blunt-end repair step. The remainder of the library preparation protocol, including double indexing, was performed as described above. The LUND1 UDG-treated library was amplified over two rounds of amplification using Herculase II Fusion DNA Polymerase (Agilent). In the first round, five reactions using 3 μl of template each were MinElute purified and pooled together. The second round of amplification consisted of three reactions using 3 μl of template each from the first amplification pool. The resulting products were MinElute purified and pooled together. The final concentration, 279 ng/μl, was measured using an Agilent TapeStation D1000 ScreenTape kit (Agilent). A portion of the non-UDG library (see above) was re-amplified to 215 ng/μl. A 1:10 pool of the non-UDG and UDG amplification products was made to undergo capture.
A pool of all associated negative control libraries (Supplementary Table 2) and a positive control known to contain M. tuberculosis DNA also underwent capture in parallel with the LUND1 libraries. Capture was performed according to an established protocol [29], and the sample product was sequenced on an Illumina NextSeq with a 150-cycle paired-end kit to a depth of ~60 million paired reads. The negative controls were sequenced on a NextSeq 500 with a 75-cycle paired-end kit. Genomic reconstruction, heterozygosity, and SNP calling For the enriched, UDG-treated LUND1 sequencing data, de-multiplexed paired-end reads were processed with the EAGER pipeline (v.1.92) [24], adapter-clipped with AdapterRemoval, and aligned to the MTBC reconstructed ancestor genome with BWA (-l 32, -n 0.1, -q 37). Previously published ancient and modern Mycobacterium tuberculosis genomic data (Supplementary Table 4, Supplementary Table 5) were processed as single-end sequencing reads, but otherwise handled identically in the EAGER pipeline. Genome Analysis Toolkit (GATK) UnifiedGenotyper was used to call SNPs using default parameters and the EMIT_ALL_SITES output option [85]. We used MultiVCFAnalyzer (v0.87) [5] to create and curate SNP alignments for the L4 (Supplementary Table 5) and full MTBC (Supplementary Table 4) datasets based on SNPs called in reference to the TB ancestor genome [21], with repetitive sequences, regions subject to cross-species mapping, and potentially imported sites excluded. The repetitive and possibly cross-mapped regions were excluded as described previously [5]. Potentially imported sites were identified using ClonalFrameML [34] separately for each dataset, using full genomic alignments and trees generated in RAxML [86] as input, without the respective outgroups. Remaining variants were called as homozygous if they were covered by at least 5 reads, had a minimum genotyping quality of 30, and constituted at least 90% of the alleles present at the site (a minimal code sketch of this filter is given below). Outgroups for each dataset were included in the SNP alignments, but no variants unique to the selected outgroup genomes were included. Minority alleles constituting over 10% were called and assessed for LUND1 to check for a multiple-strain M. tuberculosis infection. Sites with missing or incomplete data were excluded from further analysis. Phylogenetic analysis Neighbor joining (Figs. S2 and S3 in Additional File 3), maximum likelihood (Figs. S4 and S5 in Additional File 3), and maximum parsimony (Figs. S6 and S7 in Additional File 3) trees were generated for the L4 and full MTBC datasets (Tables S4 and S5 in Additional File 1), with 500 bootstrap replications per tree. Maximum parsimony and neighbor joining trees were configured using MEGA-Proto and executed using MEGA-CC [66]. Maximum likelihood trees were configured and executed using RAxML [86] with the GTR+GAMMA (4 gamma categories) substitution model. Bayesian phylogenetic analysis of full MTBC and L4 datasets Bayesian phylogenetic analysis of the full MTBC was conducted using a dataset of 261 M. tuberculosis genomes including LUND1, five previously published ancient genomes [5, 6], and 255 previously published modern genomes (Additional File 5). Mycobacterium canettii was used as an outgroup for this dataset.
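For illustration only, the homozygous-call filter described above (at least 5 reads, genotyping quality of at least 30, and at least 90% allele support) can be sketched in a few lines of Python; the function and argument names below are hypothetical and are not part of MultiVCFAnalyzer or GATK.

```python
# Minimal sketch (not the authors' pipeline) of the homozygous-call filter
# described in the text. Thresholds mirror the stated criteria: >=5 reads,
# genotyping quality >=30, >=90% allele support at the site.

def call_homozygous(depth: int, genotype_quality: float, allele_support: float) -> bool:
    """Return True if a variant site passes the filter described above."""
    return depth >= 5 and genotype_quality >= 30 and allele_support >= 0.90

# A site covered by 12 reads, GQ 45, with 11/12 reads (91.7%) supporting the
# alternate allele would be retained.
assert call_homozygous(12, 45, 11 / 12)
# A site with only 80% allele support would be excluded (and, per the text,
# minority alleles over 10% were assessed as a possible mixed-strain signal).
assert not call_homozygous(12, 45, 0.80)
```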
Bayesian phylogenetic analysis of L4 of the MTBC was conducted using a dataset of 152 genomes including three ancient genomes presented here and in a previous publication [6] and 149 previously published modern genomes (Additional File 6). Body80 and body92 were selected out of the eight samples presented by Kay and colleagues based on multiple criteria. Multiple samples from that study proved to be mixed-strain infections. Apart from body92, these samples were excluded from this analysis due to our present inability to separate strains with retention of unique positions. Body92 had a clearly dominant strain, estimated by Kay et al. [6] to constitute 96% of the tuberculosis DNA in the sample, and stringent mapping in BWA [33] (-l 32, -n 0.1, -q 37) for this project found the genome to have 124-fold coverage when mapped against the TB ancestor. Given the degree of dominance and the high coverage, we could confidently call variant positions from the dominant strain (Fig. S16a in Additional File 3). Body80 was the only single-strain sample from that collection to have sufficient coverage (~8x) for confident SNP calling after stringent mapping (Fig. S16b in Additional File 3). For selection criteria for the modern genomes, please see Additional File 3. L2_N0020 was used as an outgroup. The possibility of equal evolutionary rates in both datasets was rejected by the MEGA-CC molecular clock test [66]. TempEst [87] was also used to assess temporal structure in the phylogeny prior to analysis with BEAST2 [37]. For the full MTBC alignment, R² = 0.273, and for the L4 alignment, R² = 0.113 (Figs. S8 and S9 in Additional File 3). We generated a maximum likelihood tree and alignment for the full MTBC excluding the animal-associated lineages, and consequently excluding the ancient M. pinnipedii genomes, to test whether limiting the dataset to the human lineages produced a stronger temporal signal. Without the anchor of the ancient M. pinnipedii genomes, the temporal signal for the full complex was reduced (R² = 0.06), as all ancient calibration points were then limited to Lineage 4 (Fig. S17 in Additional File 3). When the root-to-tip distances are plotted with points labeled according to lineage or sublineage, it becomes clear that clade membership largely drives the root-to-tip distances of the genomes. However, a temporal signal remains in the data. A correction for static positions in the M. tuberculosis genome not included in the SNP alignment was included in the configuration file. A "TVM" substitution model, selected based on results from ModelGenerator [88], was implemented in BEAUti as a GTR+G4 model with the AG rate parameter fixed to 1.0. LUND1, body80, and body92 were tip-calibrated using year of death, which was available for all three individuals (Additional File 6). The three ancient Peruvian genomes were calibrated using the mid-point of their OxCal ranges (Additional File 5) [5]. We performed tip sampling for all modern genomes excluding the outgroup over a uniform distribution between 1992 and 2010 in all models. The outgroup was fixed to 2010 in every case. All tree priors were used in conjunction with an uncorrelated relaxed lognormal clock model. The constant coalescent model was also used in conjunction with a strict clock model. Two independent MCMC chains of at least 200,000,000 iterations were computed for each model. If the ESS for any parameter was below 200 after the chains were combined, they were resumed with additional iterations.
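As an aside, the TempEst-style temporal-signal assessment referenced above reduces to an ordinary least-squares regression of root-to-tip distance on sampling date; the R² values quoted for the two alignments are the coefficients of determination of such fits. The sketch below uses invented placeholder values, not data from either dataset.

```python
# Minimal sketch of a TempEst-style temporal-signal check (not the authors'
# code): regress root-to-tip distance on sampling date and report R^2.
# All values below are illustrative placeholders.
import numpy as np

dates = np.array([1000.0, 1650.0, 1850.0, 1995.0, 2005.0, 2010.0])          # sampling years
root_to_tip = np.array([0.00020, 0.00035, 0.00041, 0.00050, 0.00052, 0.00055])

slope, intercept = np.polyfit(dates, root_to_tip, 1)
predicted = slope * dates + intercept
ss_res = np.sum((root_to_tip - predicted) ** 2)
ss_tot = np.sum((root_to_tip - root_to_tip.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

# The slope approximates a substitution rate (distance per year); the
# x-intercept approximates the root age under a strict clock.
print(f"rate ~ {slope:.2e}/year, root ~ {-intercept / slope:.0f}, R^2 = {r_squared:.3f}")
```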
The results were assessed in Tracer v1.7.1 with a 10% burn-in [89]. Trees were sampled every 20,000 iterations. The log files and trees for each pair of runs were combined using LogCombiner v2.4.7 [37]. An MCC tree was generated using TreeAnnotator with 10% burn-in [37]. Figures 3 and 4 were generated using the ggtree package [90] in R [91]. For details on the parameterization of the birth-death models, please see Additional File 3. Marginal likelihood was calculated using stepping-stone sampling [92] implemented in the MODELSELECTION package in BEAST2. The total chain length required for convergence in each model was split across 100 steps. Following this, we performed a date randomization test [41] for the BDSKY+UCLD model for each dataset. Dates were shuffled randomly among all genomes excluding the outgroup. For both datasets, the outgroup was used as an anchor for tip-dating of the "modern" genomes in each date-randomized model. Ten randomizations were generated for each model and run in at least two parallel chains. For the L4 dataset, the chains were run until the rate parameter reached an ESS of at least 200 for every date-randomized model (Additional File 11). For the date randomizations of the full MTBC dataset, we reached sufficient ESS in four out of ten models. However, as noted above in the "Results" section, we reached ESS values greater than or equal to 100 for the rate parameter for all models. We present the rate estimates and rate parameter ESS values for all MTBC date randomizations (Additional File 10). Availability of data and materials Raw sequencing data generated within this study were uploaded to the NCBI Sequence Read Archive (SRA) (accession SRS6462469; BioProject PRJNA517266) [93]. These data include the non-UDG non-enriched screening library, the non-UDG enriched library, and the UDG-treated enriched library. The previously published data used in the MTBC dataset are available on the SRA and can be accessed as part of the following BioProject accessions: PRJNA244165 [94], PRJEB7454 [95], PRJNA186722 [96], PRJEB3128 [97], PRJEB3223 [98], PRJNA52007 [99], PRJNA39969 [100], PRJEB2138 [101], PRJNA52637 [102], PRJNA38491 [103], PRJNA49659 [104], PRJEB2092 [105], PRJEB2091 [106], and PRJNA244633 [107]. See Additional File 5 for sample-specific accession numbers. The previously published data used in the L4 dataset are available on the SRA and can be accessed as part of the following BioProject accessions: PRJEB7454 [95], PRJEB11460 [108], PRJEB3223 [98], PRJNA52007 [99], PRJNA52637 [102], PRJNA38491 [103], PRJNA39969 [100], and PRJNA49659 [104]. See Additional File 6 for sample-specific accession numbers.
When anthropologist Caroline Arcini and her colleagues at the Swedish National Historical Museums discovered small calcifications in the extremely well-preserved lungs of Bishop Peder Winstrup, they knew more investigation was needed. "We suspected these were remnants of a past lung infection," says Arcini, "and tuberculosis was at the top of our list of candidates. DNA analysis was the best way to prove it." Up to one-quarter of the world's population is suspected to have been exposed to bacteria of the Mycobacterium tuberculosis complex, which cause tuberculosis (TB). Bishop Winstrup would have been one of many to fall ill during the onset of the so-called "white plague" TB pandemic that ravaged post-medieval Europe. Today, TB is among the most prevalent diseases, accounting for the highest worldwide mortality from a bacterial infection. The global distribution of TB has led to the prevailing assumption that the pathogen evolved early in human history and reached its global distribution via the hallmark human migrations tens of thousands of years ago, but recent work on ancient TB genomes has stirred up controversy over when this host-pathogen relationship began. In 2014, a team led by scientists from the University of Tübingen and Arizona State University reconstructed three ancient TB genomes from pre-contact South America—not only were the ancient strains unexpectedly related to those circulating in present-day seals, but comparison against a large number of human strains suggested that TB emerged within the last 6000 years. Understandably, skepticism surrounded this new estimate, since it was based entirely on ancient genomes that are not representative of the TB strains associated with humans today. "Discovery of the Bishop's lung calcification gave us the opportunity to revisit the question of tuberculosis emergence with data from an ancient European," comments Kirsten Bos, group leader for Molecular Paleopathology at the Max Planck Institute for the Science of Human History (MPI-SHH), who co-led the study. "If we could reconstruct a TB genome from Bishop Winstrup, where we know his date of death to the day, it would give a secure and independent calibration for our estimates of how old TB, as we know it, actually is." The highest quality ancient TB genome to date In a new study published this week in Genome Biology, Susanna Sabin of MPI-SHH and colleagues have reconstructed a tuberculosis genome from the calcified nodule discovered in Bishop Winstrup's remains.
[Image: scanning electron micrograph of Mycobacterium tuberculosis bacteria, which cause TB. Credit: NIAID]
"The genome is of incredible quality—preservation on this scale is extremely rare in ancient DNA," says Bos. Together with a handful of tuberculosis genomes from other work, the researchers revisited the question of the age of the Mycobacterium tuberculosis complex, with the year of the Bishop's death as a fine-tuned calibration point. Using multiple molecular dating models, all angles indeed point to a relatively young age of the Mycobacterium tuberculosis complex. "A more recent emergence of the tuberculosis pathogen complex is now supported by genetic evidence from multiple geographic regions and time periods," says Sabin, first author of the study. "It's the strongest evidence available to date for this emergence having been a Neolithic phenomenon."
This most recent shift in the narrative for when bacteria in the Mycobacterium tuberculosis complex became highly infectious to humans raises further questions about the context of its emergence, as it appears to have coincided with the rise of pastoralism and sedentary lifestyles. "The Neolithic transition seems to have played an important role for the emergence of a number of human pathogens," says Denise Kühnert, group leader for disease transmission research at MPI-SHH who co-led the investigation. "For TB in particular, stronger evidence could only come from an older genome, though these deeper time periods are unlikely to yield preservation on the scale of what we've seen for Bishop Winstrup," adds Bos. "Moving forward," Sabin further comments, "the hope is we will find adequately preserved DNA from time periods close to the emergence of the complex, or perhaps from its ancestor."
10.1186/s13059-020-02112-1
Biology
Developing a targeted, reliable, long-lasting genetic kill switch
Austin G. Rottinghaus et al, Genetically stable CRISPR-based kill switches for engineered microbes, Nature Communications (2022). DOI: 10.1038/s41467-022-28163-5 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-022-28163-5
https://phys.org/news/2022-02-reliable-long-lasting-genetic.html
Abstract Microbial biocontainment is an essential goal for engineering safe, next-generation living therapeutics. However, the genetic stability of biocontainment circuits, including kill switches, is a challenge that must be addressed. Kill switches are among the most difficult circuits to maintain due to the strong selection pressure they impart, leading to high potential for evolution of escape mutant populations. Here we engineer two CRISPR-based kill switches in the probiotic Escherichia coli Nissle 1917, a single-input chemical-responsive switch and a 2-input chemical- and temperature-responsive switch. We employ parallel strategies to address kill switch stability, including functional redundancy within the circuit, modulation of the SOS response, antibiotic-independent plasmid maintenance, and provision of intra-niche competition by a closely related strain. We demonstrate that strains harboring either kill switch can be selectively and efficiently killed inside the murine gut, while strains harboring the 2-input switch are additionally killed upon excretion. Leveraging redundant strategies, we demonstrate robust biocontainment of our kill switch strains and provide a template for future kill switch development. Introduction Probiotic microbes have become effective chassis for engineering diagnostic and therapeutic technologies. One of the most commonly engineered probiotic strains is Escherichia coli Nissle 1917 (EcN). Engineered strains of EcN have been successfully used to diagnose and treat bacterial infections 1, 2, cancers 3, 4, 5, gastrointestinal bleeding 6, inflammatory disorders 7, 8, 9, and obesity 10 in a variety of animal models. EcN strains engineered to treat metabolic disorders are being evaluated in human clinical trials with promising early-phase results 11, 12. However, there are important safety concerns associated with organisms genetically engineered for medical applications. Probiotics are living organisms that have the potential to mutate and evolve undesirable traits over the course of diagnosis or treatment. Such adaptations can include loss of beneficial functions of the engineered system, or gain of deleterious functions such as competitive exclusion of native microbes, pathogenic potential against the host, or environmental contamination if the microbes spread outside the host 13, 14, 15, 16. To mitigate these concerns, engineered probiotics should possess biocontainment systems that both enable selective removal from the host and prevent environmental dissemination 17. Biocontainment circuit designs focus on preventing proliferation in the wild and typically involve an input that is specific to the permissive environment and repressive to the killing circuit, such that upon exit from the permissive environment, the lethal components are expressed 18. Several such biocontainment strategies have been developed with varying degrees of efficacy and stability, including use of auxotrophy 11, 19, 20 and synthetic amino acids 21, 22, 23. While approaches like synthetic auxotrophy are evolutionarily stable in that they do not readily give rise to escape mutants 21, a limitation of these methods is that they may require the probiotic environment to be supplemented with additional survival factors ('permissive molecules').
Completely withholding these molecules in the gut, for example by administering an auxotrophic strain without the essential compound, effectively limits probiotic lifespan in vivo 11 , but it may also limit therapeutic potential depending on the rate of probiotic cell death in the absence of the permissive molecules. Alternatively, the permissive molecules may theoretically be supplied to patients in conjunction with the probiotic, but this design complicates administration as well as the selective removal of probiotics from the gut since the time to full clearance of the permissive molecules may be difficult to control. In addition, if the permissive molecules are natively present in the gut, these circuits can be compromised by cross-feeding 20 , 21 , 24 . An inverse kill switch design would then be one in which the baseline state in the gut is permissive without supplementation of exogenous molecules; correspondingly, the lethal components of the circuit are expressed in response to supplied inducers, or environmental signals external to the gut. Numerous genetic circuits have been developed that initiate cell death in response to a chemical inducer 25 , 26 , 27 , 28 . Similarly, biocontainment circuits have been developed in E. coli using temperature sensors tuned to differentiate physiological and environmental temperatures 18 , 29 , 30 . These kill switches control cell survival using a variety of mechanisms, including expression of toxins and lysis proteins 18 , 25 , 26 , 27 , degradation of essential proteins 26 , and cleavage and degradation of the genome by Cas3 proteins 28 . Both temperature-sensitive circuits designed by Piraner et al. and Stirling et al. used the E. coli CcdB-CcdA toxin-antitoxin system to control cell survival. The kill switch engineered by Piraner et al. used a modified version of the Salmonella typhimurium -native P tlpA - tlpA sensor and achieved a 4-log reduction in fecal cell number 30 , while the kill switch engineered by Stirling et al. used the E. coli- native P cspA promoter and achieved a 5-log reduction in fecal cell number 29 . Notably, functional redundancy offered by the combination of the P cspA -controlled temperature-sensitive kill switch with an orthogonal pH-sensitive kill switch mechanism synergistically improved in vitro killing efficacy such that surviving colony counts were below the 11-log limit of detection 18 . However, kill switches that induce cell death by expressing toxins, lysis proteins, and proteases are prone to mutational inactivation, often leading to population dominance of non-functional variants, or have not been characterized for genetic stability 26 . To overcome this stringent evolutionary selection, such kill switch systems must be designed to be highly stable. The temperature-inducible toxin-antitoxin kill switch engineered by Stirling et al. was shown to be stable over 140 generations of growth in vitro and at least 10 days of growth in the mouse gut 29 , while the combined temperature- and pH-inducible kill switch was stable over at least 100 generations in vitro 18 . In contrast, a CRISPR-Cas3-based system has been shown to be stable for 1700 generations when applied to plasmid removal 28 . However, it is unclear whether the same stability would persist if the system was applied to cell death. To engineer a genetically stable probiotic with viability that is controllable both inside and outside the host, we used a step-wise design approach to develop two CRISPR-Cas9-based kill switches (CRISPRks) in EcN. 
First, we built a single-input kill switch that can initiate probiotic death in response to the chemical inducer anhydrotetracycline (aTc). After iterative optimization for stability and efficacy, this circuit became the foundation for a 2-input kill switch that can additionally initiate death in response to the temperature decrease that occurs upon excretion from the host. Both designs allow the engineered microbe to be selectively removed in situ from the gut, while the final 2-input CRISPRks additionally prevents the microbe from surviving outside the body. To achieve genetic stability of the kill switches, we applied parallel approaches of genetic engineering and environmental control, including functionally redundant expression cassettes, antibiotic-free plasmid maintenance systems, knockouts of key drivers of DNA-mutagenesis in the SOS response, and provision of intra-niche microbial competition. Both kill switches exhibited significant long-term stability, with efficient killing maintained after 28 days (224 generations) of continuous growth in vitro. In mice, we innovatively leveraged intra-niche competitive inhibition to selectively disadvantage the target EcN subpopulation 31 . The single-input CRISPRks allowed the engineered probiotic to be completely eliminated from the mouse gut in response to aTc consumption. Similarly, the 2-input CRISPRks achieved virtually complete eradication of the probiotic population when both chemical- and temperature-induction were applied. The principles and genetic parts used in the CRISPRks described here (i.e., Cas9 and guide RNA) have been employed in various microbial species 32 , 33 , 34 , in contrast to the limited use of Cas3 in microbes, highlighting the potential for deploying this generalizable biocontainment platform to other engineered probiotic and microbial strains for diverse applications. Results Development of a chemical-inducible CRISPR-based kill switch for EcN We first designed a kill switch that induces EcN cell death in response to the chemical inducer aTc (Fig. 1a ; Table 1 ). To initiate cell death, we utilized a CRISPR/Cas9-based approach. E. coli can repair double-stranded DNA breaks (DSBs), as caused by Cas9, through RecA-dependent homologous recombination with sister genomes generated by DNA replication 35 . However, the damage caused by Cas9 is lethal if each copy of the replicating genome is cut 36 . To make CRISPR-based cell death dependent on the presence of aTc, we expressed Cas9 from a low-copy plasmid and genome-targeting guide RNAs (gRNAs) from a medium-copy plasmid using aTc-inducible P tet promoters. Fig. 1: Development of a chemical-inducible CRISPR-based kill switch for EcN. a Schematic of the aTc-inducible CRISPR-based kill switch. Cas9 and gRNAs are expressed on independent plasmids by aTc-inducible P tet promoters. TetR, which regulates the expression of P tet promoters in an aTc-dependent manner, is constitutively expressed on the gRNA plasmid. In the presence of aTc, TetR is unable to bind to its target promoters, leading to expression of Cas9 and the gRNAs. The Cas9/gRNA complex then binds to and cleaves its genomic targets, leading to cell death. b gRNA target locations in the EcN genome: the single copy groL gene (gray), the three copy ileTUV genes (blue), and the seven copy rrsABCDEGH genes (green). c Log 10 CFUs for the no gRNA control and six gRNA expression plasmids in wild-type EcN with the P tet - cas9 expression plasmid. 
d Sequencing results from 30 survivors from c with non-functional ile-2 kill switches. e Genomic neutral integration sites used for Ptet-cas9 integrations: within the lacZ gene, between rhtB and rhtC, between agaI and rsmI, and between exo and cea. f Log10 Fraction Viable for EcN strains with plasmid-based, one genome-integrated (Int X1), two genome-integrated (Int X2), three genome-integrated (Int X3), and four genome-integrated (Int X4) Ptet-cas9 expression cassettes. All strains contain the same ile-2 gRNA plasmid. g Sequencing results from 24 survivors from f with non-functional Int X4 kill switches. h Log10 CFUs for EcN with four genomic Ptet-cas9 integrations and different combinations of the groL-2 and ile-2 gRNAs. '+opt' represents the respective Ptet-gRNA cassette with an optimized Ptet promoter. aTc concentration was increased to 500 ng/mL, from 100 ng/mL in c, based on the transfer curve characterized in Supplementary Fig. 1e. For all sequencing, the fraction mutated is the fraction of total sequenced cassettes that contained a mutation. For all kill switch assays, exponential phase cells for each strain were induced with 0 and 100 or 500 ng/mL aTc for 1.5 h. Values and error bars are the average and standard deviation of biological triplicate, respectively. Statistical comparisons were performed using two-tailed unpaired t-tests (*P < 0.05; **P < 0.01; ***P < 0.001). See also Supplementary Fig. 1. Source data with p-values are provided as a Source Data file. Table 1 Names and descriptions of each no gRNA control and kill switch strain. Cas9-expressing plasmids include a low (~5)-copy pSC101 origin of replication, and gRNA-expressing plasmids include a medium (~20)-copy p15A origin of replication. We next sought a gRNA that could target the genome and initiate cell death with high efficiency. We hypothesized that gRNAs with multiple genomic binding sites would have higher killing efficiencies, as the gRNA-Cas9 complex would have a higher probability of locating a target site. In addition, multi-locus genome cleavage should decrease the probability of DSB rescue through homologous recombination 37, or of genomic mutations in the target sites rendering the cell immune to the kill switch. To explore the effect of target copy number, we tested the aTc-response of six total gRNAs for three target genes present at various copies throughout the genome: the single-copy groL gene, the three-copy ileTUV genes, and the seven-copy rrsABCDEGH genes (Fig. 1b, c). To quantify the efficiency of each gRNA, we defined the term 'fraction viable' as the ratio of colony forming units (CFUs) obtained in the non-permissive condition (+aTc) to CFUs obtained in the permissive condition (−aTc). Interestingly, the killing efficiencies of the multi-target gRNAs did not exceed those of the single-target gRNAs. At least one gRNA for each target gene achieved a fraction viable of 10^−4–10^−5. The kill switch rapidly triggered cell death, with maximum killing efficiencies detected after just 1.5 h of aTc induction (Supplementary Fig. 1a). However, efficiencies were significantly reduced by 2.5 h of induction, suggesting that a subpopulation of cells harboring inactive kill switches was able to repopulate.
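To make the 'fraction viable' metric defined above concrete, here is a minimal Python sketch; the CFU counts and the limit-of-detection convention are illustrative assumptions, not data or code from this study.

```python
# Minimal sketch of the 'fraction viable' metric defined above: CFUs in the
# non-permissive (+aTc) condition divided by CFUs in the permissive (-aTc)
# condition, reported on a log10 scale. Counts below are hypothetical.
import math

def log10_fraction_viable(cfu_plus_atc: float, cfu_minus_atc: float,
                          limit_of_detection: float = 1.0) -> float:
    """Return log10(fraction viable); when no colonies survive induction,
    report an upper bound set by the limit of detection (avoids log10(0))."""
    surviving = max(cfu_plus_atc, limit_of_detection)
    return math.log10(surviving / cfu_minus_atc)

# A strain yielding 2e4 CFUs with aTc and 1e9 CFUs without has a fraction
# viable of 2e-5, i.e., a log10 fraction viable of about -4.7.
print(log10_fraction_viable(2.0e4, 1.0e9))   # ~ -4.70
# Zero surviving colonies is reported as a bound, mirroring the '<-8'-style
# entries in the figure captions.
print(log10_fraction_viable(0, 1.0e9))       # -9.0 (an upper bound)
```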
Improving kill switch stability through functional redundancy To determine the source of kill switch inactivation, we identified isolates exhibiting loss-of-function from the aTc-induction assay, and sequenced the cas9 , gRNA, and tetR expression cassettes. While a small number of isolates harbored mutated gRNA (10 ± 14%) and tetR (3 ± 6%) cassettes, a large majority harbored mutated cas9 expression cassettes (80 ± 19%), with mutations predominantly in the P tet promoters (Fig. 1d ). To decrease the probability of Cas9 expression inactivation, we integrated functionally redundant P tet - cas9 expression cassettes into four genomic neutral integration sites 11 (Fig. 1e ). Each successive integration improved the efficiency of the ile-2 gRNA kill switch, with the four P tet - cas9 integration strain (Int X4) achieving a fraction viable 10 times lower than the plasmid-based Cas9 expression strain (Fig. 1f ). Chromosomal expression of four functionally redundant Cas9 cassettes did not alter the relative efficiency of the six gRNAs (Supplementary Fig. 1b ). Diminishing returns were observed by the fourth P tet - cas9 integration, suggesting a shift in inactivation method. We sequenced non-functional Int X4 isolates and identified the promoter of the P tet -gRNA expression cassette as the primary source of instability (Fig. 1g ). Applying the same principle of functional redundancy, we combined the groL-2 and ile-2 gRNA expression cassettes onto one plasmid and achieved aTc-dependent cell death below the limit of detection after 1.5 h and a fraction viable below 10 −6.6 (Fig. 1h ). However, this initial 2-gRNA kill switch caused significant cell death even in the absence of aTc, limiting the therapeutic potential of the strain. To optimize gRNA expression and minimize leaky killing, we simultaneously mutagenized the -35 sites of the two gRNA P tet promoters and assayed colonies from the library (Supplementary Fig. 1c , d ). We isolated one 2-gRNA kill switch that maintained full killing in the presence of aTc without leaky killing in the absence of aTc (Fig. 1h ). The optimized 2-gRNA kill switch achieved a fraction viable of less than 10 −8.6 , surpassing the killing efficiency of 10 −8 recommended by the NIH Office of Biotechnology Activities 38 . The optimized kill switch also displayed an improved aTc sensitivity relative to the single ile-2 gRNA kill switch (Supplementary Fig. 1e ). Eliminating the reliance on antibiotics for efficient killing When the antibiotic (ABX) used to maintain the gRNA expression plasmid was removed from selection plates, we found the killing efficiencies of both 2-gRNA kill switches to be severely reduced due to loss of the gRNA plasmid during replication (Fig. 2a , Supplementary Fig. 2a ). To enable stability of the kill switch plasmid constructs in the absence of antibiotics, we implemented a modified version of a previously described ABX-free plasmid maintenance method 39 , 40 . We expressed the essential gene infA (required for initiation of protein synthesis) using an intermediate strength constitutive promoter on the gRNA plasmid and knocked it out of the genome of the four P tet - cas9 integration strain. With this system, EcN needs to maintain the plasmid to provide InfA and survive. The antibiotic resistance gene was not removed from the plasmid to allow for CFU quantification in future experiments using mixed strains. Expressing InfA on the gRNA plasmid eliminated the dependence on ABX but lowered the efficiency of the kill switches (Fig. 
2b, Supplementary Fig. 2b). Fig. 2: Eliminating the reliance on antibiotics for efficient killing. a–c System schematics and log10 CFU values for the following kill switch strains: a optimized 2-gRNA circuit, which has four genome-integrated Ptet-cas9 expression cassettes (X4) and optimized groL-2 and ile-2 Ptet-gRNA expression cassettes; b ABX-independent 2-gRNA circuit, which has four genome-integrated Ptet-cas9 expression cassettes (X4), optimized groL-2 and ile-2 Ptet-gRNA expression cassettes, and an unoptimized constitutive infA expression cassette to complement a genomic infA knockout; and c CRISPRks, which has four genome-integrated Ptet-cas9 expression cassettes (X4), optimized groL-2, ile-2, and rrs-2 Ptet-gRNA expression cassettes, and an optimized constitutive infA expression cassette to complement a genomic infA knockout. Exponential phase cells for each strain were induced with 0 and 500 ng/mL aTc for 1.5 h (a), and 1.5 or 3 h (b and c) in LB with and without spectinomycin. CFUs were determined by plating onto LB agar with and/or without spectinomycin. Differences between the circuit in a and the circuits in b and c are highlighted in the construct schematics in red. gRNA, tetR, and infA cassettes are located on the same plasmid (connected lines), while cas9 is located exclusively in the genome. 'Both' denotes whether antibiotics were present in both the liquid and solid phase media. d aTc-inducible killing transfer curve for the CRISPRks strain after 3 h of induction in LB without antibiotics. Points represent experimental data while the line represents the fitted curve. e Long-term stability assessment of the CRISPRks strain. Each day for 28 days, three replicates of the strain were diluted 250X into fresh LB without antibiotics and grown for 24 h. Every 3–4 days, exponential phase cells were induced with 0 and 500 ng/mL aTc for 3 h and plated on LB agar without antibiotics for CFU quantification. f Log10 Fraction Viable of the CRISPRks strain in response to 500 ng/mL aTc in M9+0.4% glucose (poor), M9+0.4% glucose+0.2% casamino acids (intermediate), and LB (rich). g Correlation between generation time and fraction viable. Fraction viable values of <−8 or <−9 had no colonies obtained from cultures receiving aTc. Values and error bars are the average and standard deviation of biological triplicate, respectively. See also Supplementary Figs. 2, 3 and Supplementary Table 1. Source data are provided as a Source Data file. Switching a plasmid to InfA-based selection has been shown to affect plasmid copy number 39. We therefore improved the efficiency of the ABX-independent 2-gRNA kill switches using two methods: increasing gRNA expression by adding an rrs-2 gRNA expression cassette library and tuning the strength of the constitutive infA cassette. We performed both optimization methods independently for both the ABX-independent initial 2-gRNA and optimized 2-gRNA kill switches (Supplementary Fig. 2c–f). Both methods improved kill switch efficiency but failed to restore complete killing. However, when we incorporated a third gRNA expression cassette into the InfA-optimized, optimized 2-gRNA kill switch, we obtained two variants that achieved complete killing in the presence of aTc with uninhibited growth in the absence of aTc (Supplementary Fig. 3a, b). To identify the superior CRISPRks variant, we assayed each for its aTc-response time, aTc sensitivity, and long-term stability (Fig. 2c, Supplementary Fig. 3c–f).
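The fitted transfer curve referenced in the Fig. 2d caption above is, generically, a dose-response fit; a minimal sketch with a Hill-type function is shown below. The data points, initial guesses, and use of SciPy are illustrative assumptions, not the authors' analysis.

```python
# Minimal sketch of fitting an inducer transfer curve like Fig. 2d (not the
# authors' code): least-squares fit of a Hill function to hypothetical
# (aTc, normalized killing) points.
import numpy as np
from scipy.optimize import curve_fit

def hill(atc, top, ec50, n):
    """Fractional response as a function of inducer concentration (ng/mL)."""
    return top * atc**n / (ec50**n + atc**n)

atc = np.array([0.0, 5.0, 10.0, 20.0, 50.0, 100.0, 500.0])      # ng/mL (placeholder)
response = np.array([0.0, 0.10, 0.30, 0.48, 0.85, 0.95, 1.0])   # normalized killing

params, _ = curve_fit(hill, atc, response, p0=[1.0, 20.0, 2.0], bounds=(0, np.inf))
top, ec50, n = params
print(f"half-maximal aTc ~ {ec50:.1f} ng/mL (Hill coefficient ~ {n:.1f})")
```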
Both variants were inactivated at similarly low rates, with a small population of non-functional cells appearing after 14 days (112 generations) of growth in vitro. However, the Mut14 variant displayed a significantly better response time, aTc sensitivity, and long-term killing efficiency. The Mut14 CRISPRks achieved complete killing by 3 h of aTc induction, responded to aTc with a half-maximal concentration of 21 ng/mL, and displayed highly stable killing over 28 days of growth (Fig. 2 c– e ). In addition, the kill switch reliably achieved complete killing of EcN in diverse nutrient conditions, with a linear correlation between generation time and log fraction viable at 1 h (Fig. 2 f, g ). Complete killing is achieved by the CRISPRks in vivo after knocking out key components of the SOS response and providing intra-niche competition Having demonstrated the dependence of the kill switch on growth conditions, we next sought to predict kill switch efficacy in vivo. We developed an in vitro assay that more closely mirrors the gut, a complex environment with lower levels of oxygen, nutrients, and mixing than standard in vitro conditions. When induced with aTc in minimal media, micro-aerobically, and without shaking, the CRISPRks (circuit schematic in Fig. 2c ) achieved a fraction viable of 10 −3.3 after 24 h of induction (Fig. 3a ). After induction, we detected a large fraction of surviving colonies with kill switch inactivation (Fig. 3b ). By 48 h of induction, the CRISPRks monoculture had rebounded to pre-induction levels (Fig. 3a ). We hypothesized that supplementing the assay with microbes that fill the same niche as the CRISPRks strain could prevent the rapid regrowth of escape mutants and maximize killing efficiency 31 . We introduced competition into the assay by incubating the spectinomycin-resistant CRISPRks strain at a 1:1 ratio with a chloramphenicol-resistant control strain. In this mixture, the CRISPRks strain achieved an improved fraction viable of 10 −4.2 after 24 h of aTc induction (Fig. 3a , Supplementary Fig. 4a ). Although a population of the 1:1 consortium cells with non-functional kill switches was detected among the colonies that survived aTc induction, it was significantly smaller than the population detected from the CRISPRks monocultures (Fig. 3b ). Importantly, the fraction viable remained stable over the course of the experiment, suggesting that intra-niche competition is important to prevent the repopulation of escape mutants. Fig. 3: Complete killing is achieved by the CRISPRks in vivo after knocking out key components of the SOS response and providing intra-niche competition. a Log 10 Fraction Viable of the CRISPRks and CRISPRks Δrpdu kill switch strains when incubated alone or at a 1:1 ratio with the no gRNA control. Cells were cultured anaerobically at 37 °C without shaking in M9+0.4% glucose and induced with 0 or 500 ng/mL aTc. Fraction viable values of <-8 had no colonies obtained from cultures receiving aTc. Values and error bars are the average and standard deviation of biological triplicate, respectively. b Percent of cells that survived 24 h of aTc induction with non-functional kill switches. 24 colonies (8 from each replicate) of each condition were tested for functionality. Values and error bars are the average and standard deviation, respectively. c Sequencing results from 24 CRISPRks and 24 CRISPRks Δrpdu survivors (8 from each replicate) from a with non-functional ile-2 kill switches. 
The fraction mutated is the fraction of total sequenced cassettes that contained a mutation. Values and error bars are the average and standard deviation, respectively. d Schematic for testing the kill switches in vivo. 24 h after a streptomycin treatment, 10^8 CFUs of CRISPRks or CRISPRks Δrpdu were gavaged into C57BL/6 mice. 24 h after gavage, mice receiving each kill switch strain were split into two groups. One group received control water (5% sucrose) and the other group received aTc treatment water (10^5 ng/mL aTc + 5% sucrose). Fecal samples were collected 0, 6, 24, and 72 h after gavage for CFU quantification. e, f Log10 CFUs/mg feces of e CRISPRks or f CRISPRks Δrpdu strains from mice receiving control water or aTc treatment water. Points are the average of two technical replicates. Lines and error bars are the average and standard error from 6 (e) or 4 (f) mice across two cages, respectively. g Percent of CRISPRks and CRISPRks Δrpdu cells that survived aTc treatment with non-functional kill switches. To assay for kill switch functionality, exponential phase cells were induced with 0 and 500 ng/mL aTc for 3 h and the absorbance at 600 nm quantified. Induced to uninduced absorbance ratios within three standard deviations of the no gRNA control strain were deemed non-functional. 24 colonies (12 from each cage) at each timepoint were tested for functionality. Values and error bars are the average and standard deviation, respectively. h Schematic for testing control and kill switch co-gavage in vivo. 72 h after a streptomycin treatment, 10^8 CFUs of a 1:1 ratio mixture of the no gRNA control strain and CRISPRks Δrpdu were gavaged into C57BL/6 mice. 24 h after gavage, mice receiving each kill switch strain were split into two groups. One group received control water (5% sucrose) and the other group received aTc treatment water (10^5 ng/mL aTc + 5% sucrose). Fecal samples were collected 0, 6, 24, and 48 h after gavage for CFU quantification. At the conclusion of the experiment (192 h), mice were sacrificed and cecal contents were collected for CFU quantification. i, j Log10 CFUs/mg feces (i) or cecal contents (j) of CRISPRks Δrpdu from mice receiving a 1:1 ratio mixture of CRISPRks Δrpdu and the no gRNA control strain. Points are the average of two technical replicates. Cecal contents were sampled 8 days after gavage. Lines and error bars are the average and standard error from 4 mice across two cages, respectively. k Percent of CRISPRks Δrpdu cells that survived aTc treatment with non-functional kill switches when gavaged alone or in a 1:1 ratio mixture with the no gRNA control strain. 24 colonies (12 from each cage) at each timepoint were tested for functionality. Statistical comparisons were performed using two-tailed unpaired t-tests (b and c) or two-tailed mixed model ANOVA with Sidak's multiple comparisons (e–g, i, and j) (*P < 0.05; **P < 0.01; ***P < 0.001). See also Supplementary Figs. 4 and 5. Source data with p-values are provided as a Source Data file. CRISPRks escape mutants were observed in the assay emulating in vivo conditions but not when tested under more optimal conditions (Fig. 2f), suggesting that the mutations accumulated de novo during the assay rather than pre-existing in the inoculum population. Single DNA DSBs, as induced by CRISPR-Cas9, have been shown to strongly induce the SOS response in E. coli 41, 42.
This response increases the mutation rate of the cell through the expression of recombinase genes, including recA, and diverse error-prone DNA polymerase genes, including polB, dinB, and umuDC. Thus, we hypothesized that a nutrient- and growth-limited environment, which has been shown to reduce per-cell protein production 43, reduces expression of the kill switch, impairing the DNA-cleavage rate and allowing the survival of daughter cells with an induced SOS response and an elevated kill switch inactivation rate. To reduce SOS-response-mediated DNA mutagenesis, we knocked out recA, polB, dinB, and umuDC (Δrpdu) from the CRISPRks strain. The CRISPRks Δrpdu kill switch maintained complete killing in optimal growth conditions (Supplementary Fig. 4b). In an assay emulating in vivo conditions, the CRISPRks Δrpdu strain achieved a similar fraction viable to the CRISPRks strain in the absence of competition (Fig. 3a). However, CRISPRks Δrpdu had a significantly smaller population of non-functional survivors after 24 h of induction, indicating a reduced rate of inactivation, but with a similar mutation profile (Fig. 3b, c). Most mutations for both strains were in the promoters of the Ptet-cas9 and Ptet-gRNA expression cassettes. No mutations were identified in the infA cassette or the gRNA target sites. Strikingly, in the presence of competition from the control strain, CRISPRks Δrpdu cells were eliminated from the culture by 72 h of aTc induction (Fig. 3a, Supplementary Fig. 4a). Using the same assay, we found that the CRISPRks Δrpdu strain could be eliminated from the culture at control:kill switch ratios as low as 1:1000 (Supplementary Fig. 4c). This is important because it suggests that the mitigation of kill switch escape afforded by intra-niche competition is robust against stochastic variation in sample preparation and in the colonization efficiencies of the two strains; further, a lower proportion of control relative to engineered probiotic would help maximize therapeutic potential. Both the CRISPRks and CRISPRks Δrpdu strains showed similar long-term killing efficiencies, kill switch inactivation rates, and mutation patterns in optimal growth conditions (Fig. 4d–f). As such, while the Δrpdu knockouts significantly reduce induction-dependent kill switch inactivation, they have no impact on the mutation rate during normal DNA replication. Fig. 4: Development of a 2-input CRISPRks that responds to both aTc and reduced temperatures. a Schematic of the joint aTc- and temperature-inducible CRISPRks system. Cas9 is expressed from four genome-integrated aTc-inducible Ptet-cas9 cassettes, and gRNAs are expressed from three aTc-inducible Ptet-gRNA expression cassettes on a plasmid. TetR, which regulates the expression of the Ptet promoters in an aTc-dependent manner, is expressed by the PtlpA promoter on the gRNA plasmid. Activity of the PtlpA promoter is regulated by the TlpA* transcription factor. TlpA* is expressed from the same PtlpA promoter and regulates its own expression in a negative feedback loop. In the presence of aTc or at temperatures less than 33 °C, the Cas9-gRNA complex is produced, leading to cell death. b Log10 CFUs for the no gRNA control, initial 2-input CRISPRks, and optimized 2-input CRISPRks. Exponential phase cells for each strain were induced with 0 and 500 ng/mL aTc at 37 °C for 3 h.
Cultures were then plated on LB agar without antibiotics and incubated overnight at 37 °C (both 0 and 500 ng/mL aTc cultures) or for seven days at 30 °C (0 ng/mL cultures) for CFU quantification. c The expression and stability of TetR was optimized for the 2-input CRISPRks by simultaneously tuning the strength of the RBS and inserting a C-terminus SsrA degradation tag library onto TetR. d Long-term stability of the optimized 2-input kill switch. Each day for 28 days, three replicates of the strain were diluted 250X into fresh LB without antibiotics and grown for 24 h. Every 3–4 days, exponential phase cells were plated on LB agar without antibiotics and incubated overnight at 37 °C and for 7 days at 30 °C for CFU quantification. Fraction viable values of <−7 had no colonies obtained from cultures receiving aTc. e Cell death transfer curve with respect to temperature for the no gRNA control and the optimized 2-input kill switch. Exponential phase cells were incubated in a thermocycler for 5 h at a range of temperatures: 30, 30.6, 31.6, 32.8, 34.4, 35.6, 36.4, and 37 °C. Points represent experimental data while the line represents the fitted curve. Values and error bars are the average and standard deviation of biological triplicate, respectively. See also Supplementary Fig. 6 and Supplementary Table 1. Source data are provided as a Source Data file. We then tested the efficacy of the CRISPRks and CRISPRks Δrpdu strains in vivo. C57BL/6 mice were treated with streptomycin to enable EcN colonization 44 (Fig. 3d). 24 h later, 10^8 CFU of kill switch or control EcN, with or without the Δrpdu knockouts, were delivered to mice by oral gavage. 24 h after EcN gavage, mice were switched to aTc treatment water (10^5 ng/mL aTc + 5% sucrose) or control water (5% sucrose) ad libitum, and fecal samples were collected longitudinally (Fig. 3d). While both kill switch strains demonstrated significant killing activity in vivo, the CRISPRks Δrpdu strain exhibited improved killing efficacy, with a 4-log reduction in fecal titers after 24 h of aTc treatment, compared to a 1-log reduction for the CRISPRks strain at the same timepoint (Fig. 3e, f). In addition, as observed in vitro (Fig. 3b), the Δrpdu knockouts mitigated the incidence of escape mutants; fewer CRISPRks Δrpdu isolates (57 ± 37% versus 100 ± 0%) recovered from stool at the 24-h timepoint exhibited loss of kill switch activity in follow-up in vitro assays (Fig. 3g). CRISPRks-mediated reduction of EcN titers in vivo was transient even with the Δrpdu knockouts. Fecal kill switch titers in aTc-treated mice approached those of mice given control water by 72 h of treatment (Fig. 3e, f). At this timepoint, 100% of recovered kill switch isolates from aTc-treated mice demonstrated loss-of-function in follow-up in vitro assays (Fig. 3g), indicating a bloom of an escape mutant population in vivo. We additionally observed that fecal titers of the control strains were low on average and highly variable at treatment baseline (Supplementary Fig. 4g, h). In contrast, fecal titers of CRISPRks strains consistently reached 5–6 logs at treatment baseline (Fig. 3e, f). We hypothesized that the spectinomycin-resistant kill switch strains were cross-resistant to the streptomycin used for knockdown of the native microbiota, a phenotype not afforded to the chloramphenicol-resistant control strains.
Indeed, we observed that baseline fecal titers of chloramphenicol-resistant EcN were dependent on the length of the interval between streptomycin and EcN gavage, while those of an isogenic spectinomycin-resistant strain of EcN were not (Supplementary Fig. 5a , b ). We sought to optimize the microbiota knockdown protocol to consistently achieve equal baseline titers of the control and CRISPRks strains, either by extending the interval between streptomycin treatment and EcN gavage or through use of an alternative antibiotic, and determined that waiting 72 h after streptomycin treatment enables equivalent baseline titers of chloramphenicol-resistant and spectinomycin-resistant EcN (Supplementary Fig. 5b ). In contrast, omission of any antibiotic treatment resulted in undetectable levels of either EcN strain (Supplementary Fig. 5c ), indicating a failure to colonize, while treatment with carbenicillin 45 48 hours before EcN gavage resulted in low and variable titers of both EcN strains (Supplementary Fig. 5d ). We therefore carried out another experiment in mice with the CRISPRks Δrpdu and control Δrpdu strains with a 72 h interval between streptomycin and EcN gavage. Motivated by the hypothesis that intra-niche competition by an isogenic non-kill switch EcN strain can support eradication of the kill switch EcN population, we included an additional arm in which we co-gavaged the kill switch and control strains at a 1:1 ratio (Fig. 3h ). This is an attractive approach when the goal is eradication of a specific population of engineered microbes, rather than a probiotic species itself, from the gut. Using the 72 h interval, we were able to achieve equivalent baseline titers of spectinomycin-resistant CRISPRks Δrpdu and chloramphenicol-resistant control Δrpdu EcN (Fig. 3i , Supplementary Fig. 5e , f ). Remarkably, when CRISPRks Δrpdu EcN was co-gavaged with control Δrpdu EcN, the CRISPRks Δrpdu strain was no longer detectable in stool by 48 hours of aTc treatment (Fig. 3i ). Except for a single colony observed after plating undiluted fecal homogenate, the CRISPRks Δrpdu strain remained undetectable in cecal contents after 8 days of aTc treatment, demonstrating almost complete eradication of the engineered microbe from the mice (Fig. 3j ). In contrast, titers of control Δrpdu EcN in the same mice remained stable over the course of aTc treatment, as did titers of both strains (similarly co-gavaged) in mice treated with control water (Fig. 3i ). Correspondingly, we were still able to detect EcN in the ceca of these mice at sacrifice, with cecal titers not significantly different between the control strain in either treatment arm and the CRISPRks Δrpdu strain in the control water arm (Fig. 3j ). After 24 hours of aTc treatment in vivo, 100% of CRISPRks Δrpdu isolates recovered from mice colonized with only this strain exhibited loss of kill switch function in vitro, while no loss of function was observed in isolates of the same strain from co-gavaged mice (Fig. 3k ). These data indicate that a combined approach of kill switch induction and intra-niche competition can mitigate the emergence of escape mutant populations in vivo. Development of a 2-input CRISPRks that responds to both aTc and reduced temperatures Several temperature sensors have been characterized in other strains of E. coli 30 , 46 , 47 . Of these sensors, the P tlpA promoter with a modified TlpA regulator protein from Salmonella demonstrated the highest fold-change in expression in response to a temperature reduction. 
TlpA is a transcriptional regulator that assembles into homodimers and represses transcription from the P tlpA promoter at low temperatures 48 . At high temperatures, the dimers are unable to form, allowing transcription. To characterize the sensor in EcN, we obtained the P tlpA - tlpA temperature sensing system from Salmonella typhimurium SL1344 genomic DNA and used it to drive expression of a green fluorescent protein (GFP) reporter. The wild-type TlpA sensor induced GFP expression with a half maximal fluorescence at 43.4 °C, significantly above our desired range (Supplementary Fig. 6a ). We next inserted five amino acid substitutions identified by Pirner et al. to generate the modified TlpA sensor, TlpA* 30 . As previously demonstrated, the mutations shifted the half maximal fluorescence to a temperature of 35.6 °C (Supplementary Fig. 6b ). Expression of GFP was 97% repressed at a temperature of 34 °C compared to 37 °C. To make a 2-input CRISPRks that induces cell death in response to both aTc and a temperature downshift (i.e., an OR gate), we replaced the constitutive promoter driving TetR expression with the P tlpA - tlpA * temperature sensor (Fig. 4a ). With this design, TetR expression is inhibited by TlpA* at low temperatures (<33 °C), de-repressing expression of the kill switch. At 37 °C, TetR is expressed, and the kill switch remains sensitive to aTc. This initial 2-input CRISPRks achieved complete cell death in response to aTc, but only a modest reduction in fraction viable (10 −1.7 ) at 30 °C (Fig. 4b ). We hypothesized that the temperature-dependent response was poor because the circuit inhibits new TetR production but not the activity of TetR already in the cell. Numerous cell divisions may be required to dilute cytosolic TetR concentrations below the threshold for kill switch induction. To test this hypothesis, we performed a liquid-phase, temperature-response kill switch assay using a range of initial dilution factors, where larger starting dilutions would allow more growth and smaller per-cell TetR concentrations. The killing efficiency correlated directly with the dilution factor, confirming the dependence on cell divisions for kill switch activation (Supplementary Fig. 6c ). To improve the response to temperature, we sought to uncouple TetR removal from cell growth. We simultaneously optimized the expression level and stability of TetR by inserting a ribosome binding site (RBS) library and a C-terminal SsrA degradation tag library, respectively (Fig. 4c ). We tested over 500 2-input CRISPRks Δrpdu variants from the combined RBS and degradation tag library and identified eight that achieved efficient cell death at 30 °C with minimal growth inhibition at 37 °C (Supplementary Fig. 6d ). To select the best kill switch, we assayed select variants for long-term stability and temperature sensitivity (Supplementary Fig. 6e , f ). The final selected variant was highly stable over 28 days of growth (Fig. 4d ) and demonstrated an ultrasensitive response to temperature, with no killing at 33-37 °C and strong killing at temperatures below 32 °C (Fig. 4e ).
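The two-input logic developed in this section can be summarized as a toy Boolean model. The Python sketch below is conceptual only; the threshold values are illustrative stand-ins taken from the text (TlpA* represses tetR below roughly 33 °C), not fitted parameters, and it deliberately omits the TetR dilution kinetics that motivated the RBS and degradation tag optimization above.

```python
def kill_switch_active(temp_c: float, atc_ng_ml: float,
                       temp_threshold_c: float = 33.0,
                       atc_threshold_ng_ml: float = 1.0) -> bool:
    """Boolean abstraction of the 2-input CRISPRks OR gate.

    Low temperature represses tetR transcription via TlpA*; aTc inactivates
    TetR protein directly. Either input de-represses the Ptet-driven Cas9 and
    gRNA cassettes, triggering genomic cleavage and cell death.
    """
    temperature_input = temp_c < temp_threshold_c      # TlpA* dimers shut off tetR
    chemical_input = atc_ng_ml >= atc_threshold_ng_ml  # aTc releases TetR from Ptet
    return temperature_input or chemical_input

# Truth table for the OR gate: killing at 30 degrees C, with aTc, or both
for temp, atc in [(37, 0), (37, 500), (30, 0), (30, 500)]:
    print(f"{temp} C, {atc} ng/mL aTc -> active: {kill_switch_active(temp, atc)}")
```

Note that this abstraction hides exactly the failure mode discussed above: cytosolic TetR already present at the time of a temperature downshift delays activation until it is diluted or degraded.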
The 2-input CRISPRks efficiently kills EcN in response to both aTc treatment and excretion from mice We next tested the in vivo efficiency of the 2-input CRISPRks Δrpdu strain when gavaged into streptomycin-treated mice alone, or in co-gavage with the cognate control strain at a 1:1 ratio (Fig. 5a ). In a control arm, mice were instead singly-gavaged with the control strain as before. To assay chemical induction of the 2-input kill switch, 24 hours after EcN gavage, mice were switched to aTc treatment water or control water ad libitum, and fecal samples were collected longitudinally over a week (Fig. 5a ). To assay temperature induction of the 2-input kill switch, collected stool was processed at both 37 °C and room temperature (22 °C); results from samples processed at 37 °C reflect only within-gut aTc induction of kill switch activity, while those processed at 22 °C additionally reflect temperature induction of the kill switch after excretion. Fig. 5: The 2-input CRISPRks efficiently kills EcN in response to both aTc treatment and excretion from mice. a Schematic for testing co-gavage of the control strain and 2-input CRISPRks in vivo. 72 h after a streptomycin treatment, 10 8 CFUs of a 1:1 ratio mixture of the no gRNA control strain and 2-input CRISPRks were gavaged into C57BL/6 mice. 24 h after gavage, mice were split into two groups. One group received control water (5% sucrose) and the other group received aTc treatment water (10 5 ng/mL aTc + 5% sucrose). Fecal samples were collected 0, 24, 48, 72, and 168 h after gavage for CFU quantification. At the conclusion of the experiment (192 h), mice were sacrificed and cecal contents were collected for CFU quantification. Fecal and cecal contents were plated at 37 °C overnight or at room temperature (RT) for 48 h. Samples were also incubated in LB broth at RT for 24 h and plated at RT for 48 h for CFU quantification. b Log 10 CFUs/mg feces (left) or cecal contents (right) of 2-input CRISPRks cells from mice receiving a 1:1 ratio mixture of 2-input CRISPRks and the no gRNA control strain. CFUs were quantified at 37 °C (top) or RT (bottom). Points are the average of two technical replicates. Lines and error bars are the average and standard error from 6 mice across two cages, respectively. Statistical comparisons were made between control water and aTc treatment (top) or control strain and 2-input CRISPRks (bottom). c Percent of 2-input CRISPRks cells that survived aTc treatment or control water with non-functional kill switches when gavaged alone (from Supplementary Fig. 7b , top) or in a 1:1 ratio mixture of 2-input CRISPRks and the no gRNA control strain (from Fig. 5b, top). To assay for kill switch functionality, exponential phase cells were induced with 0 and 500 ng/mL aTc for 3 h and the absorbance at 600 nm quantified. Induced to uninduced absorbance ratios within three standard deviations of the no gRNA control strain were deemed non-functional. 24 colonies (12 from each cage) at each timepoint were tested for functionality through an in vitro aTc-response assay. d Sequencing results from 24 2-input CRISPRks survivors from Fig. 5b, top with non-functional kill switches. The fraction mutated is the fraction of total sequenced cassettes that contained a mutation. Mutations in the P tlpA promoter are counted as mutations in both the tetR and tlpA expression cassettes. e RT growth assays for control and 2-input CRISPRks cells obtained from fecal or cecal samples at different timepoints in Fig. 5b. f CFUs of control and 2-input CRISPRks cells from Fig. 5e following 24 h of RT growth. Points are the average of two technical replicates. Values and error bars are the average and standard error from 6 mice across two cages, respectively.
Statistical comparisons were performed using two-tailed mixed model ANOVA ( c ) or two-tailed mixed model ANOVA with Sidak's multiple comparisons ( b and f ) (* P < 0.05; ** P < 0.01; *** P < 0.001, **** P < 0.0001). See also Supplementary Fig. 7 . Source data with p-values are provided as a Source Data file. Full size image When singly-gavaged, aTc induction of the 2-input CRISPRks Δrpdu strain within mice resulted in a significant (3-log) reduction in fecal titers. However, this response was again transient over the first 48 h of treatment, with the kill switch population rebounding to baseline levels by 72 h and remaining high in stool on day 7 and in the ceca on day 8 (Supplementary Fig. 7b , c , top). Temperature induction of the 2-input CRISPRks Δrpdu strain in stool collected from these mice greatly improved killing efficacy; we consistently observed 1- to 2-log CFU/mg EcN in stool from mice that were not induced with aTc, 5-log lower than control strain titers, indicating that this reduction was driven by temperature induction alone (Supplementary Fig. 7b , bottom). When combined with aTc induction in mice, temperature induction (i.e., post-excretion) further reduced fecal titers such that we were unable to detect EcN in stool from these mice at the 24- and 48-h timepoints (Supplementary Fig. 7b , bottom). However, the kill switch population was not completely eradicated, as we were able to detect it in stool on day 7 and in the ceca on day 8 (Supplementary Fig. 7b , c , bottom). Interestingly, we also observed greater variability in stool titers of the control strain in mice treated with aTc compared to control water, including one mouse in which we were unable to detect the control strain at multiple timepoints (Supplementary Fig. 7a ). This high variability may be due to aTc induction of the Cas9 complex (which the control strain carries despite lacking gRNAs), leading to decreased in vivo fitness, potentially coupled with a rise of control strain subpopulations with inactivating mutations (e.g., mutations in the P tet promoter driving Cas9 expression). Co-gavage of the 2-input CRISPRks Δrpdu strain with the control strain, providing intra-niche competition, mitigated the bloom of an escape mutant population during aTc treatment (Fig. 5b , top). This was evidenced by significantly lower ( p = 0.0006; mixed model ANOVA) stool and cecal titers of the kill switch strain in aTc-treated mice compared to control strain titers in the same mice, or to either strain in mice treated with control water. Correspondingly, fewer kill switch isolates recovered from co-gavaged mice exhibited loss of function in follow-up in vitro kill switch assays compared to isolates from singly-gavaged mice (29 ± 6% compared to 92 ± 12% on day 3, and 67 ± 24% compared to 100% by day 7) (Fig. 5c ). As in the single-gavage arm (Supplementary Fig. 7 ), in samples from co-gavaged mice, temperature induction alone of the 2-input kill switch resulted in high killing efficacy (1-log CFU/mg feces [gray circles] compared to 4-log CFU/mg for the control strain in the same mice [white triangles]), but the kill switch population increased in titer by day 7 (Fig. 5b , bottom). When we sequenced non-functional isolates, we identified mutations throughout the cas9 , gRNA, tetR , and tlpA expression cassettes, suggesting the presence of diverse inactivation mechanisms and indicating that the stability of the kill switch cannot be easily improved through additional functional redundancies (Fig. 5d ).
Again, no mutations were identified in the infA cassette or the gRNA target sites. Strikingly, a combination approach, including provision of intra-niche competition by a competing strain, within-gut chemical kill switch induction, and temperature kill switch induction outside the gut, was effective for eradicating the kill switch population; with these conditions, we were unable to detect the 2-input CRISPRks strain in stool beginning at the 24-hour timepoint and through the end of the experiment, nor in the ceca on day 8 (Fig. 5b , bottom [red circles]). In addition to plating directly from stool and ceca to quantify EcN, we used fecal and cecal samples to inoculate room temperature kinetic growth assays in rich media as a more sensitive measure of temperature-induced biocontainment, where any EcN population that may have been below the limit of detection of the direct plating assays has the opportunity to amplify (Fig. 5 e, f , Supplementary Fig. 7d , e ). In line with the direct plating results, although on days 7 and 8 (cecal timepoint) we observed some growth of the 2-input CRISPRks Δrpdu strain in cultures inoculated from mice not treated with aTc (gray lines), we observed no growth over 24 hours of the kill switch strain in cultures inoculated from mice that had been treated with aTc (red lines, Fig. 5e ); with the exception of an average of 10 1.4 kill switch colonies observed at the 72-hour timepoint, plating from the kinetic growth assays at their termination to quantify CFUs confirmed this result (Fig. 5f ). In contrast, in growth assays inoculated from mice treated with aTc in the single-gavage arms, we observed no significant difference in titers of 2-input kill switch strains and control strains (Supplementary Fig. 7d, e ). These data indicate that intra-niche strain competition is required to mitigate aTc-mediated selection for kill switch mutants that escape both chemical and temperature induction modalities. It is important that biocontainment systems minimally impact the protein production capability and therapeutic potential of the engineered microbe. To determine how the kill switches designed here impact protein expression in EcN, we quantified the expression of a constitutively expressed GFP reporter using both a genome-integrated and a plasmid-based cassette. Both the CRISPRks Δrpdu strain and the 2-input CRISPRks Δrpdu strain achieved GFP expression levels equivalent to wild-type EcN and the no gRNA control strain (Supplementary Fig. 8 ). This trend remained true for both genome-integrated and plasmid-based expression methods. As such, the aTc-responsive CRISPRks Δrpdu kill switch and the aTc- and temperature-responsive 2-input CRISPRks Δrpdu kill switch can be effectively applied towards the biocontainment of engineered therapeutic and diagnostic microbes. Discussion Robust control of the viability of engineered probiotics is essential for host safety and environmental protection. The aTc-only and 2-input CRISPRks strains developed here allow the growth of EcN to be tightly controlled during in vivo applications. To develop the two CRISPRks strains, we explored a variety of methods for optimizing genetic stability, including incorporating multiple functionally redundant Cas9 and gRNA expression cassettes (Figs. 1 and 2 ), eliminating reliance on antibiotics for kill switch maintenance (Fig. 2 ), knocking out key SOS response genes involved in DNA mutagenesis (Fig. 3 ), providing intra-niche competition (Figs. 3 and 5 ), and combining two layered methods of viability control (Figs. 4 and 5 ). In this work, we addressed two routes of DNA mutagenesis that contributed to kill switch instability (Fig. 6a ). First, we mitigated stochastic inactivation of kill switch constructs resulting from the natural and slow accumulation of errors that occurs during DNA replication 49 , 50 . Introduction of functionally redundant Cas9 and gRNA expression cassettes reduced the kill switch inactivation rate due to these processes, enabling kill switch stability for at least 28 days of growth. However, we continued to detect inactivated CRISPRks variants after about 14 days of growth (Supplementary Fig. 4d – f ). The rate of evolution can potentially be further reduced by optimizing the metabolic burden required to maintain the circuit 51 , improving the accuracy of DNA replication 52 , and knocking out transposable elements 53 , 54 . The second route of kill switch inactivation involved SOS-mediated DNA mutagenesis in response to the DSBs caused by Cas9 (Fig. 6a ). Escape mutants of our CRISPRks strain arose via this process only when tested in vivo or under in vivo-like conditions where resources and growth are limited 55 . In these conditions, limited induction of the kill switch may lead to incomplete cleavage of the multi-copy replicating chromosome. Low levels of DSBs per cell would allow daughter cells to survive with intact copies of the chromosome, but with an activated SOS response and an elevated rate of DNA mutagenesis. Expression of a genome-integrated GFP reporter is significantly weaker in minimal medium compared to rich medium, suggesting lower Cas9 expression levels in nutrient-poor conditions (Supplementary Fig. 8 ). In addition, even single double-strand breaks have been shown to strongly induce the SOS response 41 . We successfully reduced SOS-mediated inactivation through the Δrpdu knockouts, allowing for complete killing of the CRISPRks strain in growth-limited conditions. Knocking out alternative recombinases with known low levels of activity may further improve the stability of the 2-input CRISPRks in vivo 56 . In addition, our current gRNAs target the chromosome near the origin of replication, where growth-dependent copy numbers would be highest (Fig. 1b ). Utilizing gRNAs that instead target the region where growth-dependent chromosomal copy numbers are lower may decrease the number of daughter cells that escape kill switch induction with intact genomes and reduce the potential for SOS-mediated kill switch inactivation. Fig. 6: Summary of sequencing results from non-functional kill switches and potential mechanisms of circuit inactivation. a Potential mechanisms of kill switch inactivation and cell survival. DSB, double-strand break. b Classifications of the observed mutations in P tet - cas9 , P tet -gRNA, tetR , and tlpA expression cassettes. Source data are provided as a Source Data file. c Sequence of the most commonly observed inactivation mutation in the utilized P tet promoters. Full size image Mutations in the P tet - cas9 and P tet -gRNA cassettes each contributed to ~45% of the total observed mutations, highlighting the importance of functionally redundant expression cassettes (Fig. 6b ). Importantly, the P tet promoters within these cassettes were the primary source of instability. The P tet promoter used in these kill switches contains two identical 19 bp tet operator sites.
Over 50% of the total observed mutations were 25 bp deletions including one of these operators and most of the -35 site (Fig. 6c ). These deletions can occur through RecA-dependent homologous recombination, or RecA-independent rearrangement of tandem repeat sequences by replication slippage, sister-chromosome exchange-associated slippage, and single-strand annealing 50 . Replacing the P tet promoters with engineered variants that have only one tet operator site or operator sites with lower sequence identity, or using alternative chemical-inducible promoters lacking internal homology, may further improve the stability of the kill switches. Biocontainment of genetically engineered organisms must not neglect biocontainment of the corresponding recombinant nucleic acids, due to the possibility of horizontal gene transfer (e.g., antibiotic resistance genes, especially as mediated by plasmids) 57 , 58 . Our kill switch design mitigates this possibility in that the only plasmid-borne elements are the Tet repressor, which does not alone confer antibiotic resistance, and the gRNAs, which carry inherent specificity to the EcN genome and whose transcription should have no cleaving effect in the absence of the Cas9 cassette. As such, the incorporation of the infA plasmid-maintenance system served both to improve stability of the kill switch and to reduce the risk of antibiotic resistance gene dissemination. We included plasmid-borne spectinomycin and chloramphenicol resistance genes to selectively distinguish our control and CRISPRks strains in co-culture and co-gavage experiments; clinical iterations of the kill switch strains for practical applications would omit these selective markers. Inspired by the competitive inter-strain exclusion properties observed in gut pathogens 59 and commensals alike 31 , 60 , we hypothesized that genetic approaches for mitigating kill switch escape could be complemented at the population level by external competitive pressure. With the goal of eliminating a specific subpopulation of engineered microbes, and not a probiotic species itself, from the gut, we provided both CRISPRks EcN and control EcN in our animal models, and demonstrated that kill switch induction in synergy with competitive exclusion enabled virtually complete eradication of the kill switch population. Co- or pre-administration of a wild-type probiotic strain in clinical applications of engineered probiotics could help prime the gut for induced elimination of the engineered probiotic. However, ways to ensure the stability of both populations prior to kill switch induction must be further investigated. The performance of the kill switches could be further improved by the addition of an orthogonal kill switch mechanism, an approach that has recently been demonstrated to have a multiplicative effect on efficacy 18 . In the absence of aTc exposure in the gut (i.e., induction only with temperature upon excretion), there was an increasing incidence over time of biocontainment failure for the 2-input CRISPRks (Fig. 5b , bottom), suggesting that the strain requires both induction modalities to perform robustly. Indeed, the 2-input CRISPRks also did not achieve full efficacy in vitro when induced with temperature alone (Fig. 4b ), in contrast to the single-input kill switch when induced with aTc in vitro (Fig. 2c ), indicating that P tlpA -driven expression of tetR may be leaky even at low temperatures or that the TetR degradation rate is not sufficiently high to robustly induce kill switch activation.
Because the temperature- and aTc-driven induction feed into the same kill switch mechanism, the combination of both inputs does not result in a synergistic increase in efficacy. Similarly, induction with both temperature and aTc in the context of intra-niche competition led to no detectable growth after direct plating of feces (approximately 2% of the entire fecal sample at the 1X dilution), nor after inoculating 24 h growth assays at room temperature to allow any putative escape mutants to amplify; the exception was at the 72 h timepoint, where after 24 h of growth in rich media, we observed a mean of 583 CFU/mL of the 2-input CRISPRks (Fig. 5f ). With a room temperature doubling time of 115 min estimated from control strain titers from the same mice at the same timepoint, this implies a baseline CRISPRks titer of 0.1 CFU/mL, or approximately 100 CFU in the entire fecal sample given our inoculation scheme. This was not captured when plating directly from feces (Fig. 5b , bottom), highlighting the importance of survival assay sensitivity in the development of kill switches. Escape events such as this could also be mitigated by addition of an orthogonal kill switch circuit, in addition to further optimization of the tlpA and tetR cassettes. In summary, we developed CRISPR-based kill switches in EcN to create a safe probiotic chassis for future biomedical technologies. These kill switches allow EcN to proliferate under normal gut conditions and initiate cell death in response to oral consumption of an inducer and excretion from the body. We have demonstrated a kill switch approach to on-demand selective removal of engineered microbes from the gut. We also explored diverse methods for improving the stability of the kill switches and minimized key mechanisms of kill switch inactivation. The engineered kill switches apply genetic parts (CRISPR/Cas9 and TetR/Ptet) that have been shown to be functional in diverse microbes 32 , 33 , 34 . As such, similar kill switches can be engineered for a larger panel of probiotic microbes. Furthermore, the temperature sensing module can be replaced with sensors for alternative environmental conditions 46 and chemicals 61 , 62 , 63 to create 2-input kill switches for novel applications. These microbial biocontainment tools will facilitate the creation of living therapeutics or other microbes for environmental applications that are more robust, predictable, and controllable, which is critical for both regulatory approval and public acceptance of genetically modified organisms. Methods Experimental model and subject details All in vitro experiments were performed in compliance with the policies of the Washington University in Saint Louis Institutional Biological & Chemical (IBC) Safety Committee. All plasmids were assembled in and purified from E. coli DH10B. Purified plasmids were subsequently transferred to and tested in wild-type or engineered E. coli Nissle 1917 variants lacking the two native plasmids, pMUT1 and pMUT2. All mouse experiments were approved by the Washington University in Saint Louis School of Medicine Institutional Animal Care and Use Committee (Protocol number: 21-0160), and performed in AAALAC-accredited facilities in accordance with the National Institutes of Health guide for the care and use of laboratory animals. All mouse experiments were performed in female 8-week-old C57BL/6 mice (Jackson Labs C57BL/6 J, RRID:IMSR_JAX:000664).
Mice were housed in a specific pathogen free barrier facility maintained by WUSM DCM at 30-70% humidity and 68–79 °F under a 12:12 hour light:dark cycle. Mice were provided feed (Purina Conventional Mouse Diet (JL Rat/Mouse 6 F Auto) #5K67) and water ad libitum . Mice were co-housed with up to 5 mice per cage, and at least 2 cages per experimental arm to account for cage effects. Oral gavage of mice was performed using 18ga x 38 mm plastic feeding tubes (FTP-18-38, Instech). To ablate the native microbiome prior to EcN colonization, each mouse was administered 20 mg streptomycin sulfate salt (S6501, Sigma-Aldrich) in 100 μL H 2 O via oral gavage 44 . Streptomycin treatment of mice, often used in Salmonella enterica colonization models 44 , has also been used extensively in colonization models for commensal and pathogenic strains of Escherichia coli 13 , 31 , 64 , 65 , 66 , 67 , 68 . 24 or 72 hours after streptomycin administration, mice were orally gavaged with 10 8 CFU EcN in 100 μL phosphate buffered saline (PBS). To test alternative microbiome ablation strategies, mice were instead orally gavaged with 6 mg carbenicillin disodium salt (C1389, Sigma-Aldrich) in 100 μL H 2 O 48 hours prior to EcN administration 45 , or left untreated prior to EcN administration. 24 hours after EcN administration, mice were switched from standard drinking water to aTc treatment water (10 5 ng/mL aTc + 5% sucrose, filter sterilized) or control water (5% sucrose, filter sterilized), provided ad libitum . Treatment water and control water were prepared fresh and replaced daily for the duration of each experiment. Fecal samples were collected at the indicated timepoints. At the end of each experiment, mice were sacrificed through carbon dioxide asphyxiation. Method details Plasmids, strains, and growth conditions All plasmids were designed using SnapGene and assembled in E. coli DH10B using the Gibson Assembly (100 mM Tris-HCl, 10 mM MgCl 2 , 0.2 mM dNTPs, 10 mM DTT, 5% PEG-8000, 1 mM NAD + , 4 U/μL Taq DNA ligase, 4 U/mL T5 exonuclease, 25 U/mL Phusion DNA polymerase) or Golden Gate Assembly (1X T4 ligase buffer, 1X Cutsmart buffer, 40 U/μL T4 ligase, 1 U/μL SapI, 1 U/μL DpnI) methods. Wild-type or engineered EcN variants were then transformed with the purified and sequence-verified plasmids for kill switch testing. Plasmid DNA was isolated using the PureLink Quick Plasmid Miniprep Kit (K210011, Invitrogen), and polymerase chain reaction (PCR) products were extracted from electrophoresis gels using the Zymoclean Gel DNA Recovery Kit (D4008, ZYMO research). Enzymes were purchased from New England Biolabs (Ipswich, MA, USA). To construct engineered EcN variants, we utilized lambda red-mediated CRISPR-Cas9 recombineering as previously described 69 . To create knockouts of specific genes (Supplementary Table 2 ), gRNAs for the pgRNA plasmid were designed using the gRNA designer from Atum (atum.bio) to target the gene of interest. 60 bp ssDNA oligos were designed with 30 bp arms homologous to the lagging strand of DNA synthesis flanking the region to be knocked out. For insertions (Supplementary Table 3 ), the pgRNA plasmid was similarly constructed. The dsDNA insert was obtained by constructing a plasmid with the DNA to be inserted flanked by 500 bp arms homologous to the insertion region. The full product (both arms and insert DNA) were PCR amplified and purified by gel extraction. 
EcN harboring pMP11 was then transformed with 100 ng of the respective pgRNA plasmid and either 1 μM of the ssDNA oligo or 100 ng of the dsDNA insert to initiate recombination. All sequencing (Supplementary Table 4 ) was performed by Genewiz (South Plainfield, NJ, USA). Primers were purchased from Integrated DNA Technologies (Coralville, IA, USA). All plasmids and parts constructed and used in this work are summarized in Supplementary Tables 5 , 6 , respectively. Unless otherwise specified, LB medium was used for culturing. For Figs. 2 f, 3a , Supplementary Fig. 4a , c , M9 minimal medium supplemented with 1 mM MgSO 4 , 100 μM CaCl 2 , and 0.4% w/v glucose was used. 0.2% w/v casamino acids was also added to the M9 minimal medium as specified. Medium was supplemented with the following concentrations of antibiotics as necessary: 100 μg/mL ampicillin, 34 μg/mL chloramphenicol, 20 μg/mL kanamycin, 10 μg/mL gentamycin, and 100 μg/mL spectinomycin (Gold Biotechnology, Olivette, MO, USA). Unless otherwise stated, cultures were incubated at 37 °C with 250 rpm shaking. Standard kill switch assays EcN was transformed with kill switch plasmids by electroporation and plated on LB agar with the relevant antibiotics. Single colonies were then transferred to 1 mL of LB in 14 mL round bottom tubes (14-959-11B, Fisher Scientific) and grown in a shaking incubator at 37 °C and 250 rpm for ~2 h until exponential phase (OD600 of 0.25-0.50) was reached. To test for aTc-inducible cell death, these cultures were diluted to an OD600 of 0.01 in 1 mL fresh LB medium with the specified concentrations of aTc 28 . At the indicated timepoints, samples were removed from the cultures for viable colony forming unit (CFU) quantification. CFUs were determined by plating 10 μL of serially diluted cultures onto LB agar with the relevant antibiotics unless otherwise specified and incubating at 37 °C overnight. For cultures where no colonies were obtained in the undiluted sample, 100 μL was also plated to ensure a more accurate quantification. The fraction of viable cells was calculated as the ratio of CFUs obtained from the induced culture to the number of CFUs obtained from the uninduced culture. To test for temperature-inducible cell death, the exponential phase cultures were serially diluted and plated onto two LB agar plates. One plate was incubated at 37 °C overnight while the other plate was incubated at 30 °C for two weeks. For temperature-sensitive assays, all liquid medium and LB agar plates were preheated to 37 °C for at least two hours prior to the addition of cells. Long-term stability kill switch assay A long-term stability assay was used for Figs. 2 e, 4d , Supplementary Figs. 3e , 6e . On day 1, single colonies were transferred to 1 mL LB without antibiotics, incubated until an OD600 of 0.25–0.50 was reached, and diluted to an OD600 of 0.01 for the standard aTc kill switch assay or directly plated onto LB agar for the standard temperature kill switch assay as before. The original undiluted cultures were then returned to the shaking incubator and cultured for an additional ~22 h (24 h of incubation total). After the 24 h incubation, each culture was diluted 250X into fresh LB without antibiotics and incubated for an additional 24 h. These daily dilutions were repeated for a total of 28 days. Every 3–4 days, the culture was used for the kill switch assays as described above. On assay days, culture samples were stored at −80 °C in 15% (v/v) glycerol. 
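As an aside for readers unfamiliar with plate-count arithmetic, the CFU and fraction viable calculations in the standard kill switch assay above reduce to a few lines of Python. The colony counts below are hypothetical, and only the 10 μL plating volume comes from the protocol.

```python
def cfu_per_ml(colonies: int, dilution_factor: float,
               plated_volume_ml: float = 0.01) -> float:
    """Back-calculate a culture's viable titer from one countable dilution spot."""
    return colonies / plated_volume_ml * dilution_factor

def fraction_viable(cfu_induced: float, cfu_uninduced: float) -> float:
    """Fraction viable as defined above: induced CFUs divided by uninduced CFUs."""
    return cfu_induced / cfu_uninduced

# Hypothetical counts: 42 colonies at the 10^5-fold dilution (uninduced culture)
# and 13 colonies from the undiluted induced culture
uninduced = cfu_per_ml(42, 1e5)  # 4.2e8 CFU/mL
induced = cfu_per_ml(13, 1e0)    # 1.3e3 CFU/mL
print(f"fraction viable = {fraction_viable(induced, uninduced):.1e}")  # ~3.1e-6
```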
Temperature-sensing transfer curve kill switch assay For temperature-sensitive transfer curves, single colonies were transferred to 1 mL of LB in 14 mL round bottom tubes and grown in a shaking incubator at 37 °C and 250 rpm for ~2 h until exponential phase (OD600 of 0.25–0.50) was reached. These cultures were then diluted to an OD600 of 0.01 in 50 μL of fresh LB medium in PCR tubes and incubated at the specified temperatures in a thermocycler for 5 h. CFUs were determined by plating 10 μL of serially diluted cultures onto LB agar and incubating at 37 °C overnight. In vitro condition-poor competition kill switch assay EcN strains with and without the Δrpdu knockouts were transformed with the no gRNA control and the aTc-inducible CRISPRks. Single colonies for each of the four strains were grown overnight in 5 mL LB at 37 °C and 250 rpm. The following day, each culture was centrifuged at 3,000 g, the LB supernatant was removed, and the cell pellet was resuspended in 5 mL M9 + 0.4% glucose. For CRISPRks-only cultures, 1.5 mL of the respective CRISPRks strain was diluted into two 15 mL conical tubes at a final volume of 15 mL M9 + 0.4% glucose. For competitive mixture cultures, 0.75 mL of each CRISPRks strain was also diluted at a 1:1 ratio with the respective no gRNA control strain (1.5 mL total culture) into two additional 15 mL conical tubes. One of the two conical tubes in each pair was induced with 500 ng/mL aTc. Each tube was capped to stop oxygen transfer and incubated at 37 °C without shaking. Every 24 h for 72 h, the cultures were removed from the incubator, the CFUs were quantified, and the cells were diluted 10X into a new tube with fresh M9 + 0.4% glucose and 500 ng/mL aTc. Assays of intestinal samples for quantification of viable EcN Fecal samples (and cecal samples post-sacrifice) were collected in pre-weighed sterile 2 mL microtubes, and weighed again to accurately measure sample mass. Samples were homogenized in 500 μL PBS on a benchtop vortexer (2000 rpm for 3 min), serially diluted in PBS, and plated on LB agar supplemented with 100 μg/mL spectinomycin dihydrochloride pentahydrate (S4014, Sigma-Aldrich) to select for spectinomycin-resistant kill switch strains, and 34 μg/mL chloramphenicol (AC227920250, Acros Organics) to select for chloramphenicol-resistant control strains. All samples were plated on both chloramphenicol- and spectinomycin-supplemented LB agar plates to confirm no cross-contamination of strains between experimental arms. Dilutions of fecal or cecal homogenates plated for all experiments spanned 10 0 to 10 −6 (i.e., undiluted homogenate was plated for each sample). CFU/mg sample values were calculated from enumerated colonies. Two technical replicates per intestinal sample were processed. For experiments with the single-input CRISPRks (aTc-inducible only) strains, samples were weighed, resuspended, serially diluted, and plated at room temperature (22 °C), and plates were incubated overnight at 37 °C prior to colony enumeration. For experiments with the 2-input CRISPRks (aTc- and temperature-inducible) strains, fecal samples were immediately placed in a pre-warmed heatblock (37 °C) inside an insulated container. Similarly, to maintain cecal samples at 37 °C, freshly sacrificed mice were kept in a 37 °C incubator until dissection and collection of cecal contents. Samples were weighed, resuspended, serially diluted, and plated inside a 37 °C warm room, using pre-warmed PBS and LB agar plates.
After plating, LB plates continued to be incubated at 37 °C overnight prior to colony enumeration. To assay temperature-dependent induction of the kill switch, samples were then moved to room temperature (22 °C) and again plated on spectinomycin- or chloramphenicol-supplemented LB agar plates. Plates were incubated at 22 °C for 48 h prior to colony enumeration. To assay biocontainment efficacy of the 2-input CRISPRks strains via room temperature kinetic growth assays, 10 μL of the 100X dilution of each sample was inoculated into 190 μL LB broth supplemented with 100 μg/mL spectinomycin or 34 μg/mL chloramphenicol in a 96-well plate, and read in a plate reader (Biotek Powerwave HT) at room temperature (22 °C) for 24 h (kinetic read, 5 s shake followed by absorbance reading at 600 nm [Abs600] at 20 min intervals). Additionally, at 3 h and 24 h, kinetic growth assay cultures were sampled, serially diluted, and plated on LB agar supplemented with 100 μg/mL spectinomycin or 34 μg/mL chloramphenicol, and incubated overnight at 37 °C prior to colony enumeration. Kill switch functionality assay To assess whether colonies that survived kill switch assays maintained functional kill switches, three no gRNA control colonies and the specified number of surviving kill switch colonies were transferred to 600 μL LB in 96-deep well plates (E951032808, Fisher Scientific). The plates were then incubated at 37 °C and 250 rpm for 3 h, and the cultures were diluted 20X into 600 μL of fresh LB with and without 500 ng/mL aTc. After 3 h of induction, the Abs600 was measured for each culture, and the ratio of uninduced Abs600 to induced Abs600 was determined. Colonies with non-functional kill switches were defined as having Abs600 ratios within three standard deviations of the Abs600 ratio of the no gRNA control. Transforming gRNA plasmids into Ptet- cas9 integration strains A temperature-curable TetR expression plasmid was constructed with a constitutive TetR expression cassette, a temperature-sensitive oriR101 origin, and a kanamycin resistance gene (pAGR377). EcN strains with Ptet- cas9 genomic integrations were transformed with this plasmid, plated on LB agar with kanamycin, and incubated overnight at 30 °C. Cells carrying pAGR377 were then made electrocompetent and transformed with the gRNA kill switch plasmids. Transformed cells were recovered in 600 μL SOC at 30 °C for 1 h. The cells were then plated onto LB agar with spectinomycin to select for the kill switch plasmids and incubated overnight at 42 °C to cure pAGR377. Successful curing of pAGR377 was confirmed by streaking colonies on agar plates with spectinomycin only and with both spectinomycin and kanamycin. Generating antibiotic-independent kill switches To remove the antibiotic dependence of the kill switches, the infA essential gene was knocked out of the genome and constitutively expressed on the plasmid of interest 39 , 40 . To generate an infA knockout EcN strain, the pMP11 CRISPR plasmid was first modified to constitutively express InfA (pAGR309). The recombination protocol described above was then used to perform and confirm the knockout. Next, the TetR expression plasmid was modified to also constitutively express InfA (pAGR384). The infA knockout strain containing the modified pMP11 plasmid was then transformed with pAGR384 and plated on LB agar with kanamycin. The transformants were incubated overnight at 37 °C to allow for moderate curing of the ampicillin-resistant modified pMP11 plasmid.
Colonies from the plate were streaked onto LB agar plates with and without ampicillin to identify colonies with successful curing of the modified pMP11 plasmid. gRNA kill switch plasmids were constructed containing another constitutive InfA expression cassette. Cells containing only pAGR384 were transformed with the InfA-expressing gRNA kill switch plasmids and incubated overnight at 42 °C to cure pAGR384. Successful curing of pAGR384 was confirmed by streaking colonies on agar plates with spectinomycin only and with both spectinomycin and kanamycin. Growth and fluorescence measurements Population absorbances at 600 nm (Abs600) were measured in 96-well black assay microplates (07-000-088, Fisher Scientific) using a Tecan microplate reader (Infinite M200 Pro) and Tecan i-Control, and converted to optical density OD600 when necessary. To measure GFP fluorescence, culture samples were transferred to 200 μL filtered 0.9% (w/v) saline supplemented with 2 mg/mL kanamycin in 96-well clear round bottom assay microplates (353910, Corning) for measurements. Flow cytometry analysis was carried out using a Millipore Guava EasyCyte High Throughput Flow Cytometer and Guavasoft 2.7 software with a 488 nm excitation laser and a 512/18 nm emission filter. 10,000 events for each sample, gated by forward (minimum/maximum of 20/600) and side scatter (minimum/maximum of 30/2000), were measured at a flow rate of 0.59 μL/s. FlowJo (TreeStar Inc.) was used to obtain the average fluorescence of the population. The fluorescence (au) of each sample was calculated using the following formula: F s = F experiment – F EcN , where F s , F experiment , and F EcN respectively represent the reported sample fluorescence, measured sample fluorescence, and autofluorescence (background fluorescence of EcN lacking GFP). Characterization of TlpA temperature sensors in EcN The P tlpA promoter and tlpA gene were PCR amplified from Salmonella typhimurium SL1344 genomic DNA and inserted upstream of gfpmut3 to form a negative feedback operon. To convert tlpA to the modified tlpA (translated to the protein TlpA*) developed by Pirner et al., the following amino acid substitutions were introduced into the tlpA gene: P60L, G135V, K187R, K202I, and L208Q 30 . After transforming EcN with each plasmid, single colonies were transferred to 5 mL of LB and incubated overnight at 37 °C and 250 rpm. The following day, cultures were diluted 1000X into 50 μL of LB in PCR tubes 30 (T3202N, Fisher Scientific). The PCR tubes were incubated in a thermocycler running temperature gradients between 31.5–45.5 °C (TlpA) or 31–37.5 °C (TlpA*) for 24 h. The GFP fluorescence of the cultures was then quantified by flow cytometry as described above. Incorporating and optimizing temperature sensing in the kill switch The P tlpA - tlpA* negative feedback cassette was inserted in place of the constitutive promoter controlling TetR expression on the gRNA kill switch plasmid. To optimize the expression and stability of TetR in the kill switch, a tetR RBS library and a C-terminal SsrA degradation tag library were simultaneously inserted into the plasmid (Supplementary Table 6 ). The RBS library was designed using the De Novo RBS Library Calculator v2.1 to have 128 different variants with translation initiation rates spanning 20−100,000 au 70 . The initial RBS had a translation initiation rate of 1,500 au. The SsrA degradation tag library was constructed by randomly mutagenizing the three nucleotides in the third-to-last codon of the SsrA degradation tag 71 .
The total number of potential unique construct variants was 3,328. The library was generated in E. coli DH10B and tested in EcN. Hill equation fitting The Hill equation (Eq. 1 ) was used to fit curves to the fluorescence, CFU, and fraction viable data. The model was fit to the experimentally collected data by minimizing the root mean square error (RMSE; Eq. 2 ). Fitted values are listed in Supplementary Table 1 . $$F = F_{\min} + \frac{F_{\max} - F_{\min}}{1 + \left(\frac{K_{A}}{[L]}\right)^{n}}$$ (1) where F = calculated fluorescence, CFUs/mL, or fraction viable; F max = maximum fluorescence, CFUs/mL, or fraction viable; F min = minimum fluorescence, CFUs/mL, or fraction viable; K A = half-maximal constant; n = Hill coefficient; and [ L ] = ligand concentration or temperature. $$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{N}\left(F_{i}-F_{\mathrm{exp},i}\right)^{2}}{N}}$$ (2) where RMSE = root mean squared error; F = calculated fluorescence, CFUs/mL, or fraction viable; F exp = actual experimental fluorescence, CFUs/mL, or fraction viable; and N = number of data points. Quantification and statistical analysis All statistical tests were performed using GraphPad Prism or Excel. All statistical details of experiments, including significance criteria, sample size, definition of center, and dispersion measures can be found in the figure legends, in the Results section, or in the Source Data file. Sample sizes for animal experiments, reporter assays, and viability assays were chosen based on our previous work 13 , 72 and the literature, and represent sample sizes routinely used for these methods. No sample size calculations were performed during the design of experiments. Samples were randomized during group assignment in all experiments. No samples were excluded from analyses. The investigators were not blinded to allocation during experiments and outcome assessment. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability All source data and plasmid maps have been deposited to Mendeley Data (DOI: 10.17632/dwhfp2ycyw.1). Any additional information is available from the Lead Contact upon request. Plasmids and strains generated in this paper are available upon request from the Lead Contact. This study did not generate additional new unique reagents. Source data are available in the Source Data file and are provided with this paper.
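To make the fitting procedure from the Methods concrete, here is a minimal Python sketch of the Hill-equation fit (Eqs. 1 and 2). The paper does not name the minimization software, so SciPy's least-squares optimizer stands in here, and the temperature-response data points and initial guesses are hypothetical placeholders rather than values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(L, F_min, F_max, K_A, n):
    """Hill equation (Eq. 1): F = F_min + (F_max - F_min) / (1 + (K_A/L)^n)."""
    return F_min + (F_max - F_min) / (1.0 + (K_A / L) ** n)

# Hypothetical transfer-curve data: log10 fraction viable versus temperature (C)
temps = np.array([30.0, 30.6, 31.6, 32.8, 34.4, 35.6, 36.4, 37.0])
log_fv = np.array([-5.8, -5.5, -4.9, -1.2, -0.1, 0.0, 0.0, 0.0])

# Least-squares fitting minimizes the same residuals that enter the RMSE (Eq. 2)
popt, _ = curve_fit(hill, temps, log_fv, p0=[-6.0, 0.0, 33.0, 40.0], maxfev=10000)
rmse = np.sqrt(np.mean((hill(temps, *popt) - log_fv) ** 2))
print(f"K_A = {popt[2]:.1f} C, Hill n = {popt[3]:.1f}, RMSE = {rmse:.2f}")
```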
Tae Seok Moon, associate professor of energy, environmental and chemical engineering at the McKelvey School of Engineering at Washington University in St. Louis, has taken a big step forward in his quest to design a modular, genetically engineered kill switch that integrates into any genetically engineered microbe, causing it to self-destruct under certain defined conditions. His research was published Feb. 3 in the journal Nature Communications. Moon's lab understands microbes in a way that only engineers would: as systems made up of sensors, circuits and actuators. These are the components that allow microbes to sense the world around them, interpret it and then act on the interpretation. In some cases, the actuator may act on the information by moving toward a certain protein or attacking a foreign invader. Moon is developing actuators that go against millions of years of evolution that have acted in favor of self-preservation, asking instead that an actuator tell a microbe to self-destruct. The kill switch actuator is an effort to quell anxiety about the potential for genetically modified microbes to make their way into the environment. So far, he has developed several: one, for instance, causes a microbe to self-destruct once the ambient environment around it reaches a certain temperature. "But the previous work had a base-level activation that was either too high or too low," Moon said. And every time he solved that problem, "the bacteria would mutate." During experiments, that meant there were too many microbes left alive after the kill switch should have turned on. Additionally, in some situations, a kill switch may not be triggered for days. This additional time means additional opportunities for the microbes to mutate, possibly affecting the switch's ability to work. For instance, Moon is interested in developing genetically engineered microbes to eat plastic as a way of disposing of harmful waste. "But we don't know how many days we need to keep these microbes stable until they finish cleaning up our environment. It might be a few days, or a few weeks," Moon said, "because we have so much waste." To overcome these roadblocks, Moon inserted multiple kill switches (up to four) into the microbial DNA. The result: during experimentation, of a billion microbes, only one or none may survive. During the experiments, researchers tested the microbes daily. The switches remained functional for 28 days. "This is the best kill switch ever developed," Moon said. These experiments were also done in mice, but looking forward, Moon would like to build kill switches for microbes that will be used in soil (perhaps to kill pathogens that are deadly to crops) or even in the human gut to cure diseases. The end game is getting microbes to do what we want and then go away, Moon said. He thinks these microbes could be used to solve a whole host of global problems. "Bacteria may seem dumb," he said, "but they can be very smart as long as we teach them well."
10.1038/s41467-022-28163-5
Medicine
Immune booster drugs meant to kill tumors found to improve Alzheimer's symptoms in mice
Kuti Baruch et al. PD-1 immune checkpoint blockade reduces pathology and improves memory in mouse models of Alzheimer's disease, Nature Medicine (2016). DOI: 10.1038/nm.4022 Journal information: Nature Medicine
http://dx.doi.org/10.1038/nm.4022
https://medicalxpress.com/news/2016-01-immune-booster-drugs-meant-tumors.html
Abstract Systemic immune suppression may curtail the ability to mount the protective, cell-mediated immune responses that are needed for brain repair. By using mouse models of Alzheimer's disease (AD), we show that immune checkpoint blockade directed against the programmed death-1 (PD-1) pathway evokes an interferon (IFN)-γ–dependent systemic immune response, which is followed by the recruitment of monocyte-derived macrophages to the brain. When induced in mice with established pathology, this immunological response leads to clearance of cerebral amyloid-β (Aβ) plaques and improved cognitive performance. Repeated treatment sessions were required to maintain a long-lasting beneficial effect on disease pathology. These findings suggest that immune checkpoints may be targeted therapeutically in AD. Main Chronic neuroinflammation is common to nearly all neurodegenerative diseases, and it contributes to their pathophysiology 1 . Nevertheless, although anti-inflammatory and immunosuppressive therapies have demonstrated some efficacy in neurodegenerative disease models, these treatments have largely failed in the clinic 2 , 3 . In mouse models of AD, the trafficking of blood-borne myeloid cells (monocyte-derived macrophages) to the central nervous system (CNS) was shown to be neuroprotective. Yet, spontaneous recruitment of these cells seems to be insufficient 4 . By using the five familial AD mutations (5XFAD) mouse model of AD 5 , we recently showed that transient depletion of forkhead box P3 (FOXP3) + regulatory T (T reg ) cells induces an IFN-γ–associated systemic immune response and the activation of the brain's choroid plexus 6 , which is a selective gateway for leukocyte trafficking to the CNS 7 , 8 . This response was followed by the accumulation of monocyte-derived macrophages and T reg cells at sites of CNS pathology and by Aβ plaque clearance and a reversal of cognitive decline 6 . We therefore suggested that in chronic neurodegenerative conditions, systemic immunity should be boosted, rather than suppressed, to drive an immune-dependent cascade needed for brain repair 4 . Immune checkpoints are regulatory pathways for maintaining systemic immune homeostasis and tolerance 9 . Selective blockade of immune checkpoints, such as the PD-1 pathway, enhances anti-tumor immunity by mobilizing the immune system 10 . The IFN-γ–dependent activity induced by PD-1 blockade in cancer immunotherapy 11 , in addition to our observations that leukocyte trafficking to the CNS for repair involves an IFN-γ–dependent response 7 , 12 , prompted us to explore the therapeutic potential of PD-1 immune checkpoint blockade in AD. 5XFAD mice aged 10 months—an age of advanced cerebral pathology—received two intraperitoneal (i.p.) injections (at 3-d intervals) of either a blocking antibody directed at PD-1 (anti–PD-1) or an IgG control, and were examined 7 d after the first injection. PD-1 blockade increased splenocyte frequencies of IFN-γ–producing CD4 + T cells ( Supplementary Fig. 1a,b ), and genome-wide RNA-sequencing of the choroid plexus ( Supplementary Table 1 ) revealed an expression profile associated with an IFN-γ–response ( Fig. 1a and Supplementary Table 2 ). Real-time quantitative PCR (RT-qPCR) showed elevated IFN-γ ( Ifng ) mRNA levels at the choroid plexus ( Fig. 1b ). These findings pointed to a systemic IFN-γ immune response in 5XFAD mice following PD-1 blockade, particularly at the choroid plexus. Figure 1: PD-1 blockade promotes myeloid cell recruitment to the CNS via IFN-γ. 
( a ) Gene Ontology (GO) annotation terms enriched in the choroid plexus of 10-month-old 5XFAD mice treated with anti–PD-1 ( n = 5) and examined on day 10 after the first injection, when compared to IgG-treated ( n = 5) and untreated ( n = 4) 5XFAD controls (based on Supplementary Table 2 ; color scale corresponds to negative log 10 of P value). ( b ) mRNA expression levels of Ifng (encoding IFN-γ) in the choroid plexus of anti–PD-1–treated ( n = 5), IgG-treated ( n = 5) and untreated ( n = 3) 5XFAD mice (one-way analysis of variance (ANOVA) and Bonferroni post-test; data are representative of three independent experiments). ( c ) 5- to 6-month-old 5XFAD mice ( n = 3 per group) were i.p. injected on days 1 and 4 with either anti–PD-1 or IgG, and examined at days 7 (d7) and 14 (d14). Flow cytometry sorting gating strategy and quantitative analysis of brain CD45 low CD11b + (indicated by blue gates and bar fills) and CD45 high CD11b + (indicated in orange) myeloid cells. Myeloid cell populations showed distinct differential expression of Ly6c. ( d ) 6-month-old 5XFAD mice were injected with IFN-γ–neutralizing antibodies 1 d before PD-1–specific antibody injections and were then examined on day 7. Flow cytometry analysis of CD45 high CD11b + cell frequencies in the brains of IgG-treated ( n = 4) and anti–PD-1–treated (with ( n = 5) or without ( n = 6) anti−IFN-γ) 5XFAD mice. ( e ) mRNA expression levels of Ccl2 and Icam1 in the choroid plexus of the same mice (one-way ANOVA and Bonferroni post-test). Error bars represent mean ± s.e.m.; * P < 0.05; ** P < 0.01; *** P < 0.001. Full size image We next examined whether the effect of PD-1 blockade on systemic immunity involves CNS recruitment of monocyte-derived macrophages. We analyzed myeloid cell populations in the brains of 5XFAD mice at 7 d and 14 d after the first injection of anti–PD-1 (two i.p. injections at 3-d intervals) by separately sorting CD45 low CD11b + microglia and CD45 high CD11b + cells, which represent mostly infiltrating myeloid cells 13 . We observed higher frequencies of CD45 high CD11b + cells in the brains of 5XFAD mice following PD-1 blockade, relative to IgG-treated 5XFAD and wild-type (WT) controls ( Fig. 1c ). Genome-wide transcriptome analysis ( Supplementary Table 3 ) of the myeloid cell populations, sorted from 5XFAD brains after PD-1 blockade, indicated that the CD45 high CD11b + cells expressed a distinct mRNA profile relative to that expressed by the CD45 low CD11b + cells. The CD45 high CD11b + expression profile included features of infiltrating myeloid cells (characterized by high expression of lymphocyte antigen 6c (Ly6C)) ( Fig. 1c ), and expression of the chemokine receptor CCR2 ( Supplementary Fig. 2a ), which is associated with myeloid cell neuroprotection in AD 14 . These myeloid cells were characterized at the mRNA ( Supplementary Fig. 2a,b ) and protein ( Supplementary Fig. 2c ) levels by the expression of scavenger receptor A (SRA1), which is an Aβ-binding scavenger receptor associated with Aβ-plaque clearance 15 . To determine whether enhanced monocyte-derived macrophage trafficking seen after PD-1 blockade was dependent on IFN-γ, we gave 5XFAD mice an IFN-γ–blocking antibody before administering PD-1 blockade. IFN-γ neutralization reduced monocyte-derived macrophages recruitment to the CNS ( Fig. 1d ) and interfered with mRNA expression of intercellular adhesion molecule 1 ( Icam1 ) and chemokine (C-C motif) ligand 2 ( Ccl2 ) by the choroid plexus, induced by PD-1 blockade ( Fig. 
1e ); these leukocyte-trafficking determinants were previously associated with myeloid cell entry into the CNS via the choroid plexus–cerebrospinal fluid pathway 6 , 7 . To examine the potential impact of PD-1 blockade on AD pathology, we first treated 10-month-old 5XFAD mice with either anti–PD-1 antibody or IgG control, and evaluated the effect of the treatment on spatial learning and memory by using the radial arm water maze (RAWM) task. 5XFAD mice that received PD-1 blockade (two i.p. injections at 3-d intervals) were analyzed 1 month later, at which point they exhibited reduced cognitive deficits relative to IgG-treated or untreated age-matched controls ( Fig. 2a ). 5XFAD mice that received two sessions of PD-1 blockade, with a 1-month interval between sessions, were tested 2 months after the first session, and they exhibited improved cognitive performance relative to IgG-treated or untreated 5XFAD control mice, reaching performance levels comparable to those of WT mice ( Fig. 2b ). Notably, when 5XFAD mice that had received a single session of PD-1 blockade were examined 2 months after the treatment, only a marginal improvement in memory was observed when compared to IgG-treated mice ( Fig. 2b ), which suggests that repeated treatment sessions are needed to maintain the beneficial effects on cognition and memory. Figure 2: PD-1 blockade reduces AD pathology and improves memory in 5XFAD and APP/PS1 mice. Male 5XFAD mice (average cohorts aged 10 months) were treated with either PD-1–specific antibody or IgG control. Experimental design is presented. Black arrows indicate time points of treatment, and illustrations indicate time points of cognitive testing or Aβ plaque–burden assessment. ( a ) RAWM performance of anti–PD-1–treated mice ( n = 9), of IgG-treated ( n = 6) 5XFAD mice and of untreated 5XFAD ( n = 9) and wild-type (WT) ( n = 9) controls (two-way repeated-measures ANOVA and Bonferroni post-test). ( b ) RAWM performance, comparing one anti–PD-1 treatment session ( n = 9) to two sessions with a 1-month interval ( n = 6), and untreated age-matched 5XFAD ( n = 7), IgG-treated ( n = 9) and WT ( n = 9) controls (combined data from separate experiments which included treated and control groups; two-way repeated-measures ANOVA and Bonferroni post-test). ( c – f ) Representative immunofluorescence images ( c ), and quantitative analysis ( d – f ) of Aβ and astrogliosis, assessed 2 months after the first treatment, in the brains of anti–PD-1–treated 5XFAD mice (after either one session ( n = 4) or two sessions ( n = 6)) and of controls (untreated ( n = 7) and IgG-treated ( n = 6) 5XFAD mice). Brains were immunostained for Aβ (in red) and GFAP (in green), with Hoechst nuclear staining. Scale bars, 50 μm. Mean plaque area and numbers were quantified (in 6-μm brain slices) in the dentate gyrus (DG) and in the cerebral cortex (layer V), and GFAP immunoreactivity was measured in the hippocampus (one-way ANOVA and Bonferroni post-test). ( g , h ) APP/PS1 mice were treated with either PD-1–specific antibody or IgG control and examined 1 month later. Brains were immunostained for Aβ (in red), with Hoechst nuclear staining. Scale bars, 250 μm. Mean Aβ plaque area and numbers were quantified in the hippocampus (HC) (in 6-μm brain slices; Student's t test). Representative immunofluorescence images and quantitative analysis of 8-month-old male mice ( n = 4 per group) ( g ) and 15-month-old female mice ( n = 4 per group) ( h ). CA1, region I of hippocampus proper.
Error bars represent mean ± s.e.m.; *, anti–PD-1–treated versus IgG-treated controls; #, anti–PD-1–treated versus untreated controls; *,# P < 0.05; **,## P < 0.01; ***,### P < 0.001. Full size image After behavioral testing, 2 months following treatment initiation, we examined the brains of 5XFAD mice that had received either one or two sessions of PD-1 blockade. Cerebral Aβ plaque load was reduced in the hippocampus (specifically, in the dentate gyrus) ( Fig. 2c,d ) and in the cerebral cortex (layer V) ( Fig. 2c,e ), which are the main anatomical regions with robust Aβ-plaque pathology in 5XFAD mice 5 . Aβ clearance was more pronounced after two sessions of PD-1 blockade than after a single session, and both mouse groups had reduced plaque load relative to untreated or IgG-treated 5XFAD mice. Astrogliosis, as assessed by glial fibrillary acidic protein (GFAP) immunoreactivity, was reduced in the hippocampus of 5XFAD mice treated with either one or two sessions of PD-1 blockade, relative to that in IgG-treated controls ( Fig. 2f ). We also examined the effect of PD-1 blockade in another AD model, APP/PS1 mice 16 , which develop Aβ-plaque pathology at a more advanced age than do 5XFAD mice. APP/PS1 mice were tested at two stages of disease progression (8 and 15 months). PD-1 blockade reduced hippocampal Aβ plaque load in anti–PD-1–treated APP/PS1 mice when compared to IgG-treated controls ( Fig. 2g,h ). Our findings show that in the context of neurodegenerative disease, PD-1 blockade evokes a systemic IFN-γ–dependent immune response that enables the mobilization of monocyte-derived macrophages to the brain. This process is reminiscent of tissue-specific immune surveillance induced by immune checkpoint blockade in cancer therapy 10 , 11 , 17 . PD-1 blockade treatment reduced the cerebral Aβ plaque load in two mouse models of AD at advanced stages of the disease. In particular, repeated treatment sessions were required for maintaining a long-lasting beneficial effect on disease pathology. Given that immune checkpoint blockade releases self-reactive T cells from immune tolerance mechanisms 18 , these findings support a neuroprotective role for CNS-specific cell-mediated immunity 19 . Notably, immune checkpoint blockade is not meant to target a single disease-causing etiologic factor in AD; rather, this approach is meant to augment the overall ability of the immune system to clear brain pathology. In cancer immunotherapy, anti–PD-1 and anti–PD-L1 antibodies have been shown to be relatively safe and well tolerated 20 . Taken together, our findings identify immune checkpoint blockade as a novel therapeutic strategy for AD and, potentially, for other neurodegenerative diseases. Methods Animals. Heterozygous 5XFAD transgenic mice (on a C57/BL6-SJL background) that overexpress familial AD mutant forms of human APP (the Swedish mutation, K670N/M671L; the Florida mutation, I716V; and the London mutation, V717I) and PS1 (M146L/L286V) transgenes under the transcriptional control of the neuron-specific mouse Thy-1 promoter 5 (5XFAD line Tg6799; The Jackson Laboratory) were used. Genotyping was performed by PCR analysis of tail DNA, as previously described 5 . Male and female mice were bred and maintained by the animal breeding center of the Weizmann Institute of Science. AD double-transgenic B6.C3-Tg (APPswe, PSEN1dE9) 85Dbo/Mmjax mice 17 (on a C57BL/6 background) were a gift from Dr. Inna Slutsky, Tel Aviv University, Tel Aviv, Israel.
All experiments detailed herein complied with the regulations formulated by the Institutional Animal Care and Use Committee (IACUC) of the Weizmann Institute of Science. Antibodies. For PD-1 blockade, a PD-1–specific blocking antibody (anti–PD-1; rat isotype; clone RMP1-14; BioXCell) or isotype control immunoglobulin (rat IgG2a; BioXCell) was administered i.p. on days 1 and 4 of each treatment session at a dose of 250 μg per mouse. For IFN-γ neutralization, mice were treated with 500 μg of an IFN-γ–specific blocking antibody (anti–IFN-γ; clone XMG1.2; BioXCell) on the day before each anti–PD-1 injection. RNA purification, cDNA synthesis and quantitative real-time PCR analysis. Mice were transcardially perfused with phosphate-buffered saline (PBS) before tissue excision. Choroid plexus tissues were isolated under a dissecting microscope (Stemi DV4; Zeiss) from the lateral, third and fourth ventricles of the brain. Total RNA from the choroid plexus was extracted using the RNA MicroPrep Kit (Zymo Research), and mRNA (1 μg) was converted into cDNA using a High Capacity cDNA Reverse Transcription Kit (Applied Biosystems). The expression of specific mRNAs was assayed using fluorescence-based quantitative real-time PCR (RT-qPCR) (Fast-SYBR PCR Master Mix; Applied Biosystems). Quantification reactions were performed in triplicate for each sample using the 'delta-delta Ct' (2^−ΔΔCt) method (illustrated in the sketch below). Peptidylprolyl isomerase A ( Ppia ) was chosen as a reference (housekeeping) gene. At the end of the assay, a melting curve was constructed to verify the specificity of the reaction. To determine the expression levels of Ifng , cDNA was pre-amplified for 14 PCR cycles, according to the manufacturer's protocol (PreAmp Master Mix Kit; Applied Biosystems), thereby increasing the sensitivity of the subsequent real-time PCR reaction. The TaqMan Assays-on-Demand probes Mm02342430_g1 ( Ppia ) and Mm01168134_m1 ( Ifng ) were used. For the other genes examined, the following primers were used: Ppia forward 5′-AGCATACAGGTCCTGGCATCTTGT-3′ and reverse 5′-CAAAGACCACATGCTTGCCATCCA-3′; Icam1 forward 5′-AGATCACATTCACGGTGCTGGCTA-3′ and reverse 5′-GCTTTGGGATGGTAGCTGGAAGA-3′; Ccl2 forward 5′-CATCCACGTGTTGGCTCA-3′ and reverse 5′-GATCATCTTGCTGGTGAATGAGT-3′. RT-qPCR reactions were performed and analyzed using StepOne software V2.2.2 (Applied Biosystems).
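To make the relative quantification concrete, the following is a minimal sketch of the 'delta-delta Ct' calculation described in the RT-qPCR subsection above. The function and the triplicate Ct values are hypothetical illustrations, not data or code from the study, and the calculation assumes approximately 100% PCR efficiency, which is the usual premise of the 2^−ΔΔCt method:

```python
import numpy as np

def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the delta-delta Ct method (2^-ddCt).

    Each argument is an array of triplicate Ct values for one gene in one
    group; in this study Ppia serves as the reference (housekeeping) gene.
    """
    # Normalise the target gene to the reference gene within each group
    d_ct_treated = np.mean(ct_target_treated) - np.mean(ct_ref_treated)
    d_ct_control = np.mean(ct_target_control) - np.mean(ct_ref_control)
    # Compare the treated group with the control group
    dd_ct = d_ct_treated - d_ct_control
    # Assuming ~100% PCR efficiency, template doubles every cycle
    return 2.0 ** (-dd_ct)

# Hypothetical triplicate Ct values (not data from the study):
# Ifng in anti-PD-1-treated vs IgG-treated choroid plexus, Ppia as reference.
fc = fold_change_ddct(
    ct_target_treated=[27.1, 27.3, 27.0], ct_ref_treated=[18.2, 18.1, 18.3],
    ct_target_control=[29.5, 29.4, 29.6], ct_ref_control=[18.3, 18.2, 18.2],
)
print(f"Ifng fold change (treated vs control): {fc:.2f}")
```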
Immunohistochemistry. Mice were transcardially perfused with PBS before tissue excision and fixation. Tissues that were not adequately perfused were not further analyzed, because autofluorescence associated with blood contamination interferes with immunostaining analyses. Tissue processing and immunohistochemistry were performed on paraffin-embedded, sectioned (6 μm thick) mouse brains. The following primary antibodies were used: mouse anti-Aβ (1:300, Covance, #SIG-39320, clone 6E10) and rabbit anti-GFAP (1:200, Dako, #Z0334, #LOT 00085137). Secondary antibodies were Cy2- and Cy3-conjugated donkey anti-mouse or anti-rabbit antibodies (1:200; all from Jackson ImmunoResearch). The slides were exposed to Hoechst nuclear staining (1:4,000; Invitrogen Probes) for 1 min, before being sealed with Aquamount (Polysciences) and glass covers. Two negative controls were routinely used in immunostaining procedures, which involved staining with isotype control antibody followed by secondary antibody, or staining with secondary antibody alone. Microscopic analysis was performed using a fluorescence microscope (E800; Nikon) equipped with a digital camera (DXM 1200F; Nikon), and with either a 20× numerical aperture (NA) 0.50 or 40× NA 0.75 objective lens (Plan Fluor; Nikon). Recordings were made on postfixed tissues using acquisition software (NIS-Elements, F3; Nikon). For the quantification of staining intensity, total cell and background fluorescence intensity was measured using ImageJ software (from the US National Institutes of Health; NIH), and the intensity of specific staining was calculated, as previously described 21 . Images were cropped, merged and optimized using Photoshop CS6 13.0 (Adobe), and they were arranged using Illustrator CS5 15.1 (Adobe). Flow cytometry and sorting, and sample preparation and analysis. Mice were transcardially perfused with PBS before tissue extraction. Spleens were mashed with the plunger of a syringe and treated with ammonium chloride potassium (ACK)-lysing buffer to remove erythrocytes. Brains were removed under a dissecting microscope (Stemi DV4; Zeiss) in PBS, and tissues were dissociated using the GentleMACS dissociator (Miltenyi Biotec). All samples were filtered through a 70-μm nylon mesh and blocked with anti-CD16/32 (Fc block; 1:100; BD Biosciences) before immunostaining. For intracellular staining of IFN-γ, the cells were incubated with phorbol 12-myristate 13-acetate (PMA; 10 ng/ml; Sigma-Aldrich) and ionomycin (250 ng/ml; Sigma-Aldrich) for 6 h, and brefeldin-A (10 μg/ml; Sigma-Aldrich) was added for the last 4 h of incubation. Intracellular labeling of cytokines was performed using the BD Cytofix/Cytoperm Plus fixation/permeabilization kit (cat. no. 555028) according to the manufacturer's protocol. The following fluorochrome-labeled monoclonal antibodies were purchased from BD Pharmingen, BioLegend, R&D Systems or eBiosciences and used according to the manufacturers' protocols: Brilliant-violet-421 (1:200)- or PerCP-Cy5.5-conjugated anti-CD45 (1:400); phycoerythrin (PE)- or Alexa Fluor 450–conjugated anti-CD4 (1:200); fluorescein isothiocyanate (FITC)-conjugated anti-TCRβ (1:200); PerCP-Cy5.5–conjugated anti-CD11b (1:400); PE-conjugated anti-Ly6C (1:200); allophycocyanin (APC)-conjugated anti–IFN-γ (1:50); APC-conjugated anti-SRA1/MSR (1:20). Cells were analyzed on an LSRII cytometer (BD Biosciences) using FACSDiva (BD Biosciences) and FlowJo (Tree Star, Inc.) software. In each experiment, relevant negative-control groups, positive controls and single-stained samples for each tissue were used to identify the populations of interest and to exclude others. In sorting experiments, 1,500–3,000 myeloid cells were collected per sample using the FACSAriaIII sorter (BD Biosciences) into 50 μl of lysis buffer. RNA was extracted from sorted cells, DNA libraries were produced and sequencing was conducted, as described below. RNA sequencing, library construction and analysis. For each library, 10 ng of RNA from each sample was used. A derivation of MARS-seq 22 , developed for single-cell RNA-seq, was used to produce sensitive and robust RNA expression libraries. A minimum of two replicates were used per population. An average of 4 million reads per library was obtained and aligned to the mouse reference genome (National Center for Biotechnology Information (NCBI) 37, mm9) using TopHat v2.0.10 (ref. 23 ) with default parameters. Expression levels were calculated and normalized using Homer 24 .
RNA-seq analysis of the choroid plexus was focused on genes with levels of expression above the sixtieth percentile and was robust across different cutoffs. A constant value representing the sixtieth percentile was added to each data point in order to reduce variability among genes expressed at low levels. Genes were ordered according to their average expression levels in anti–PD-1–injected mice, when compared to IgG-treated and untreated 5XFAD mice, and they were analyzed for Gene Ontology (GO) enrichment using GOrilla. RNA-seq analysis of myeloid cells was focused on genes with levels of expression above the fiftieth percentile (to remove genes expressed at low levels), which were then filtered to exclude nonchanging genes (maximum median of sets − minimum median of sets > 0.75), followed by K -means clustering on median columns ( K = 4). Heat maps were prepared using GENE-E. Radial-arm water maze. The RAWM was used to test spatial learning and memory, as previously described in detail 25 . Briefly, six stainless steel inserts were placed in the tank, forming six swim arms radiating from an open central area. The escape platform was located at the end of one arm (the goal arm), 1.5 cm below the water's surface, in a pool 1.1 m in diameter. The water temperature was kept at 21–22 °C. Water was made opaque with milk powder. In the testing room, only distal visual shape and object cues were available to the mice to aid in finding the location of the submerged platform. On day 1, mice were trained for 15 trials with alternating visible and hidden platforms, with the last four trials using the hidden platform only. On day 2, mice were trained for 15 trials with the hidden platform. Entry into an incorrect arm, or failure to select an arm within 15 s, was scored as an error. Spatial learning and memory were measured by counting the number of arm entry errors, by a researcher blinded to the identity of the mice. Mice that displayed motor deficits in swimming performance were excluded from further analysis at the beginning of the experiments. No motor deficits were observed in relation to treatments. Data were analyzed as the mean number of errors, for training blocks of three consecutive trials. Aβ plaque quantitation. From each brain, 6-μm coronal slices were collected from five different pre-determined depths, altogether covering 600 μm throughout the region of interest (the hippocampus and cerebral cortex). Slices were immunostained, and histogram-based segmentation of positively stained pixels was performed using Image-Pro Plus software (Media Cybernetics, Bethesda, Maryland, USA). The segmentation algorithm was manually applied to each image in the hippocampus, the dentate gyrus area or in cortical layer V, and the percentage of the area occupied by total Aβ immunostaining was determined. Plaque numbers were quantified from the same 6-μm coronal brain slices, and they are presented as the average number of plaques per brain region. Prior to quantification, samples were coded to mask the identity of the mice, and plaque burden was quantified by an observer blinded to the identity of the treatment groups.
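As an illustration of the histogram-based segmentation just described, here is a minimal sketch of how percentage stained area and plaque counts can be derived from a thresholded image. The threshold value, the synthetic image and the helper function are hypothetical stand-ins for the manual Image-Pro Plus workflow used in the study:

```python
import numpy as np
from scipy import ndimage

def plaque_metrics(image, threshold):
    """Threshold-based segmentation of A-beta staining in one slice.

    image: 2D array of pixel intensities from the A-beta channel.
    Returns the percentage of area occupied by staining and the plaque count.
    """
    mask = image > threshold                   # positively stained pixels
    pct_area = 100.0 * mask.mean()             # % of region occupied by A-beta
    _, n_plaques = ndimage.label(mask)         # connected components = plaques
    return pct_area, n_plaques

# Hypothetical slice: background noise plus two bright "plaques".
rng = np.random.default_rng(1)
img = rng.normal(10.0, 2.0, size=(200, 200))
img[50:60, 50:60] += 50.0
img[120:135, 140:150] += 60.0

area, count = plaque_metrics(img, threshold=30.0)
print(f"A-beta area: {area:.2f}%  plaques: {count}")
```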
Data were analyzed using a two-tailed Student's t test to compare between two groups, and one-way ANOVA was used to compare several groups, followed by the Bonferroni post-hoc procedure for pairwise comparison of groups after the null hypothesis was rejected ( P < 0.05). Data from behavioral tests were analyzed using two-way repeated-measures ANOVA, and Bonferroni post-hoc procedure was used for follow-up pairwise comparison. Sample sizes were chosen with adequate statistical power on the basis of the literature and past experience, and mice were allocated to experimental groups according to age, gender and genotype. RAWM behavioral experiments were carried out in several cohorts of mice that contained all tested groups of treated mice and controls, which were examined in constitutive days, and the data were combined for analysis. Investigators were blinded to the identity of the groups during experiments and during outcome assessment. All inclusion and exclusion criteria were pre-established according to the IACUC. Results are presented as means ± s.e.m. In the graphs, y -axis error bars represent s.e.m. Statistical calculations were performed using GraphPad Prism software (GraphPad Software, San Diego, California).
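For readers who want to reproduce this kind of analysis pipeline, the following is a minimal sketch of the one-way ANOVA with Bonferroni post-test workflow described above, using SciPy. The group labels and per-mouse error counts are hypothetical, not data from the study:

```python
from itertools import combinations

import numpy as np
from scipy import stats

# Hypothetical per-mouse RAWM error counts (not data from the study):
groups = {
    "anti-PD-1": np.array([2.0, 1.5, 2.5, 1.0, 2.0]),
    "IgG":       np.array([4.5, 5.0, 3.5, 4.0, 4.5]),
    "untreated": np.array([5.0, 4.0, 4.5, 5.5, 4.0]),
}

# One-way ANOVA across the three groups
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")

# Bonferroni post-test: pairwise t tests only if the ANOVA rejects the null
# (P < 0.05), multiplying each pairwise P value by the number of comparisons.
if p_anova < 0.05:
    pairs = list(combinations(groups, 2))
    for a, b in pairs:
        _, p = stats.ttest_ind(groups[a], groups[b])
        p_adj = min(p * len(pairs), 1.0)   # Bonferroni correction
        print(f"{a} vs {b}: adjusted P = {p_adj:.4f}")
```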
(MedicalXpress)—A team of researchers working at the Weizmann Institute of Science in Israel has found that a type of drug meant to help the immune system kill tumors also reduces Alzheimer's-type symptoms in mouse models. In their paper published in the journal Nature Medicine, the team describes their study of drugs known as PD-1 immune checkpoint blockers in mouse models and the results they found. As scientists close in on the cause of Alzheimer's disease in the hope of finding a cure, more and more evidence points to problems with the immune system and inflammation as a factor. For the past several years, the prevailing view has been that an overactive immune system might be the root cause, but new studies have begun to suggest the opposite might be true—and that boosting the immune response in the brain might help reduce symptoms of the disease. In this new effort, the researchers looked at PD-1 immune checkpoint blockers because they work by disabling immune checkpoints, the roadblocks the body sets up to stop the immune system from attacking normal tissue. But tumors have been found to trick this same part of the immune system to prevent it from attacking them. Thus, the idea behind PD-1 blockers is to override the checkpoints and force the immune system to attack the tumor anyway, causing it to shrink and disappear. In this new effort, the goal was to learn whether such drugs might help stop or reverse the symptoms of Alzheimer's disease by boosting an immune response in the brain. To find out, the researchers genetically engineered test mice to develop Alzheimer's symptoms, both memory loss and the buildup of amyloid in the brain, and then gave each of them PD-1 blockers to see if the treatment caused any improvement. They report that amyloid buildup in the brains of the mice was reduced by half and that most of them were once again able to make their way through a maze—a test of their memory abilities. The research team notes that some PD-1 blockers are already on the market; Keytruda, for example, has already been approved for use in treating tumors. Thus, testing the drug on human patients in clinical trials should proceed rather quickly if further tests suggest it might actually work in people with Alzheimer's disease.
10.1038/nm.4022
Physics
Researchers demonstrate a 100x increase in the amount of information that can be 'packed into light'
Scientific Reports, http://www.nature.com/articles/srep27674 Journal information: Scientific Reports
http://www.nature.com/articles/srep27674
https://phys.org/news/2016-06-100x-amount.html
Abstract Mode division multiplexing (MDM) is mooted as a technology to address future bandwidth issues and has been successfully demonstrated in free space using spatial modes with orbital angular momentum (OAM). To further increase the data transmission rate, more degrees of freedom are required to form a densely packed mode space. Here we move beyond OAM and demonstrate multiplexing and demultiplexing using both the radial and azimuthal degrees of freedom. We achieve this with a holographic approach that allows over 100 modes to be encoded on a single hologram, across a wide wavelength range, in a wavelength independent manner. Our results offer a new tool that will prove useful in realizing higher bit rates for next generation optical networks. Introduction Since the beginning of the 21st century there has been a growing interest in increasing the capacity of telecommunication systems to eventually overcome our pending bandwidth crunch. Significant improvements in network transmission capacity have been achieved through the use of polarization division multiplexing (PDM) and wavelength division multiplexing (WDM) techniques, and also through implementing high order modulation formats 1 , 2 , 3 . However, it might not be possible to satisfy the exponential global capacity demand in the near future. One potential solution to eventually cope with bandwidth issues is space division multiplexing (SDM) 4 , 5 , 6 and in particular the special case of mode division multiplexing (MDM), which was first suggested in the 1980s 7 . In MDM based communication systems, each spatial mode, from an orthogonal modal basis, can carry an independent data stream, thereby increasing the overall capacity by a factor equal to the number of modes used 8 . A particular mode basis for data communication is orbital angular momentum (OAM) 9 , 10 , which has become the mode of choice in many studies due to its topical nature and ease of detection with phase-only optical elements 11 , 12 . Indeed, OAM multiplexing implementations have reported Tbit/s transmission capacity over both free space and optical fibers 13 , 14 . More recent reports have shown free space communication with a bit rate of 1.036 Pbit/s and a spectral efficiency of 112.6 bit/s/Hz using 26 OAM modes 15 . However, by taking into account the effects of atmospheric turbulence on the crosstalk and system bit error rate (BER) in an OAM multiplexed free space optics (FSO) link, experimental results have indicated that turbulence-induced signal fading will significantly deteriorate link performance and might cause link outage in the strong turbulence regime 16 , 17 , 18 . Recently, Zhao et al . claimed that OAM is outperformed by any conventional mode division multiplexing technique with a complete basis or by conventional line of sight (LOS) multiple-input multiple-output (MIMO) systems 19 , 20 . Indeed, OAM is only a subspace of the full space of Laguerre Gaussian (LG) beams, where modes have two degrees of freedom: an azimuthal index ℓ and a radial index p , the former responsible for the OAM. The addition of the radial degree of freedom certainly increases the bandwidth capacity, since for each value of ℓ an infinite number of p values can be used to access many more information channels. In this study, we demonstrate a new holographic tool to realise a communication link using a densely packed LG mode set incorporating both radial and azimuthal degrees of freedom.
We show that it is possible to multiplex/demultiplex over 100 spatial modes on a single hologram, written to a spatial light modulator, in a manner that is independent of wavelength. Our subset of the LG modes was successfully used as information carriers over a free space link to illustrate the robustness of our technique. The information is recovered by simultaneously detecting all 100 modes employing a single hologram. Using this approach we are able to transmit several images with correlations higher than 98%. Although our scheme is a proof-of-concept, it provides a useful basis for increasing the capacity of future optical communication systems. Results Consider an LG mode in cylindrical coordinates, at its waist plane ( z = 0), described by $LG_{p\ell}(r,\phi) = \sqrt{\frac{2\,p!}{\pi\,(p+|\ell|)!}}\,\frac{1}{w_0}\left(\frac{\sqrt{2}\,r}{w_0}\right)^{|\ell|}\exp\!\left(-\frac{r^2}{w_0^2}\right)L_p^{|\ell|}\!\left(\frac{2r^2}{w_0^2}\right)e^{i\ell\phi}$ (Eq. 1), where p and ℓ are the radial and azimuthal indices respectively, ( r , ϕ) are the transverse coordinates, $L_p^{|\ell|}$ is the generalized Laguerre polynomial and w 0 is a scalar parameter corresponding to the Gaussian (fundamental mode) radius. The mode size is a function of the indices and is given by $w_{p\ell} = w_0\sqrt{2p + |\ell| + 1}$. Such modes are shape invariant during propagation and reduce to the special case of the Gaussian beam when p = ℓ = 0. This full set of modes can be experimentally generated using complex-amplitude modulation. For this experiment we use the CGH type 3 as described in 21 to generate a subset of 35 modes given by combinations of p = {0, 1, 2, 3, 4} and ℓ = {−3, −2, −1, 0, 1, 2, 3}. In this way, the amplitude and phase of the mode set ( Eq. 1 ) can be encoded into phase-only digital holograms and displayed on phase-only SLMs to generate any mode. Moreover, the holograms can be multiplexed into a single hologram to generate multiple modes simultaneously. Figure 1(a) shows the holograms generated to create the desired subset of modes for this experiment. Their corresponding theoretical intensity profiles can be seen in Fig. 2(a) . Figure 1 Complex amplitude modulation and spatial multiplexing. ( a ) Holograms encoded via complex-amplitude modulation to generate different modes. ( b ) Holograms encoded with different carrier frequencies are superimposed into a single hologram to produce a spatial separation of all modes in the Fourier plane. Full size image Figure 2 Schematic of our multiplexing and demultiplexing setup. ( a ) Intensity profiles of modes generated from combinations of p = {0, 1, 2, 3, 4} and ℓ = {−3, −2, −1, 0, 1, 2, 3}. ( b ) Experimental setup: three components of a multiline argon-ion laser, λ 1 = 457 nm, λ 2 = 488 nm and λ 3 = 514 nm, are separated using a grating and sent to a spatial light modulator (SLM-1). ( c ) The SLM is split into three independent screens and addressed with holograms to produce the set of modes shown in ( a ). The information is propagated through free space and reconstructed in the second stage with a modal filter. ( d ) The modal filter consists of a superposition of all holograms encoded on SLM-2. ( e ) Each mode is identified in the far field using a CCD camera and a lens. Full size image The modes generated in this way were encoded using three different wavelengths onto a single hologram, in a wavelength independent manner, and sent through free space. At the receiver, we were able to identify with high fidelity any of the 105 encoded modes in a single real time measurement, using a wavelength independent multimode correlation filter on a single SLM 22 , 23 , 24 . This involves superimposing a series of single transmission functions t n ( r ), each multiplied with a unique carrier frequency K n , to produce a final transmission function T ( r ):
$T(\mathbf{r}) = \sum_{n=1}^{N} t_n(\mathbf{r})\, e^{i\mathbf{K}_n\cdot\mathbf{r}}$, where N is the maximum number of multiplexed modes. In the Fourier plane the carrier frequencies K n manifest as separate spatial coordinates, as illustrated in Fig. 2(e) . This approach allows multiple LG modes to be generated and detected simultaneously, producing a high data transmission rate. The experimentally generated modes are used to encode and decode information in our multiplexing and demultiplexing scheme, as shown in Fig. 2 . To date only the azimuthal component, responsible for the OAM content of these modes, has been used for data transmission, ostensibly because the divergence is lowest for p = 0 9 , 19 . Here we demonstrate that the propagation dynamics, divergence being one example, are governed by the beam quality factor $M^2 = 2p + |\ell| + 1$ 25 and that modes with the same $M^2$ will propagate in an identical manner regardless of the radial component p . For example, the modes LG 11 and LG 03 will experience the same diffraction since both have the same value $M^2 = 4$. To show this, we encoded information in the set of modes that incorporates both degrees of freedom, created as described before. Moreover, we multiplexed the above mentioned subset of modes on three different wavelengths to increase our encoding/decoding basis set from 35 to 105. All modes were generated using a single SLM (SLM-1 in Fig. 2(b) ) and a wide range multi-line laser. The data are encoded using this mode set and transferred in free space. This information is recovered by projecting the propagated information onto a modal filter. The modal filter consists of multiplexed holograms displayed on a second SLM (SLM-2) and a CCD camera, capable of identifying with high accuracy any of the input modes (see experimental details). The intermodal crosstalk for the chosen modes, that is, the crosstalk between the input modes and the output modes, was computed by measuring, for each input mode, the on-axis intensity of every output mode, normalized so that the total intensity adds up to unity ( Fig. 3 ). As can be seen, the crosstalk between the different modes is very low and is independent of the p value. In a real scenario, where modes propagate over long distances, intermodal crosstalk would become an issue and the addition of compensating methods, such as adaptive optics, would be needed. Figure 3 Crosstalk. For each input mode we measure the output crosstalk for all 105 output modes. In all cases the input mode is detected with very high accuracy, higher than 98%. Full size image Figure 4 shows an example of an RGB image encoded, pixel by pixel, as explained in the next section, and reconstructed in real time with a very high correlation coefficient ( c = 0.96). The correlation coefficient is a dimensionless number that measures the similarity between two images, being 0 for nonidentical images and 1 for identical images. Figure 4 Example of sent and received images. A quantification of the similarity between sent and received images is done using 2D image correlation. The value of the correlation coefficient ranges from 0 for nonidentical images to 1 for identical images. The correlation coefficient for the above image is c = 0.96. Rubik's Cube ® used by permission of Rubik's Brand Ltd . Full size image Encoding scheme The information encoding is performed in three different ways. In the first, applied to grayscale images, we assign a particular mode and a particular wavelength to the gray-level of each pixel forming the image.
For example, the mode LG 0,−3 generated with λ 1 is assigned to the lowest gray-level and the mode LG 4,3 generated with λ 3 to the highest [see Fig. 5(a) ]. In this approach we are able to reach 105 different levels of gray. In a second approach, applied to color images, each pixel is first decomposed into its three color components (red, blue and green). The level of saturation of each color is assigned to one of the 35 different spatial modes and to a specific wavelength λ 1 , λ 2 or λ 3 [see Fig. 5(b) ]. In this approach only 35 levels of saturation can be reached, with a total number of 105 generated modes. Finally, in the third approach we implement multi-bit encoding [see Fig. 5(c) ]. In this scheme, 256 levels of contrast are achieved by multiplexing eight different modes on a single hologram, with each of the 256 possible on/off combinations of these 8 modes representing a particular gray level. Upon arrival at the detector each combination is uniquely identified and the information is decoded to its 8-bit form to reconstruct the image. This approach was extended to high contrast color images by using a particular wavelength for each primary color intensity, achieving a total rate of 24 bits per pixel. The reliability of our technique was further tested by transmitting different complex images containing all levels of saturation in each RGB component. The transmission error rate, defined as the ratio between the number of wrong pixels and the total number of transmitted pixels, was very low: it did not reach 1% in the case of gray-scale images or 2% for RGB images. Here we only show the results for one image ( Fig. 4 ), which clearly shows the very high similarity between the original and recovered images. Figure 5 Encoding configurations. ( a ) Single colour channel encoding, applied to gray-scale images. ( b ) RGB encoding, applied to colour images. ( c ) Multi-bit encoding, applied to both gray-scale and colour images. Rubik's Cube ® used by permission of Rubik's Brand Ltd . Full size image Discussion Very recently it was pointed out that OAM multiplexing is not an optimal technique for free-space information encoding and that OAM itself does not increase the bandwidth of optical communication systems 19 , 20 . It has also been suggested that MDM requires a complete mode set for a real bandwidth increase. Indeed, in all work to date only the azimuthal component of transverse modes, which gives rise to OAM, has been used in multiplexing schemes. Here we point out that the propagation dynamics (beam size, divergence, phase shift etc.) in free space are entirely governed by the beam quality factor $M^2$ 25 , with analogous relations for fibre modes. The $M^2$ can be viewed as a mode index: modes with the same index (e.g., p = 0, ℓ = 3 and p = 1, ℓ = 1) will propagate in an identical manner as they have the same space-bandwidth product (see supplementary information for some examples). It is clear that one mode set will be as good as any other (at least in terms of perturbation-free communication), provided that the elements are orthogonal, regardless of whether it carries OAM or not. To demonstrate this, we create a mixed radial and azimuthal mode set from the basis (with p = {0, 1, 2, 3, 4} and ℓ = {−3, −2, −1, 0, 1, 2, 3}) and use this to transfer information over free space.
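The propagation argument above can be checked numerically. The sketch below assumes the standard second-moment relations for LG beams, W(z) = w0·sqrt(M²)·sqrt(1 + (z/zR)²) with M² = 2p + |ℓ| + 1; the waist and wavelength values are illustrative choices, not the experimental parameters:

```python
import numpy as np

def m_squared(p, l):
    """Beam quality factor of an LG_{p,l} mode: M^2 = 2p + |l| + 1."""
    return 2 * p + abs(l) + 1

def second_moment_radius(z, w0, wavelength, p, l):
    """Second-moment beam radius of an LG mode at distance z from its waist.

    An LG_{p,l} beam behaves like its embedded Gaussian scaled by sqrt(M^2):
    W(z) = w0 * sqrt(M^2) * sqrt(1 + (z / zR)^2), with zR = pi * w0^2 / lambda.
    """
    z_r = np.pi * w0 ** 2 / wavelength
    return w0 * np.sqrt(m_squared(p, l)) * np.sqrt(1.0 + (z / z_r) ** 2)

# Illustrative parameters (not the experimental values): 1 mm waist, 514 nm.
w0, lam = 1.0e-3, 514e-9
for p, l in [(1, 1), (0, 3)]:          # LG_11 and LG_03, both with M^2 = 4
    w_far = second_moment_radius(10.0, w0, lam, p, l)
    print(f"LG_{p}{l}: M^2 = {m_squared(p, l)}, "
          f"radius at z = 10 m: {w_far * 1e3:.3f} mm")
```

Both modes print the same radius at every distance, which is the sense in which modes sharing an M² value propagate identically.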
Moreover, by implementing MDM on different wavelengths, we demonstrate that it is possible to expand the overall transmission capacity by several orders of magnitude. The number of carrier channels is given by the number of optical modes times the number of wavelengths. In our experiment we generated 35 optical modes and combined these with 3 different wavelengths, creating a basis set of 105 modes. These modes are used as information carriers in a proof-of-concept free space link, capable of transmitting and recovering information in real time with very high fidelity. Figure 4 is an example of the many images transmitted in our link. Each image is sent pixel by pixel; for this, the colour-saturation information of each pixel is encoded using our mode set. Our encoding/decoding technique is key to the implementation of our optical link. Its simplicity, linked to the versatility of SLMs, which are capable of operating in a wide range of the spectrum as well as with broadband sources, allowed us to generate customized digital holograms to encode and decode the information. Furthermore, the designed correlation filters are wavelength insensitive, which allows the technique to operate over a large spectrum, in contrast to existing mode (de)multiplexers, such as the photonic lantern, which are extremely wavelength sensitive. This approach can be extended to a wider range of wavelengths and to a higher number of modes, both limited by physical properties of the SLM, for example its spatial resolution and wavelength operating range. Polarization could potentially serve as an additional degree of freedom and could possibly double the overall transmission capacity of the system. Even though here we have used our modes as information carriers, this experiment establishes the basis for this technique to be incorporated into standard communication systems. In this case each mode would represent a channel that can be modulated and detected with conventional technology. To conclude, we have introduced a novel holographic technique that allows over 100 modes to be encoded/decoded on a single hologram, across a wide wavelength range, in a wavelength independent manner. This technique allowed us to incorporate the radial component of LG beams as another degree of freedom for mode division multiplexing. By combining both degrees of freedom, radial and azimuthal, with wavelength-division multiplexing, we are able to generate over 100 information channels using a single hologram. As a proof-of-concept, we implemented different encoding techniques to transmit information, with very high accuracy, in a free space link that employs conventional technology such as SLMs and CCD cameras. Our approach can be implemented in both free space and optical fibres, facilitating studies towards high bit rate next generation networks. Additionally, our technique could be extended to other types of orthogonal modes regardless of their OAM content, for example Hermite–Gaussian beams. Methods Experimental details The source, a continuous-wave linearly polarized argon-ion laser (Laser Physics: 457–514 nm), is expanded and collimated by a telescope ( f 1 = 50 mm and f 2 = 300 mm) to approximate a plane wave. Afterwards it is decomposed into its different wavelength components by means of a grating. Three of these components, λ 1 = 457 nm, λ 2 = 488 nm and λ 3 = 514 nm, propagating almost parallel to each other, are redirected to a HoloEye Pluto spatial light modulator (SLM, 1080 × 1920 pixels) with a pixel pitch of 8 μm [see Fig. 2(b) ]. The SLM is split into three independent screens, one for each beam, each controlled independently.
Each third is addressed with a hologram representing a Laguerre–Gaussian mode LG pℓ , where p is the radial index and ℓ the azimuthal index [see Fig. 2(c) ]. For this experiment we use 35 different modes [see Fig. 2(a) ], generated by combinations of p = {0, 1, 2, 3, 4} and ℓ = {−3, −2, −1, 0, 1, 2, 3}. It should be stressed that the selection of the modes was made arbitrarily and does not exclude any other combinations. These modes were encoded via complex amplitude modulation, a technique that allows for the generation of modes with purities higher than 0.98 26 . Additionally, we added blazed gratings to each hologram and spatially filtered the first diffraction order to generate wavelength independent holograms. Even though here we only used the first 35 LG modes, similar results would be obtained using higher order modes and a larger set of modes; the limit is only imposed by the spatial resolution of the SLM. The information decoding is performed using modal decomposition; for this, the beams are projected onto a second SLM using a 4 f configuration system ( f 3 = 150 mm). This SLM is also split into three independent screens, each of which is addressed with a multiplexed hologram. This hologram consists of the complex conjugates of all 35 modes, encoded with different spatial carrier frequencies [see Fig. 2(d) ]. To identify each mode, and therefore the gray-level of each pixel, we measured the on-axis intensity of the projection in the far field. For this we use a lens with focal length f 4 = 200 mm and a CCD camera (Point Grey Flea3 Mono USB3 1280 × 960) in a 2 f configuration system. In the detection plane (that of the camera), all 105 modes appear spatially separated, due to their unique carrier frequencies, in a rectangular configuration. In this way, an incoming mode can be unambiguously identified by detecting a high on-axis intensity [see Fig. 2(e) ]. Even though it is possible to get on-axis intensity for many other modes, the one that matches the incoming mode is always the brightest. In our experiment it is necessary to compensate for small spherical aberrations; this is done by digitally encoding a cylindrical lens on the second SLM, which corrects for all modes. Notice that, since the modes generated on SLM-1 were imaged onto SLM-2 for decoding, propagation effects such as phase shift, beam size and divergence are negligible and the modes seem to propagate identically. These effects would become stronger when propagating over long distances: modes in the same group would have identical properties, but significant differences would exist between groups. Additional Information How to cite this article : Trichili, A. et al . Optical communication beyond orbital angular momentum. Sci. Rep. 6 , 27674; doi: 10.1038/srep27674 (2016).
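As a rough illustration of the modal decomposition used for decoding, the sketch below generates a small LG basis from Eq. 1 and identifies an incoming mode from the magnitude of its overlap integrals, which is what the on-axis far-field intensity behind the correlation filter measures. The grid size, window and beam waist are illustrative assumptions, not the experimental values:

```python
import numpy as np
from math import factorial
from scipy.special import genlaguerre

def lg_mode(p, l, w0, x, y):
    """Normalized Laguerre-Gaussian field LG_{p,l} at its waist plane (z = 0)."""
    r, phi = np.hypot(x, y), np.arctan2(y, x)
    rho = 2.0 * r ** 2 / w0 ** 2
    norm = np.sqrt(2.0 * factorial(p) / (np.pi * factorial(p + abs(l)))) / w0
    return (norm * (np.sqrt(2.0) * r / w0) ** abs(l) * np.exp(-rho / 2.0)
            * genlaguerre(p, abs(l))(rho) * np.exp(1j * l * phi))

# Sample the 35-mode basis (p = 0..4, l = -3..3) on a square grid.
n, half_width, w0 = 256, 8e-3, 1e-3
coords = np.linspace(-half_width, half_width, n)
x, y = np.meshgrid(coords, coords)
dA = (coords[1] - coords[0]) ** 2
basis = {(p, l): lg_mode(p, l, w0, x, y)
         for p in range(5) for l in range(-3, 4)}

# Demultiplexing: correlate the incoming field against every basis mode.
# The on-axis far-field signal behind the filter goes as |<LG_n | field>|^2,
# so the brightest correlation spot identifies the transmitted mode.
incoming = basis[(2, -1)]
signal = {pl: np.abs(np.sum(np.conj(m) * incoming) * dA) ** 2
          for pl, m in basis.items()}
print("detected mode (p, l):", max(signal, key=signal.get))   # -> (2, -1)
```

Because the LG modes form an orthonormal set, the overlap is close to 1 for the matching mode and close to 0 for every other mode, mirroring the low crosstalk reported in Fig. 3.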
The rise of big data and advances in information technology have serious implications for our ability to deliver sufficient bandwidth to meet the growing demand. Researchers at the University of the Witwatersrand in Johannesburg, South Africa, and the Council for Scientific and Industrial Research (CSIR) are looking at alternative sources that will be able to take over where traditional optical communications systems are likely to fail in future. In their latest research, published online today (10 June 2016) in the scientific journal Scientific Reports, the team from South Africa and Tunisia demonstrates over 100 patterns of light used in an optical communication link, potentially increasing the bandwidth of communication systems by 100 times. The idea was conceived by Professor Andrew Forbes from Wits University, who led the collaboration. The key experiment was performed by Dr Carmelo Rosales-Guzman, a Research Fellow in the Structured Light group in the Wits School of Physics, and Dr Angela Dudley of the CSIR, an honorary academic at Wits. The first experiments on the topic were carried out by Abderrahmen Trichili of Sup'Com (Tunisia) as a visiting student to South Africa, as part of an African Laser Centre funded research project. The other team members included Bienvenu Ndagano (Wits), Dr Amine Ben Salem (Sup'Com) and Professor Mourad Zghal (Sup'Com), all of whom contributed significantly to the work. Bracing for the bandwidth ceiling Traditional optical communication systems modulate the amplitude, phase, polarisation, colour and frequency of the light that is transmitted. Yet despite these technologies, we are predicted to reach a bandwidth ceiling in the near future. But light also has a "pattern": the intensity distribution of the light, that is, how it looks on a camera or a screen. Since these patterns are unique, they can be used to encode information: pattern 1 = channel 1 or the letter A, pattern 2 = channel 2 or the letter B, and so on. What does this mean? That future bandwidth can be increased by precisely the number of patterns of light we are able to use. Ten patterns mean a 10x increase in existing bandwidth, as 10 new channels would emerge for data transfer. At the moment modern optical communication systems only use one pattern. This is due to technical hurdles in how to pack information into these patterns of light, and how to get the information back out again. How the research was done In this latest work, the team showed data transmission with over 100 patterns of light, exploiting three degrees of freedom in the process. They used digital holograms written to a small liquid crystal display (LCD) and showed that it is possible to have a hologram encoded with over 100 patterns in multiple colours. The researchers demonstrated the 100x increase in the amount of information that can be "packed into light" by "sending" and "receiving" an image of a Rubik's cube encoded into patterns of light. "This is the highest number of patterns created and detected on such a device to date, far exceeding the previous state-of-the-art," says Forbes. One of the novel steps was to make the device 'colour blind', so the same holograms can be used to encode many wavelengths. According to Rosales-Guzman, to make this work "100 holograms were combined into a single, complex hologram.
Moreover, each sub-hologram was individually tailored to correct for any optical aberrations due to the colour difference, angular offset and so on". What's next? The next stage is to move out of the laboratory and demonstrate the technology in a real-world system. "We are presently working with a commercial entity to test in just such an environment," says Forbes. The approach of the team could be used in both free-space and optical fibre networks.
http://www.nature.com/articles/srep27674
Medicine
Stopping a daily aspirin routine increases heart attack risk
Discontinuation of low dose aspirin and risk of myocardial infarction: case-control study in UK primary care, BMJ 2011; 343:d4094 doi: 10.1136/bmj.d4094 (Published 19 July 2011)
http://dx.doi.org/10.1136/bmj.d4094
https://medicalxpress.com/news/2011-07-daily-aspirin-routine-heart.html
Abstract Objectives To evaluate the risk of myocardial infarction and death from coronary heart disease after discontinuation of low dose aspirin in primary care patients with a history of cardiovascular events. Design Nested case-control study. Setting The Health Improvement Network (THIN) database in the United Kingdom. Participants Individuals aged 50-84 with a first prescription for aspirin (75-300 mg/day) for secondary prevention of cardiovascular outcomes in 2000-7 (n=39 513). Main outcome measures Individuals were followed up for a mean of 3.2 years to identify cases of non-fatal myocardial infarction or death from coronary heart disease. A nested case-control analysis assessed the risk of these events in those who had stopped taking low dose aspirin compared with those who had continued treatment. Results There were 876 non-fatal myocardial infarctions and 346 deaths from coronary heart disease. Compared with current users, people who had recently stopped taking aspirin had a significantly increased risk of non-fatal myocardial infarction or death from coronary heart disease combined (rate ratio 1.43, 95% confidence interval 1.12 to 1.84) and non-fatal myocardial infarction alone (1.63, 1.23 to 2.14). There was no significant association between recently stopping low dose aspirin and the risk of death from coronary heart disease (1.07, 0.67 to 1.69). For every 1000 patients, over a period of one year there were about four more cases of non-fatal myocardial infarction among patients who discontinued treatment with low dose aspirin (recent discontinuers) compared with patients who continued treatment. Conclusions Individuals with a history of cardiovascular events who stop taking low dose aspirin are at increased risk of non-fatal myocardial infarction compared with those who continue treatment. Introduction Low dose regimens of the antiplatelet agent aspirin (acetylsalicylic acid) are a standard treatment for the secondary prevention of cardiovascular outcomes. Meta-analysis of randomised controlled trials has shown that low dose aspirin is protective in most types of patient at increased risk of occlusive vascular events, including those who have had an acute myocardial infarction or ischaemic stroke and those who have stable or unstable angina, peripheral artery disease, or atrial fibrillation. 1 Guidelines recommend long term use of low dose aspirin (75-150 mg/day) as an effective antiplatelet regimen for patients with cardiovascular disease, unless contraindicated. 2 3 Despite the strong evidence supporting the protective effects of low dose aspirin, discontinuation rates of around 50% have been reported in patients who have been taking this medication for several years. 4 5 It is therefore of concern that recent discontinuation has been linked to an increase in the risk of ischaemic events and death. Cessation of treatment with oral antiplatelet agents (including aspirin and thienopyridines) has been shown to be an independent predictor of an increase in mortality after acute coronary syndromes, 6 and multivariate analysis has shown an increased risk of transient ischaemic attack in the four weeks after discontinuation of aspirin. 7 Another study of a cohort of patients with acute coronary syndromes found that acute coronary syndrome events occurred on average 10 days after discontinuation of low dose aspirin. 
8 A systematic review of the literature to date showed that withdrawal of low dose aspirin is associated with a threefold increase in the risk of adverse cardiovascular events. 9 All the studies on this topic to date, however, have taken place in secondary care centres. We used a validated primary care database to evaluate the risk of non-fatal myocardial infarction and of death from coronary heart disease (both as separate end points and as a combined measure) after discontinuation of low dose aspirin in primary care patients taking it as secondary prevention for cardiovascular disease. Methods Data source The Health Improvement Network is a computerised medical research database that contains systematically recorded data on more than three million patients enrolled in primary care practices in the United Kingdom. Almost all of the UK population is registered with a primary care practitioner, and the network is representative of the UK population with regard to age, sex, and geographical distribution. It has also been validated for use in pharmacoepidemiological research. 10 Participating primary care practitioners record data as part of their routine care of patients, including demographic factors, consultation rates, referrals, hospital admissions, results of laboratory tests, diagnoses, and prescriptions written, and send them to the network for use in research projects. The Read classification is used to code specific diagnoses, 11 and a drug dictionary based on data from the MULTILEX classification is used to code drug prescriptions. 12 Studies have shown that 60-80% of UK patients who take aspirin for secondary prevention obtain their treatment by prescription rather than over the counter. 13 14 15 This proportion increases with age 13 and in those patients who do not have to pay prescription charges. 15 The Health Improvement Network should therefore be a representative source of data on low dose aspirin use in the UK. Source population We used the network to identify individuals aged 50-84 with a first ever prescription of low dose aspirin (defined as 75-300 mg/day) for the secondary prevention of cardiovascular or cerebrovascular events (defined as a diagnosis of angina (including stable angina), unstable angina, ischaemic heart disease, myocardial infarction, cerebrovascular disease, stroke, or transient ischaemic attack) from 1 January 2000 to 31 December 2007 (figure). Indications for first ever prescriptions for low dose aspirin were identified from the patients' computerised records. This was done manually when there was more than one potential indication. Study participants were required to have been registered with their primary care practitioner for at least two years and to have a computerised prescription history for at least a year before the start of the study. They were also required to have no diagnosis of cancer, alcohol abuse, or alcohol related disease. Figure: Study design and case ascertainment of non-fatal myocardial infarction and death from coronary heart disease among people prescribed aspirin in primary care. All individuals in the study cohort were followed up from the day after their first prescription of low dose aspirin (start date) until the first of the following end points: first recorded diagnosis of myocardial infarction, cancer, alcohol abuse, age 85, death, or the end of the study period (31 December 2007).
The final study cohort of 39 513 patients was followed up for a mean of 3.2 years (range 1.0 day to 8.0 years, SD 2.2). Selection of cases and case validation During follow-up, 3155 patients in the study cohort had a recorded diagnosis of myocardial infarction (figure). We manually reviewed the profiles of these patients, including the free text comments, to ascertain the number of patients with a new diagnosis of myocardial infarction or who died from coronary heart disease. We excluded patients with myocardial infarction if they were not admitted to hospital after the ischaemic event (and patients who were admitted to an emergency department and discharged on the same day) because events that do not require admission have lower diagnostic value than those that do require admission, which results in greater misclassification. Patients were also excluded if they were admitted to hospital for any reason other than cardiovascular disease and had a myocardial infarction while admitted. There were 2869 recorded deaths during the follow-up, and 824 of these patients had a recorded cardiovascular diagnosis in the 30 days before death (figure). We manually reviewed the profiles of these 824 individuals to identify those who had died from coronary heart disease. All those with coronary heart disease recorded on their death certificate as the underlying cause of death, or who had had a recent coronary artery occlusion or antemortem evidence of coronary heart disease in the absence of another cause of death, were considered to have died from coronary heart disease. All other patients were excluded. Previous studies have found that validation of diagnoses of myocardial infarction by a primary care practitioner and records of death from coronary heart disease results in a confirmation rate of more than 90%, 16 17 so we did not carry out further validation with primary care practitioners in our study. After the complete review process of patients with a diagnosis of myocardial infarction and patients who had potentially died from coronary heart disease, we classified 876 individuals as having non-fatal myocardial infarction and 346 individuals as having died from coronary heart disease (including fatal myocardial infarction) (figure). Selection of controls From the same source population of 39 513 patients, we randomly sampled a control group of 5000 individuals, frequency matched to the cases by age, sex, and calendar year. We used incidence density sampling so that the likelihood of being selected as a control was proportional to the person time at risk. To do this, we generated a date at random during the study period for each of the members of the source population. We obtained the random date with a pseudo-random number generator included in the C++ standard library. If the random date of a study member was included in his or her eligible person time, we used his or her random date as the index date and marked that person as an eligible control. Assessment of risk factors From the database we collected data on potential risk factors, including the number of visits to a primary care practitioner, referrals, and admissions to hospital (in the year before the index date), lifestyle factors (any time before the index date), morbidities (any time before the start date), and drug treatment (between the start date and the index date).
Drug treatment other than low dose aspirin was classified into four categories: current use—when the supply of the most recent prescription lasted until the index date or ended in the six days before the index date; recent use—when the supply of the most recent prescription ended seven to 90 days before the index date (for all medications except warfarin, for which recent use was defined as seven to 365 days before the index date); past use—when the most recent prescription ended 91 to 365 days before the index date; and non-use—when there was no recorded use of the relevant drug in the 365 days before the index date. We chose six days as the cut off for current use to allow for patients who did not completely adhere to their treatment and might still have been using the drug after the specified completion date. Assessment of discontinuation of low dose aspirin Current users were defined as individuals who were taking low dose aspirin at the index date, and discontinuers were defined as individuals with a period of over 30 days after the last prescription would have been finished (assuming complete adherence) who did not refill their prescription during this time. Discontinuers were then categorised into two mutually exclusive groups: recent discontinuers were patients whose last prescription for low dose aspirin finished 31 to 180 days before the index date; distant discontinuers were those whose last prescription finished 181 to 365 days before the index date. To assess the effect of the definition of discontinuation, we performed a second analysis with discontinuation defined as a period of over 15 days after the last prescription would have been finished (assuming complete adherence), with no refill of the prescription during this time. In each case, investigators were blinded as to whether the record belonged to a case or a control. We identified reasons for discontinuation through manual review of patients' profiles and classified them into four mutually exclusive categories: treatment change—defined as a switch, initiated by a physician, from low dose aspirin to another antiplatelet drug (such as clopidogrel or dipyridamole) or to an anticoagulant such as warfarin, with no evidence to suggest an adverse event related to aspirin; safety concerns—defined as evidence of an adverse event related to low dose aspirin treatment (such as upper gastrointestinal bleeding or other upper gastrointestinal complications), intolerance to low dose aspirin (allergy/urticaria), initiation of gastroprotective medication, or planned surgery; use of over the counter aspirin—reported when the general practitioner specified that patients were taking low dose aspirin in the absence of a recorded prescription for aspirin; and non-adherence—defined as discontinuation in the absence of any of the above factors. Analysis We calculated the incidence of non-fatal myocardial infarction and of death from coronary heart disease and performed a nested case-control analysis using unconditional logistic regression to assess potential risk factors for these outcomes. 18 The logistic regression was used to estimate odds ratios, which are unbiased estimates of incidence rate ratios under incidence density sampling. 19 The analyses used the occurrence of myocardial infarction and death from coronary heart disease as the dependent variable and the factors listed below as independent variables. Missing demographic data were assessed as a separate category. Risk estimates were adjusted by age, sex, calendar year, time to event, smoking status, ischaemic heart disease (at start date), cerebrovascular disease (at start date), diabetes (at start date), chronic obstructive pulmonary disease (at start date), and use of clopidogrel, statins, anticoagulants, nitrates, antihypertensives, oral steroids, or non-steroidal anti-inflammatory drugs. Analyses were stratified by sex and age. The significance of the interaction was tested with a likelihood ratio test by comparing a model with the main effects of two variables (sex and discontinuation) and the interaction term with a reduced model incorporating only the main effects. We also performed sensitivity analyses to assess the risk of residual confounding.
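To illustrate the nested case-control machinery described above—incidence density sampling of controls via random index dates, followed by unconditional logistic regression whose odds ratio estimates the rate ratio—here is a minimal sketch on synthetic data. All numbers, variable names and the simplified fixed exposure are assumptions for illustration, not the study's data or code, and the sketch omits the frequency matching and covariate adjustment used in the actual analysis:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic cohort (all values illustrative): 8-year study window,
# exposure = "recently discontinued aspirin", true rate ratio = 1.5.
n = 20000
follow_up = rng.uniform(0.1, 8.0, n)              # eligible person-time (years)
discontinued = rng.random(n) < 0.10
rate = 0.007 * np.where(discontinued, 1.5, 1.0)   # events per person-year
event_time = rng.exponential(1.0 / rate)
case = event_time < follow_up                     # observed events

# Incidence density sampling: one random date per cohort member; a member
# is an eligible control if still at risk (event-free, under follow-up)
# on that date, so selection is proportional to person-time at risk.
random_date = rng.uniform(0.0, 8.0, n)
at_risk = random_date < np.minimum(follow_up, event_time)
controls = rng.choice(np.flatnonzero(at_risk), size=5000, replace=False)
cases = np.flatnonzero(case)

# Unconditional logistic regression on cases + sampled controls; under this
# sampling scheme the odds ratio is an unbiased estimate of the rate ratio.
y = np.concatenate([np.ones(cases.size), np.zeros(controls.size)])
x = np.concatenate([discontinued[cases], discontinued[controls]]).astype(float)
fit = sm.Logit(y, sm.add_constant(x)).fit(disp=False)
print(f"estimated rate ratio: {np.exp(fit.params[1]):.2f}")   # close to 1.5
```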
Results Incidence of non-fatal myocardial infarction and death from coronary heart disease Over a mean follow-up of 3.2 years, we identified 876 individuals with a new diagnosis of non-fatal myocardial infarction (figure). In addition, we identified 346 individuals as having died from coronary heart disease. The overall incidence of non-fatal myocardial infarction was 6.87 per 1000 person years (95% confidence interval 6.43 to 7.34). The overall incidence of death from coronary heart disease was 2.71 per 1000 person years (2.44 to 3.02). The incidence of these outcomes stratified by indication for low dose aspirin is shown in table B in the appendix on bmj.com. The combined incidence of non-fatal myocardial infarction or death from coronary heart disease was 9.58 per 1000 person years (9.06 to 10.14). This was higher in the first year of follow-up (12.92 per 1000 person years, 11.78 to 14.17) than in the rest of the study period (8.33, 7.76 to 8.94). Risk factors for non-fatal myocardial infarction and death from coronary heart disease Several baseline characteristics and lifestyle factors were associated with a significantly increased risk of non-fatal myocardial infarction or death from coronary heart disease in users of low dose aspirin (table 1). Current smokers had a significantly increased risk of non-fatal myocardial infarction or death from coronary heart disease compared with non-smokers, and patients who had been admitted to hospital in the year before the index date had a significantly greater risk than those who had not been admitted in that time. Compared with no diagnosis of the respective disease, a previous diagnosis of chronic obstructive pulmonary disease or diabetes was also associated with a significant increase in the risk of non-fatal myocardial infarction or death from coronary heart disease in this cohort of patients taking low dose aspirin (table 1). Table 1 Rate ratios for combined non-fatal myocardial infarction (MI) or death from coronary heart disease (CHD) associated with various factors in patients taking low dose aspirin. Concomitant use of oral steroids (rate ratio 2.32, 95% confidence interval 1.65 to 3.26) and traditional non-steroidal anti-inflammatory drugs (1.36, 1.04 to 1.77) was associated with a significant increase in the risk of non-fatal myocardial infarction or death from coronary heart disease compared with non-use. In contrast, patients currently taking statins had a significant decrease in risk compared with non-users (0.82, 0.69 to 0.97). Discontinuation of low dose aspirin and cardiovascular outcomes Among the 1222 cases, 877 (72%) were still using low dose aspirin, 108 (9%) were recent discontinuers, and 41 (3%) were distant discontinuers.
Among the 5000 controls, 3784 (76%) were still using low dose aspirin, 357 (7%) were recent discontinuers, and 195 (4%) were distant discontinuers. Most recent discontinuers were non-adherent (68% of all recent discontinuers). Twelve per cent of patients switched to another antiplatelet or anticoagulant medication and 6% were using over the counter low dose aspirin. Individuals who had recently discontinued low dose aspirin had a significantly increased risk of non-fatal myocardial infarction or death from coronary heart disease compared with current users (rate ratio 1.43, 1.12 to 1.84; table 2). This increased risk among recent discontinuers was similar for different durations of treatment with aspirin (table 2), and for the different indications (data not shown). There was no significant association between distant discontinuation of low dose aspirin and the risk of non-fatal myocardial infarction or death from coronary heart disease (1.19, 0.82 to 1.71) compared with current low dose aspirin use. There was also no significant difference in risk in distant discontinuers compared with recent discontinuers (data not shown). The increase in risk was virtually unchanged when we changed the definition of recent discontinuation to a period of more than 15 days after the end of the last prescription for low dose aspirin (1.41, 1.12 to 1.76). The risk estimate was also similar when we further adjusted the multivariate model by the number of admissions to hospital in the year before the index date (1.46, 1.14 to 1.87).

Table 2 Risk of non-fatal myocardial infarction (MI) or death from coronary heart disease (CHD) among recent discontinuers of low dose aspirin

When we categorised recent discontinuers according to their reason for discontinuation, there was a significant increase in the risk of myocardial infarction or death from coronary heart disease in those who were non-adherent (rate ratio 1.54 (1.15 to 2.06) compared with current use), but not in those who were defined as discontinuers but who were subsequently found to be taking over the counter low dose aspirin (rate ratio 0.82 (0.28 to 2.38) compared with current use). Recent discontinuers of low dose aspirin had a significantly increased risk of non-fatal myocardial infarction compared with current users (rate ratio 1.63, 1.23 to 2.14; table 3). Based on an incidence of non-fatal myocardial infarction of about six per 1000 person years among current users of low dose aspirin, the incidence among recent discontinuers can be estimated as 10 per 1000 person years: an extra four cases of non-fatal myocardial infarction per year associated with discontinuation among 1000 aspirin users (this arithmetic is sketched below). The risk of non-fatal myocardial infarction was significantly increased in patients who did not adhere to treatment but not in those who were taking over the counter low dose aspirin (table 3). The risk of non-fatal myocardial infarction in recent discontinuers varied slightly across the different age groups (table 3). The risk was higher in women than in men, but this difference was not significant (P=0.16 for interaction).

Table 3 Risk of non-fatal myocardial infarction (MI) in recent discontinuers of low dose aspirin

There was no significant association between discontinuation of low dose aspirin treatment and death from coronary heart disease among recent discontinuers (rate ratio 1.07, 0.67 to 1.69) or distant discontinuers (1.02, 0.54 to 1.94) compared with current users.
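A quick numerical check of the absolute-risk arithmetic above, as a minimal sketch. The person-time denominator is back-calculated from the reported overall rate and is illustrative only, not a figure taken from the study tables.

```python
import math

# Excess risk among recent discontinuers: baseline rate times rate ratio.
baseline = 6.0          # ~6 non-fatal MIs per 1000 person years in current users
rate_ratio = 1.63       # recent discontinuers vs current users
print(round(baseline * rate_ratio, 1))        # ~9.8 -> "about 10 per 1000"
print(round(baseline * (rate_ratio - 1), 1))  # ~3.8 -> "an extra four cases"

# Generic incidence rate with a Poisson-based 95% CI, of the kind reported
# earlier (876 events; the person-years figure is our back-calculation).
events, person_years = 876, 127_500
rate = events / person_years * 1000
half_width = 1.96 / math.sqrt(events)         # SE of log(rate) = 1/sqrt(events)
lo, hi = rate * math.exp(-half_width), rate * math.exp(half_width)
print(f"{rate:.2f} ({lo:.2f} to {hi:.2f}) per 1000 person years")  # ~6.87 (6.43 to 7.34)
```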
There was also no significant increase in the risk of death from coronary heart disease when recent discontinuers were restricted to those who were truly non-adherent (rate ratio 1.03, 0.59 to 1.79). The risk of myocardial infarction or death from coronary heart disease seemed to be unaffected by adherence to other medications. For example, the risk in patients who discontinued low dose aspirin but were adherent to antihypertensive drugs (rate ratio 1.54, 1.00 to 2.38) was similar to that in the overall cohort. Similarly, there was no significant increase in the risk in patients who discontinued other drugs (see table C in appendix on bmj.com). Sensitivity analyses showed the association between discontinuing low dose aspirin and the risk of myocardial infarction or death from coronary heart disease to be robust. For the increased risk of coronary events among recent discontinuers to become non-significant, the analysis would have to be adjusted by an unknown confounder with an overall prevalence of 25% that is twice as common among discontinuers as among non-discontinuers and a major risk factor for coronary events (rate ratio of ≥3); a worked sketch of this rule-out calculation is given after the conclusions below.

Discussion

Patients with a history of cardiovascular or cerebrovascular disease in primary care who stop taking low dose aspirin are at a significantly increased risk of non-fatal myocardial infarction compared with those who continue such treatment. The increased risk is present irrespective of the length of time the patient had previously been taking low dose aspirin. This supports the results of previous studies in secondary care 6 8 and shows that they are applicable to the general population. An additional important finding is that the cumulative incidence of non-fatal myocardial infarction or death from coronary heart disease in patients taking low dose aspirin after a myocardial infarction was 4% during the mean follow-up of three years. This is consistent with data from clinical trials on the effectiveness of antithrombotic treatment for the secondary prevention of cardiovascular events, which indicate that 2-14% of patients (followed for a mean of 1 to 41 months) have a subsequent ischaemic event. 20 21 22 23

Strengths and weaknesses

A major strength of this study is that use of The Health Improvement Network enabled analysis of an extensive sample that was representative of the UK primary care population and had age and sex distributions similar to those in the national population. Also, the network includes all patients in participating practices who have been diagnosed as having a primary cardiovascular event and prescribed low dose aspirin to prevent a secondary event in primary care, supporting the broad external validity of these findings. Moreover, we observed the increased risk of non-fatal myocardial infarction in patients who were truly non-adherent but not in those who were found to be taking over the counter aspirin, which reinforces the internal validity of this study. A potential limitation of the study is that use of aspirin might have been misclassified in some cases. For example, the recording of a prescription for low dose aspirin in The Health Improvement Network does not necessarily mean that the patient actually took it, although it is likely that many did, as most patients had a prescription recorded almost every month. Lack of systematic recording of over the counter aspirin is another potential source of misclassification.
As described above, however, in the age range studied use of low dose aspirin for secondary prevention is predominantly prescription based. 13 15 Another limitation, common to all observational studies, is the potential for confounding. We have tried to control for this as much as possible by adjusting the multivariate analyses for demographic factors, traditional cardiovascular risk factors, comorbidity, and drug use. Nevertheless, it is not possible to control for all possible confounding factors, and it should be acknowledged that some of these factors might have had an impact on aspirin discontinuation rates. Sensitivity analyses showed that the association between discontinuation and the risk of myocardial infarction or death from coronary heart disease was robust. We therefore think it unlikely that the association is explained by unmeasured confounding.

Conclusions and clinical implications

We have shown that discontinuation of low dose aspirin increases the risk of non-fatal myocardial infarction in patients with a history of ischaemic events in primary care. The magnitude of this short term increase in risk after discontinuation is roughly the inverse of the benefit obtained with low dose aspirin treatment for secondary prevention. The implications of interrupting such treatment should be taken into account when managing the secondary prevention of cardiovascular events in primary care. Non-adherence was the most common reason for discontinuation of low dose aspirin. Additional research is required to determine why patients stop this treatment in the absence of a clinical reason. Patients might not adhere to treatment because they forget to take it, because they do not perceive that it has therapeutic benefit, or because of adverse events not discussed with their primary care practitioner. Recorded safety concerns were the second most common reason for discontinuation in this study. Upper gastrointestinal side effects, including peptic ulcer disease and bleeding, 24 25 are the most serious adverse effects related to aspirin use. 26 Low dose aspirin, however, is of substantial net benefit in secondary prevention because the reduction in the risk of major coronary events outweighs the increased risk of major gastrointestinal bleeding in patients at high risk of cardiovascular events. 1 22 23 27 Reducing the number of patients who discontinue low dose aspirin could therefore have a major impact on the benefit obtained with low dose aspirin in the general population. Research is now needed to evaluate whether efforts to encourage patients to continue prophylactic treatment with low dose aspirin will result in a decrease in non-fatal myocardial infarction.
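To make the rule-out sensitivity argument above concrete, here is a minimal sketch using the standard external-adjustment formula for an unmeasured binary confounder. The 45%/22.5% prevalence split is our back-calculation from the quoted 25% overall prevalence (assuming roughly one patient in nine is a discontinuer), not a figure from the study.

```python
def bias_factor(p_conf_exposed: float, p_conf_unexposed: float, rr_conf: float) -> float:
    """Multiplicative bias in an exposure-outcome rate ratio produced by an
    unmeasured binary confounder, given its prevalence in the exposed and
    unexposed groups and its rate ratio for the outcome."""
    return ((p_conf_exposed * (rr_conf - 1) + 1) /
            (p_conf_unexposed * (rr_conf - 1) + 1))

rr_observed = 1.43  # recent discontinuation vs current use, as reported above

# Confounder twice as prevalent among discontinuers as among non-discontinuers,
# consistent with ~25% overall prevalence (our illustrative split).
b = bias_factor(p_conf_exposed=0.45, p_conf_unexposed=0.225, rr_conf=3.0)
print(round(b, 2))                # ~1.31
print(round(rr_observed / b, 2))  # adjusted rate ratio ~1.09, close to the null
```

Only a confounder of this magnitude, a major coronary risk factor markedly over-represented among discontinuers yet unrecorded in the database, would move the observed association to the null; this is the sense in which the finding is described as robust.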
What is already known on this topic

- Low dose aspirin is standard treatment for the secondary prevention of cardiovascular disease, though up to half of long term users stop taking it
- Secondary care studies have shown that discontinuation is associated with an increased risk of ischaemic events and death

What this study adds

- Discontinuation of low dose aspirin increases the risk of non-fatal myocardial infarction or death from coronary heart disease by almost 50% in patients in primary care who have a history of ischaemic events
- There is no increase in the risk of death from coronary heart disease alone in patients who discontinue low dose aspirin
- The increased risk of non-fatal myocardial infarction after discontinuation is present irrespective of the length of time the patient had previously been taking low dose aspirin
- Research is now needed to test whether efforts to encourage patients to continue prophylactic treatment with low dose aspirin result in a decrease in non-fatal myocardial infarction

Notes

Cite this as: BMJ 2011;343:d4094

Footnotes

We thank Nesta Hughes and Catherine Hill, of Oxford PharmaGenesis, who provided writing support funded by AstraZeneca. Contributors: LAGR contributed to study design, data collection, statistical analysis, interpretation of data, and drafting the report; LC-S and EM-M contributed to data collection and statistical analysis and reviewed the report; SJ contributed to study design and interpretation of data, and reviewed the report. LAGR is guarantor. Funding: This study was funded by an unrestricted research grant from AstraZeneca Research and Development Mölndal. The sponsors played no part in the design or conduct of the study. Competing interests: All authors have completed the Unified Competing Interest form (available on request from the corresponding author); LAGR, LC-S, and EM-M work for CEIFE, which has received research funding from AstraZeneca; SJ is an employee of AstraZeneca. Ethical approval: This study was approved by the multicentre research ethics committee (08/H0305/49). Data sharing: No additional data available. This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited, the use is non-commercial, and is otherwise in compliance with the license.
(Medical Xpress) -- A new study published in the British Medical Journal suggests that people who have been diagnosed with heart disease and placed on a daily aspirin dose are at an increased risk of a heart attack if they stop taking the aspirin. Low dose aspirin, usually in a dose range between 75 and 300 milligrams, is prescribed to patients to reduce the risk of blood clots and a possible heart attack. However, for many different reasons, half of these patients eventually stop this routine. The researchers, led by Dr. Luis Garcia Rodriguez from the Spanish Center for Pharmacoepidemiologic Research, gathered data from medical records held in a large database in the United Kingdom called the Health Improvement Network. They looked at 39,513 patients between the ages of 50 and 84 who had been prescribed low dose aspirin between 2000 and 2007. What they discovered after a three year follow-up was a roughly 60 percent increase in the risk of a non-fatal heart attack among those patients who had discontinued their aspirin therapy. This works out to about four extra heart attacks each year for every 1,000 patients who cease taking their aspirin therapy. Rodriguez emphasizes that patients should never stop taking their aspirin therapy unless directed to do so by their physician. This research shows how a tiny pill taken once a day can make a big difference in decreasing the risk of another heart attack. The authors believe that more research needs to be done to identify the reasons that cause patients to stop their aspirin therapy. The researchers suggest that simply forgetting, not believing the drug is therapeutically beneficial, or adverse reactions that go undiscussed with a physician could be behind the discontinuation of aspirin treatment. They believe that more awareness needs to be raised about the importance of adhering to an aspirin therapy treatment plan, and they advise all patients currently on aspirin therapy to take their aspirin every day to reduce their risk of another heart attack.
doi: 10.1136/bmj.d4094
Medicine
Neuroscientists find memory cells that help us interpret new situations
Hippocampal neurons represent events as transferable units of experience, Nature Neuroscience (2020). DOI: 10.1038/s41593-020-0614-x , nature.com/articles/s41593-020-0614-x Journal information: Nature Neuroscience
http://dx.doi.org/10.1038/s41593-020-0614-x
https://medicalxpress.com/news/2020-04-neuroscientists-memory-cells-situations.html
Abstract The brain codes continuous spatial, temporal and sensory changes in daily experience. Recent studies suggest that the brain also tracks experience as segmented subdivisions (events), but the neural basis for encoding events remains unclear. Here, we designed a maze for mice, composed of four materially indistinguishable lap events, and identify hippocampal CA1 neurons whose activity is modulated not only by spatial location but also by lap number. These ‘event-specific rate remapping’ (ESR) cells remain lap-specific even when the maze length is unpredictably altered within trials, which suggests that ESR cells treat lap events as fundamental units. The activity pattern of ESR cells is reused to represent lap events when the maze geometry is altered from square to circle, which suggests that it helps transfer knowledge between experiences. ESR activity is separately manipulable from spatial activity, and may therefore constitute an independent hippocampal code: an ‘event code’ dedicated to organizing experience by events as discrete and transferable units. Main How is daily experience represented in the brain? Most daily experiences involve traveling to different places and/or seeing different things, and so contain a multitude of spatial and sensory variations (Fig. 1a , top and middle). Hippocampal cells monitor these continuous changes in space, passing time and sensory stimuli 1 , 2 , 3 , 4 , 5 . Fig. 1: Experimental design to study the segmentation of experience into units. a , Illustration of experience as a sequence of continuous, moment-to-moment variations in space, time and sensory stimuli (top) or as a sequence of discrete events as fundamental units of the experience (middle). In our behavioral task (bottom), a skeletal experience stripped of sensory and spatial differences was used to identify neuronal representations that track events as fundamental units. b , Schematic of implantation of the microendoscope into the dCA1 of Wfs1 -Cre mice with AAV2/5-Syn-flex-GCaMP6f-WPRE-SV40 virus injected in the dCA1 for imaging CA1 pyramidal cells. c , Top: a coronal section of hippocampus showing the area of cortex aspiration (white dotted line) and labeled Wfs1 + cells (green). The image is representative of aspiration surgeries from n = 14 mice. Bottom: Δ F/F calcium traces of n = 15 Wfs1 + (pyramidal) cells in the CA1, where red traces denote significant calcium transients. d , During the standard four-lap-per-trial experiment, reward was delivered to the animal at the beginning of lap 1 in the reward box once every four laps. e , CA1 calcium activity sorted by spatial position and lap number; cells showed activity in the same place on every lap, but displayed a higher activity level during a specific lap compared with other laps (263 cells from an example animal shown). The red label on the x axis indicates the reward box spatial bin, and the green label on the x axis indicates the 100-cm-long maze track. f , Trial-by-trial calcium activity of lap-specific neurons for example laps 1, 2, 3 and 4 (L1–L4), organized by the location of activity along the track and by lap number. The top panels show trial-by-trial calcium activities, while the bottom panels show trial-averaged calcium activities (mean ± s.e.m.). The standard error was cut off at 0 because negative activity does not exist. g , Model correction of lap-dependent neuronal activity. Top left: example neuron with the raw calcium activity level (light blue) sorted by the lap number and spatial bin.
Bottom left: the peak spatial bin was analyzed to detect lap-specific calcium activity. The pink trace plots the portion of the calcium rate explained by the linear model fitted to running speed and head orientation (see Methods ). Top and bottom right: the same as the left charts, but plotted with the lap-specific remaining calcium rate after the linear model was subtracted, resulting in the MC calcium activity. h , Summary statistics of the percentage of ESR cells in the entire CA1 pyramidal population that were tuned to laps 1, 2, 3 or 4 in the standard 4-lap experiment ( n = 14 animals). i – k , For this set of experiments, reward was given to the animal at every lap ( i ). The percentage of significant ESR cells (9%; 101 out of 1,072 cells) was significantly reduced when reward was given at every lap (1R/L) compared with the same animals running the standard 4-lap-per-trial task (1R/4L; 28%; 371 out of 1,328 cells) ( χ 2 = 128.7, *** P < 1 × 10 −16 , blue lines represent five mice) ( j ). Summary statistics of the percentage of ESR cells in the entire CA1 pyramidal population that were tuned to laps 1, 2, 3 or 4 during the reward every lap experiment ( k ). Meanwhile, others, based on recent human imaging studies 6 , 7 , 8 , 9 , have suggested that besides tracking the continuously changing sensory environment, the brain tracks daily experience as a chain of discrete, segmented subdivisions or events. Each event arises as a discrete epoch of experience, with its continuous sensory and spatial changes grouped together as a unit. It has been suggested that events are abstract and generalizable entities and can be divorced from specific sensory details 10 , 11 , 12 , 13 . Take dining in a restaurant as an example. Two different dinner experiences can share the same set of events; that is, eating an appetizer, main course and dessert, even if they occur at different restaurants, involve different foods and last varying amounts of time (Fig. 1a , middle). In other words, each of these events has a degree of invariance to variations in its actual physical and sensory contents. Instead, these events are defined by their abstract, ordered relationships to one another; that is, an appetizer is eaten first, followed by a main dish, which is followed by dessert. This allows events to describe widely varying experiences in a generalized manner. Encoding these abstract events is important for behaving perspicaciously in the world. Beyond representing continuous changes in space, there is evidence to indicate that hippocampal neurons encode broader episodic information, including changes in sensory cues 14 and past and future trajectories 15 , 16 . Hippocampal neurons encode this broader episodic information by changing the activity rate at a given place field (rate remapping). However, neural representations dedicated to encoding events as units of experience, separate from the tracking of the immediate continuous environment, remain poorly understood. In this study, we identified a hippocampal representation that treats events as discrete units of experience and show that these event representations can be transferred between different experiences. We also show that this representation is reciprocally and independently manipulable from continuous representation of space.
Results Task design to study the segmentation of experience into units With oft-used behavioral paradigms that involve changes in spatial 14 , 15 , 16 , 17 and sensory variables 18 , 19 , it is difficult to separately identify neurons that track discrete and unitary events from those that track continuous sensory stimuli or spatial differences. For these reasons, we designed a repetitive behavioral task in which sensory cues and spatial trajectories were kept constant for multiple events (Fig. 1a,d ). This allowed the influence of events as fundamental units to be separated from the influence of changing sensory or spatial information. In our task, mice repeatedly ran through a square maze subdivided into four laps per trial (Fig. 1d ). A reward was delivered at the onset of lap 1 of every trial, as a single temporal cue, with the subsequent three laps unrewarded. Salient stimuli can potentially serve as boundaries between events 9 , 17 ; therefore, it is possible that reward box visits may serve as event boundaries between lap events. In fact, these mice visited the reward box after every lap, regardless of whether a reward was delivered (Extended Data Fig. 1a , left). We tested whether there were specific neurons that track lap events as discrete units of experience. An adeno-associated virus (AAV) expressing the calcium indicator GCaMP6f (AAV2/5-Syn-flex-GCaMP6f-WPRE-SV40) 20 was injected into the dorsal CA1 (dCA1) of the hippocampus in Wfs1 (Wolframin-1) promoter-driven Cre transgenic mice 21 , 22 . A microendoscope was implanted above the dCA1 (ref. 23 ) to enable long-term calcium imaging in freely moving mice (Fig. 1b,c ). We recorded calcium activity and characterized the spatial selectivity of CA1 neurons (Extended Data Fig. 1c ) as mice navigated the square maze (Fig. 1d ). During testing, animals completed 15–20 trials (60–80 laps) in succession. On average, test mice took 98 s to complete one trial (Extended Data Fig. 1a , right). For each neuron during each of the four laps, we calculated its average calcium activity level during moving periods (>4 cm s –1 ) within spatial bins that tiled the maze ( Methods ). In total, 72% (2,509 out of 3,506) of CA1 cells from 14 animals were significant place cells. Some neurons were most active during reward consumption (lap 1) in the reward box (see Extended Data Fig. 1d for examples of reward-driven neurons); these cells were excluded from further analysis because they were active in direct response to the reward ( Methods ). In general, neurons that were active in the start box during non-rewarded laps, or in the maze, were active at the same location on every lap, but showed higher activity for a specific lap compared with other laps (Fig. 1e ; Extended Data Fig. 1b ). Here, we show the trial-by-trial calcium activities of example neurons for each of the four laps. Indeed, example neurons showed robustly higher activity during a particular lap across trials (Fig. 1f ; Extended Data Fig. 1e ). Since CA1 activity is sensitive to a variety of behavioral variables, including spatial location 2 , running speed 24 , 25 and head direction 25 , 26 (Extended Data Fig. 2b,c ), we fitted the activity of each neuron to a linear model incorporating the spatial location, head direction and running speed of the animal ( Methods ) to investigate whether these modeled variables were enough to account for the lap preference. 
We then calculated the remaining calcium activity across four laps that was not accounted for by the model; we refer to this activity as model-corrected (MC) calcium activity (Fig. 1g ). Each CA1 cell had a lap number during which the cell had the highest activity rate (its preferred lap), and in 30% of CA1 cells (1,055 out of 3,506 cells, n = 14 mice; see Supplementary Fig. 1 for results for individual mice), this peak, lap-specific MC activity was significantly different (outside the 95% confidence interval) compared to shuffles. Although CA1 calcium activity exhibited stochasticity in its trial-by-trial activation (Fig. 1f ; Extended Data Fig. 1e ), these significant lap-modulated CA1 cells exhibited a systematic lap-modulated pattern that was robust across trials (Extended Data Fig. 3b ). As a control comparison, the percentage of lap-modulated CA1 cells was reduced when reward was given every lap (9% (101 out of 1,072 cells), n = 5 animals; Fig. 1i–k ). Within the confines of the four identical lap task structure, this lap-event-specific MC activity manifested as a sequence of rate remapping 14 , 15 , 16 over a constant place-field location. These cells are henceforth called ESR cells. All the other CA1 cells were considered non-ESR cells. To study the lap-modulated calcium activity pattern for a given ESR cell, we compiled its lap-modulated MC activity in the peak spatial bin ( Methods ) for all four laps; this sequence of differential activity levels across the four laps is referred to as the ESR activity pattern (Fig. 1g , bottom). For the rest of this study, we investigated the features of the experience that give rise to the ESR phenomenon. For a set of five animals that were exposed to the four-lap-per-trial task for the first time, the percentage of ESR cells was 17% (176 out of 1,008 cells, n = 5 mice), but following 8 days of training on the lap task, the proportion rose to 29% for these same mice (335 out of 1,168 cells; Extended Data Fig. 4c–d ; χ 2 = 37.9, P = 7.4 × 10 −10 ), which shows that ESR activity patterns are learned. Correspondingly, mice did not run at increased speed during the fourth lap compared with the first lap during the first exposure to the four-lap task, but they ran significantly faster during the fourth lap compared with the first lap following 8 days of training on the four-lap task (Extended Data Fig. 4e ). In addition, once mice had been well trained, if reward was unexpectedly delayed by an extra lap on some trials, mice still ran significantly faster on the fourth lap compared with the first lap, but also ran significantly slower during this extra fifth lap compared with the fourth lap, which suggests that animals anticipated reward exactly at the end of the fourth lap (Extended Data Fig. 4b ). We tracked ESR cells across days (Extended Data Fig. 5 ) to examine the preservation of ESR activity patterns. For every individual ESR cell on day 1, we defined an index of how well its ESR activity pattern is preserved across days as the Pearson’s correlation of that pattern on day 1 versus day 2. The proportion of ESR cells that were highly preserved across days was significantly greater than chance for each of the separate populations of lap 1 cells, lap 2 cells, lap 3 cells and lap 4 cells (Fig. 2e ; see Fig. 2a–c for example cells; see Extended Data Fig. 6a for the analogous raw change in fluorescence (Δ F / F ) results without model correction; Methods ). A minimal code sketch of the model correction and this correlation index is given below.
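The model-correction and correlation computations described above can be sketched as follows. This is a minimal illustration of the analysis as we read it from the text, not the authors' code: array names and shapes are ours, the fit uses ordinary least squares, and the shuffle permutes lap labels across frames, a simplification of whatever shuffle procedure the study's Methods specify.

```python
import numpy as np

def mc_activity(calcium, covariates):
    """calcium: (n_frames,) activity trace; covariates: (n_frames, k) design
    matrix of behavioral variables (spatial bin, running speed, head direction).
    Returns the residual activity not explained by the linear model."""
    X = np.column_stack([np.ones(len(calcium)), covariates])  # add intercept
    beta, *_ = np.linalg.lstsq(X, calcium, rcond=None)
    return calcium - X @ beta

def esr_pattern(mc, lap_id, peak_bin_mask):
    """Mean MC activity in the cell's peak spatial bin for each of laps 1-4."""
    return np.array([mc[(lap_id == lap) & peak_bin_mask].mean()
                     for lap in (1, 2, 3, 4)])

def esr_correlation(pattern_day1, pattern_day2):
    """Pearson correlation of a cell's four-element ESR pattern across days;
    the study treats r > 0.6 as 'highly preserved'."""
    return np.corrcoef(pattern_day1, pattern_day2)[0, 1]

def is_esr_cell(mc, lap_id, peak_bin_mask, n_shuffles=1000, seed=0):
    """Shuffle test: is the peak lap-specific MC activity outside the 95th
    percentile of a null built by permuting the lap labels?"""
    rng = np.random.default_rng(seed)
    observed = esr_pattern(mc, lap_id, peak_bin_mask).max()
    null = [esr_pattern(mc, rng.permutation(lap_id), peak_bin_mask).max()
            for _ in range(n_shuffles)]
    return observed > np.percentile(null, 95)
```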
The ESR activity patterns for laps 1–4 were highly preserved even when half the trials were eliminated (Extended Data Fig. 7a–c ), as measured by the ESR correlation index. Generally, lap 1 cells were more highly represented (Fig. 1h ) compared with laps 2, 3 and 4 cells, although all four subpopulations of lap-specific cells were significantly preserved in their own right (Fig. 2e ; Extended Data Fig. 6a ). Fig. 2: Lap 1, 2, 3 and 4 ESR cells are reliably preserved across days. a – c , Trial-by-trial calcium activities of lap-specific neurons for example laps 1, 2, 3 and 4, matched across two consecutive days ( a ). The top panels show trial-by-trial calcium activities, while the bottom panels show trial-averaged calcium activities (mean ± s.e.m.). The number of trials for each cell is indicated in each panel. The standard error was cut off at 0 because negative activity does not exist. The spatial activity ( b ) and ESR activity ( c ), as measured by MC calcium activity, were calculated for these example neurons. For each example neuron, the Pearson’s correlation between its ESR activity pattern across days 1 and 2 was computed ( c , values in orange text). This ESR correlation serves as an index of how well the ESR pattern was preserved across days. d , The calcium activity of individual ESR cells sorted by spatial location on the track during day 1, with this cell order matched across days, affirms that place fields are preserved across days (622 cells in total, n = 8 animals). e , Summary data of Pearson’s correlation of ESR activity across days for these individual cells, plotted separately for lap 1, 2, 3 and 4 cell populations. The correlation of each individual cell’s ESR activity on day 1 with its own ESR activity on day 2 is shown in orange. The correlation of each cell’s ESR activity on day 1 with the ESR activity of arbitrary cells (that is, shuffled cell identities) from day 2 is shown in gray. The proportion of cells with highly preserved ESR patterns across days (that is, cells with Pearson’s r > 0.6, outlined in blue boxes) was significantly greater compared to shuffles. χ 2 and P values are shown in the figure ( n = 622 cells in total). f , Individual cells show both high ESR correlations and high spatial correlations across days ( n = 622 cells). ESR treats events as fundamental units of the experience Several key results support the notion that ESR tracks lap events as discrete units of the experience, separate from the continuous moments that also make up the experience. Since previous studies 2 , 3 , 4 , 5 have shown that the hippocampus encodes continuously changing variables, we investigated in detail the relationship between ESR activity and several continuous episodic variables. Instead of tracking lap events, could ESR cells be tracking a particular duration of time since the start of the trial? Time cells 3 , 4 require a reliable temporal delay period, otherwise they do not arise 4 . Because the animals in our task were allowed to behave freely, and took unpredictable and variable durations to complete the trials of the task (Extended Data Fig. 8a,b ) and ran at varying speeds (Extended Data Fig. 9a,b , purple), ESR cells were unlikely to act as time cells in our task. Could ESR cells instead be representing the total distance continually traveled along the course of the four-lap task since the start of the trial? When we elongated the maze in one dimension to twice the usual length (Extended Data Fig.
8c ; Methods , “Task-specific training”), the ESR activity patterns were still significantly preserved across days (Extended Data Fig. 8f ; see Extended Data Fig. 8d,e for example cells; see Extended Data Fig. 6b for raw Δ F / F results), which suggests that it is unlikely that ESR cells directly track the continuous distance traveled. We conducted another experiment to investigate whether laps are treated as units of experience. A four-lap-per-trial task was conducted in which the maze was elongated on pseudorandomly chosen laps of pseudorandomly chosen trials (see Fig. 3a , left, for trial types; see Fig. 3b for the full task schedule; Methods ). Importantly, this maze was largely stripped of predictability in traveled distance, but the four discrete-lap structure was preserved. A total of 27% of CA1 cells (306 out of 1,128 cells, n = 6 animals) active in all trial types of this experiment were significant ESR cells. The ESR activity pattern of these cells during the standard (short SSSS sequence, where S denotes a short lap) trials was preserved during each of the pseudorandomly elongated trial types (Fig. 3c–f ; see Fig. 3a , right, for an example cell; see Extended Data Fig. 6c for raw Δ F / F results). Thus, ESR activity of this sizeable population of CA1 cells was unperturbed by arbitrary and unpredicted variations within the relevant lap event or even variations within neighboring (preceding and succeeding) lap events (illustrated in Fig. 3a , middle). The ESR activity of these cells was preserved even during SSLL sequence (where L denotes a long lap) trials compared with LLSS trials (Fig. 3f ), trials that had the same total distance (Fig. 3a , middle) but a different internal segmentation into long and short laps. Therefore, ESR activity treats these lap events as separate units of the experience that are unaffected by spatiotemporal variations within the current or neighboring event units. Fig. 3: ESR treats events as fundamental units of experience. a , Left: four types of trials during the random maze elongation experiment: SSSS, LLSS, SSLL and LLLL. Each trial type has a consistent four-laps-per-reward structure despite variability within the lap events. Right: an example L2 cell showing its ESR activity and spatial activity during each of these different experiments. b , Schematic of the pseudorandom experiment schedule, whereby a total of 28 trials were performed, with 7 trials pseudorandomly represented for each of the four types. c – f , ESR correlations of individual cells during standard four-lap trials (SSSS) versus LLSS trials ( c ), SSSS versus SSLL trials ( d ), SSSS versus LLLL trials ( e ) or SSLL versus LLSS trials ( f ) (the same 306 cells were used, n = 6 mice, for each separate trial type comparison). The proportion of cells with highly preserved ESR patterns across trial types (Pearson’s r > 0.6, outlined in the blue boxes) was significantly greater compared to shuffles. ESR activity is transferable between experiences The results thus far suggest that the lap events tracked by ESR have a generalizable nature and are robust against continuous variabilities like time (Extended Data Fig. 8a,b ) or distance (Fig. 3a–f ; Extended Data Fig. 8c–f ). If this notion is correct, then we predict that ESR activity should exhibit a degree of independence from sensory and spatial content. To test this concept, we conducted a 2-day experiment with the four-lap-per-reward task on two geometrically distinct mazes.
The standard square maze was used on the first day and a circular maze was used on the second day (Fig. 4a ; Methods , “Task-specific training”). The spatial activity of ESR cells globally remapped on the circular maze compared with the standard square maze as a result of the different geometry of the maze (Figs. 4e,g (blue histogram); see Fig. 4c for example cells; see Extended Data Fig. 6d for raw Δ F / F results). Nevertheless, a significant proportion of ESR cells tracked circular laps using the same lap-specific activity pattern as the corresponding square maze laps (38% (176 out of 461 total cells), with ESR correlation > 0.6 across days; Fig. 4f ; see Fig. 4b,d for example cells). Thus, the knowledge of lap specificity acquired during the square maze was reused (transferred) when the animals were faced with the circular maze. Fig. 4: ESR tracks lap events despite changes in maze geometry. a , Schematic of the circular maze experiment. The standard square maze was used on day 1, and the circular maze was used on day 2. b – d , Example lap-specific neurons for example laps 1, 2, 3, and 4, with trial-by-trial calcium activities ( b ), spatial activities ( c ) and ESR activities ( d ) matched across the standard maze and the circular maze sessions. The number of trials for each cell is indicated in each panel in b . e , The calcium activity of individual ESR cells sorted by spatial location on the square linear track during day 1, with this cell order matched across days (461 cells, n = 5 mice). f , ESR correlations of these individual cells during the square maze versus circular maze sessions. The proportion of cells with highly preserved ESR patterns across days (Pearson’s r > 0.6, outlined in the blue boxes) was significantly greater compared to shuffles ( n = 461 cells total). g , Top: individual cells showed high ESR correlations, while spatial fields remapped, during the circular maze experiment ( n = 461 cells). Bottom: the same plot as above, but applied to the subpopulation of highly preserved (that is, ESR correlation > 0.6, see Fig. 2e for details) ESR cells; their spatial activity was also remapped ( n = 176 cells). To further test the generalizable nature of these lap events, we modified the repetitive square maze by adding spatial trajectory variation. A 2-day experiment was conducted in which the four-lap-per-reward structure was preserved on the second day, whereas the spatial trajectories were altered every two laps (Extended Data Fig. 10a ; Methods , “Task-specific training”). Here, a significant proportion of lap 1–4 ESR cells still had preserved ESR activity across sessions (Extended Data Fig. 10d ; see Extended Data Fig. 6e for raw Δ F / F results), and coded laps 1–4 despite the animals experiencing differential spatial trajectories on different laps. ESR tracks the relationships between events ESR does not reflect precise sensory information per se, so we hypothesized that it might instead reflect more generalized information from the structure of the task, such as the ordered relationships 27 , 28 between the event units. This was suggested by the fact that the four lap events of our task are identical to one another in their sensory and spatial content, yet ESR activity still reliably distinguished each of the four lap numbers (laps 1, 2, 3 and 4). To further test the hypothesis that ESR tracks the ordered relationships between events, we conducted two experiments.
First, we conducted an experiment in which the relationships between lap events were abolished. The standard four-lap-per-trial experiment was conducted on the first day, but the once-in-four lap delivery of the reward (which serves as a temporal marker) was perturbed on the second day and instead the reward was provided every lap (Fig. 5a ). We found that preservation of the ESR activity pattern across laps 1, 2, 3 and 4 was significantly abolished across days (Fig. 5b,c ). Fig. 5: ESR tracks the relationships between events. a – c , This set of experiments used the reward every lap design. a , The standard four-lap-per-trial experiment was used on day 1; the reward every lap experiment was used on day 2. b , ESR correlations across the standard four-lap versus reward every lap experiment (134 cells, n = 3 mice). See Fig. 2e for a more detailed description and the methods. c , Individual cells showed high spatial correlation while ESR representations were perturbed during the four-lap versus reward every lap experiment ( n = 134 cells). d , For the lap addition experiment, the standard four-lap-per-trial experiment was used on day 1, and the five-lap-per-trial experiment was used on day 2. e , ESR correlations across the four-lap and five-lap experiment sessions (382 cells, n = 4 mice). See Fig. 2e for description and methods. f , Individual L3 cells showed high spatial correlation while ESR representations were perturbed during the lap addition experiment (55 cells, n = 4 mice). g , Two example neurons matched across four-lap and five-lap experiment sessions that transformed from lap 3 to lap 4 preference. h , Percentage of cells that transformed from lap 3 to lap 4 preference (17 out of 55 cells, blue circles represent 4 mice). i , Two example neurons matched across four-lap and five-lap experiment sessions that transformed from lap 4 to lap 5 preference. j , Percentage of cells that transformed from lap 4 to lap 5 preference (42 out of 60 cells, blue circles represent 4 mice). k , The MC activity of cells from j during lap 4 on day 1 was significantly decreased during the same lap on day 2 (42 cells, Wilcoxon signed-rank test: z = 4.57). l , The MC activity of the cells from j during lap 4 on day 1 was not significantly different from the MC activity during lap 5 on day 2 (42 cells, Wilcoxon signed-rank test: z = −0.72). Box and whisker plots display the median, the 25th and 75th percentiles (box), and the maximum and minimum values (whiskers). *** P < 0.001; NS, not significant. Second, we conducted an experiment in which the relationships between the lap events were more subtly perturbed, and asked how this affects ESR activity. The standard four-lap-per-trial experiment was conducted on the first day and a non-rewarded fifth lap was added to all the trials on the second day before reward presentation (Fig. 5d , Methods, “Task-specific training”). On day 2, lap 1 and 2 cells had preserved ESR activity despite the added lap (Fig. 5e ; see Extended Data Fig. 6f for raw Δ F / F results). By contrast, lap 3 cells were abruptly and discretely perturbed (Fig. 5e,f ). Indeed, a significant proportion of lap 3 cells shifted to track lap 4 (Fig. 5g,h ; 31% (17 out of 55 cells), n = 4 mice). By contrast, only 9% (5 out of 55) of cells maintained their lap 3 preference. Similarly, lap 4 cells shifted to track lap 5 (Fig. 5i,j ). Although, overall, the pattern of most lap 4 cell activity across the 2 days was well correlated (Fig.
5e ), a significant proportion of lap 4 cells (70% (42 out of 60 cells), n = 4 mice, P <1 × 10 −4 compared to shuffling) showed a significant decrease in overall activity level during lap 4 on day 2 (Fig. 5k ; see Extended Data Fig. 6f , right, for raw Δ F / F results), but also showed an apparent concomitant restoration of activity level during lap 5 (Fig. 5l ; see Extended Data Fig. 6f , right, for raw Δ F / F results). This decrease in activity during lap 4, and shift in activity by precisely one lap unit, reflects the fact that the fourth lap is no longer rewarded and that an extra lap is needed to fulfill the total requirement of 5 laps to receive a reward. By contrast, only 3% (2 out of 60) of cells maintained their lap 4 preference. Therefore, ESR tracks the skeletal structure of experience, whereby it tracks events as fundamental units and the ordered relationships between them. ESR activity and spatial activity are jointly but separately represented in the same cells ESR activity and spatial activity occur jointly in the same cells. But what is the relationship between these two representations? Within the confines of the standard four-lap task (Fig. 1 ), ESR activity manifests as the differential activity rate during each of the different laps (Fig. 1e ) at the same spatial field location; therefore, it is a form of rate remapping 4 , 14 , 15 (Fig. 1e ; Extended Data Fig. 1b ). We then investigated whether this ESR activity pattern is necessarily tied to its particular place-field location by testing whether ESR activity could be reciprocally and separately manipulable from the spatial activity. When the maze and task were not altered across days, both ESR activity patterns and spatial activity patterns remained unperturbed (Fig. 2f,d ). By contrast, as we previously showed, the four-lap task conducted on a circular maze geometry distorted the spatial activity, but ESR activity remained largely intact (Fig. 4g,c ), which shows that ESR activity is not tied to its particular place-field location. Can the converse result, an alteration of ESR activity pattern without a concomitant alteration of the spatial activity pattern, be observed? First, perturbing the relationships between events by adding a lap (Fig. 5d ) altered some ESR activity patterns (Fig. 5g–l ; Fig. 5f , orange histogram), but spatial activity was still preserved in the same cells (Fig. 5f , blue histogram). Second, we investigated medial entorhinal cortex (MEC) axonal terminal inhibition in the CA1 and asked how it affects the ESR activity versus the spatial activity patterns. MEC input into the hippocampus has been implicated in the sequential organization of experiences 29 , 30 , 31 , although this is not the only brain region to be implicated 32 , 33 . Based on these earlier studies, a virus expressing inhibitory opsin (AAV2/2-EF1a-DIO-eNpHR3.0-mCherry) was bilaterally injected into the MEC subregion of pOxr1 -Cre mice 34 , 35 (Fig. 6a,b ). In addition, a virus expressing the calcium indicator GCaMP6f (AAV2/5-CamKII-GCaMP6f-WPRE-SV40) was unilaterally injected into the dCA1 of the same mice (Fig. 6a ). An optoendoscope was implanted above the dCA1 to enable long-term calcium imaging and optogenetic inhibition of the axonal terminals from MEC neurons in the dCA1. The mice ran 28–40 trials of the four-lap task, whereby the trials alternated between optogenetic inactivation (light-on) and no inactivation (light-off) (Fig. 6c ) of the MEC axonal terminals. 
Inactivation of MEC terminals in the dCA1 did not change the spatial activity patterns in the dCA1 (Fig. 6g,i (blue histogram); see Fig. 6e for example cells), which is consistent with previous studies 30 . However, this inactivation altered ESR activity patterns in the same cells (Fig. 6h,i (orange histogram); see Fig. 6d–f for example cells). By contrast, control mice that were injected with AAV2/2-EF1a-DIO-mCherry (that is, lacking the inhibitory opsin eNpHR3.0) had preserved ESR activity patterns across light-on and light-off trials (Fig. 6j,k ). Fig. 6: ESR activity and spatial activity are separately manipulable. a , Schematic of CA1 imaging and MEC terminal inhibition in the CA1, which were simultaneously conducted. b , An image of a coronal section of hippocampus showing the area of cortex aspiration (white dotted line) and MEC inputs labeled with eNpHR3.0 (red). DG, dentate gyrus; SLM, stratum lacunosum moleculare; SP, stratum pyramidale. Image is representative of aspiration surgeries from n = 6 pOxr1 -Cre mice. c , During the standard four-lap-per-trial experiment, optogenetics light-on and light-off conditions were alternated every two trials for a total of 32–40 trials. d – f , Lap-specific neurons for example laps 1, 2, 3 and 4 with trial-by-trial calcium activity ( d ), spatial activity ( e ) and ESR activity ( f ) matched across the light-off versus light-on trials. The number of trials for each cell is indicated in each panel in d . g , The calcium activity of individual ESR cells sorted by the spatial location on the track during day 1, with this cell order matched across days (182 ESR cells, n = 3 mice). h , ESR correlations of these individual cells across light-on versus light-off conditions. The proportion of cells with highly preserved ESR patterns across conditions (Pearson’s r > 0.6, outlined in the blue boxes) was not significantly different compared to shuffles. χ 2 and P values shown in the figure ( n = 182 cells). i , Individual cells showed high spatial correlations while ESR representations were perturbed across light-on versus light-off conditions ( n = 182 cells). j , ESR correlations across light-on versus light-off conditions for control mice injected with AAV2/2-EF1a-DIO-mCherry (164 ESR cells, n = 3 mice). k , Individual cells from these control mice showed high spatial correlations and ESR correlations across light-on versus light-off conditions ( n = 164 cells). Taken together, the lap-specific activity pattern (that is, ESR) is reciprocally and independently manipulable from spatial activity, although the two representations are jointly expressed in the same cells. Discrete event-modulated activity occurs together with continuous non-spatial activity ESR activity and spatial activity are jointly represented in the same cells. What happens when the main continuous changes in the experience are not spatial? To answer this question, we conducted another four-lap-per-trial experiment in which the first arm of the standard four-lap-per-trial maze was replaced by a treadmill (Fig. 7a ). Animals ran for 12 s on the treadmill at 14 cm s –1 for every lap. Monitoring the activity of neurons on a treadmill obviates the necessity of model corrections for running speed and head direction changes (Fig. 1g ) because they are nearly constant on the treadmill (Extended Data Fig. 9a,b ). Consistent with previous studies 3 , 4 , cells were active during this non-spatial treadmill period (Fig. 7b,d ).
During the period restricted to the treadmill, 20% of CA1 cells (243 out of 1,222 cells, n = 5 animals) had significantly lap-modulated activity (Fig. 7g ; Methods ; for examples, see Fig. 7e ) that exhibited a systematic lap-modulated pattern that was robust across trials (Fig. 7f ). The percentage of ESR cells was reduced when reward was given every lap (6% (42 out of 681 cells), n = 3 animals; Fig. 7h–j ). These results indicate that the tracking of experiences by the hippocampus occurs in a joint manner, with an event-specific representation alongside a continuous variable-tracking representation, even when the continuous experience is primarily non-spatial in nature. Fig. 7: Discrete ESR activity occurs together with continuous non-spatial activity. a , Schematic of the four-lap-per-trial experiment with a 12-s treadmill period on each lap. ESR activity in this experiment was only investigated during the treadmill period. b , CA1 calcium activity sorted by time (s) on the treadmill and by lap number; cells showed activity at the same time on every lap, but displayed a higher activity level during a specific lap compared with other laps (222 cells from an example animal shown). c , Cartoon of mouse running during the treadmill period. The maze and door were not transparent in the task, but are shown as transparent here for illustration of the treadmill below. d , e , Trial-by-trial calcium activity of example neurons that did not have lap preference ( d ) and lap-specific neurons ( e ) for example laps 1, 2, 3 and 4. The top panels show trial-by-trial calcium activities, while the bottom panels show trial-averaged calcium activities (mean ± s.e.m.). The number of trials for each cell is indicated in each panel. The standard error was cut off at 0 because negative activity does not exist. f , ESR correlations between even numbered trials versus odd numbered trials of individual ESR cells (243 cells, n = 5 mice) as an indicator for preservation between trials within the session. The proportion of cells with highly preserved ESR patterns across trials (Pearson’s r > 0.6, outlined in the blue boxes) was significantly greater compared to shuffles. g , Summary statistics of the percentage of ESR cells in the entire CA1 pyramidal population that were tuned to lap 1, 2, 3 or 4 in the 4-lap treadmill experiment (1,222 cells, n = 5 mice). h , For this task schedule, a reward was given to the animal following every lap. Every lap contained a 12-s treadmill period. i , The percentage of significant ESR cells was significantly higher during the four-lap-per-trial task (147 out of 696 cells) compared with the same animals during the reward every lap task (42 out of 681 cells), all during the treadmill period ( χ 2 = 65.0, P = 7.8 × 10 −16 , blue lines represent 3 mice). j , Summary statistics of the percentage of ESR cells in the entire CA1 pyramidal population that were tuned to lap 1, 2, 3 or 4 during the reward every lap experiment during the treadmill period (681 cells, n = 3 mice). *** P < 0.001. Discussion ESR tracks discrete units of experience While mice ran a four-lap-per-reward task, approximately 30% of individual CA1 pyramidal neurons exhibited calcium activity levels that were significantly higher during one of the four laps. This ESR activity tracked the identities of laps despite variations in the duration needed to cover the lap events (Extended Data Fig. 8a,b ) and even when pseudorandom variations in distance traveled were introduced during lap events (Fig.
3a–f ). Therefore, ESR activity treats lap events as separate event units that are unaffected by spatiotemporal variations within current or neighboring events. In CA1 pyramidal cells, the ESR activity and spatial activity are jointly represented, so CA1 cells are active at a particular spatial field on the maze at a differential activity rate. But ESR activity tracks lap identity even when the spatial field of a CA1 cell is moved to an arbitrary location on a different maze (Fig. 4 ). Taken together, this shows that ESR treats different locations of a particular lap as part of the same event unit. When ESR was changed, it changed as a discrete, lap-specific shift rather than gradually through the course of the task experience (Fig. 5e,f : laps 1 and 2 not shifted versus laps 3 and 4 shifted). Our finding of the ESR phenomenon is consistent with the theoretically derived concept 36 , 37 , and with data obtained from human imaging studies 6 , 7 , 8 , 9 , 13 , that alongside codes tracking continuous changes in spatial and sensory content, the brain tracks an experience by identifying one discrete unitary event after another as the entire experience progresses. These previous studies demonstrated the brain's involvement in event segmentation by showing heightened blood-oxygen-level-dependent activity at the boundaries between events 9 , an observation that was recently confirmed by electrophysiology experiments 17 . Our present study provides further insight into the encoding of events by identifying a representation within single cells (ESR activity) that is tuned to the event units themselves rather than solely the event boundaries. ESR activity treats these events as fundamental units that make up the experience; therefore, it could be part of the neurophysiological basis for encoding events by the brain. ESR tracks abstract, relational features of events and could support transfer learning Recent human imaging and computational studies 10 , 11 , 12 , 13 , including a computational model termed the “Eichenbaum–Tolman machine” 38 , have suggested that the hippocampus is involved in coding the abstract structure that constitutes an episodic experience. Consistent with these studies, our present work, at the single-cell level, suggests a hippocampal activity pattern (the ESR) that not only tracks events but also tracks these events as putatively abstract entities. Indeed, our results show that the ESR consistently tracked lap events, irrespective of concrete sensory and spatiotemporal variations within the events (Figs. 3 and 4 ; Extended Data Figs. 8 and 10 ). Furthermore, ESR tracks the abstract and iterative relationships between events. In fact, we showed that ESR activity reliably distinguished the four lap events that were materially identical to one another but differed in their iterative and ordered relationships to the preceding and succeeding lap numbers (Fig. 1f,h ). It is possible that the ESR differentially represents the four laps simply because it represents an internal variable like the gradually increasing level of motivation for acquiring another reward 39 . However, this is unlikely for two reasons. First, ESR activity patterns were not affected by arbitrary variations in the time and traveled distance required to complete the trial and receive reward (Fig. 3 ).
Even during LLLL sequence trials, which were presented to the animal in an unpredictable fashion and required it to run a maze twice as long as the standard maze to reach the reward (Fig. 3e ), the ESR activity patterns remained lap specific. Second, when an additional fifth lap was added to every trial in the task, lap 3 and 4 cells changed their activity on their respective lap, and shifted their activity by precisely one lap unit (Fig. 5d–l ). This result suggests that ESR cells differentiate between the lap numbers as the animal's running progresses, and reflects the precise knowledge that lap 4 is no longer rewarded and that a fifth counted lap is now necessary to receive a reward. Altogether, our results indicate that ESR tracks the skeletal structure of experience via events as abstract entities, with abstract relationships between them. In real life, there are few truly novel experiences, and most new experiences share either physical or abstract features with past experiences 10 , 12 , 28 , 40 , 41 . Learning a new task is improved through the transfer of knowledge from the web of related tasks that have already been learned, a phenomenon called “transfer learning” 42 , 43 , 44 , 45 . ESR appears to track the events as abstract units and their abstract relationships; therefore, it may facilitate the transfer of knowledge between experiences that share these abstract features even if concrete spatial and sensory stimuli are distinct. Indeed, when the geometry of the maze was shifted from square to circular under the four-laps-per-reward conditions (Fig. 4 ), ESR activity was significantly maintained (Fig. 4f,d,g (orange histogram)) across these different experiences, even though place-field activity was globally remapped (Fig. 4c,e,g (blue histogram)). ESR activity could therefore capture not only the abstract structure within an experience (Figs. 1d and 3a–f ) but also provide the elements (events) that can be reused during a different experience (Fig. 4 ). ESR activity is independently manipulable from continuous spatial activity Previous studies demonstrated that hippocampal cells change their firing rate in a particular spatial field in response to broader experiential changes, including sensory cues and past and future trajectory changes 1 , 4 , 14 , 15 , 16 , 18 , 46 , 47 , 48 , 49 . This has been termed “rate remapping”. Consistent with these studies, ESR also showed a pattern of rate remapping (Fig. 1e ) within the confines of the standard four-lap task. Yet, ESR activity possessed an additional property: the pattern of rate remapping (that is, ESR activity) was maintained (Fig. 4g ) even when the place-field location was moved to a new spatial location. These data indicate that ESR is not tied to a particular place-field location. In a reciprocal manner, ESR activity patterns could also be perturbed without a concomitant perturbation of spatial activity patterns, for instance, by the addition of a lap to the task (Fig. 5f ) and by optogenetically inhibiting incoming MEC fibers (Fig. 6 ). The influence of ESR activity on neuronal activity is therefore mechanistically separate from the influence of spatial activity. Together, these results demonstrate that ESR activity and spatial activity are reciprocally and independently manipulable, without affecting each other. What are the benefits of tracking experience in a joint manner with both discrete and continuous representations?
In fact, the two neural representations track different aspects of the same episodic experience. The tracking of immediate experience within an event likely requires a level of detail that would be best served by a continuous (spatial or non-spatial) neural representation. Conversely, tracking extended experience in a compact, flexible and generalizable way likely requires a level of abstraction above the moment-to-moment variational details and may be best served by encoding discrete and abstract event units. Ultimately, we found that ESR activity tracks events as fundamental units, is transferable between different experiences and is independently manipulable from continuous spatial activity. We propose that it might therefore constitute a code independent of the spatial code: an 'event code' that is dedicated to tracking events as discrete units of experience, alongside codes monitoring continuously changing variables. This event code may help the brain track experience in an efficient and flexible manner. Methods Animals All procedures relating to mouse care and treatment conformed to the Massachusetts Institute of Technology's Committee on Animal Care guidelines and NIH guidelines. Animals were individually housed in a 12-h light (19:00–7:00)–dark cycle. Twenty-four male Wfs1-Cre mice aged between 2 and 4 months were implanted with an Inscopix microendoscope into the CA1 and were food-restricted to 85–90% of normal body weight for the experiments. For each of the six main maze-manipulation imaging experiments (random maze elongation experiment, circular maze experiment, lap addition experiment, treadmill experiment, fixed maze elongation experiment and spatial alternation experiment), the number of animals used (at least four) is indicated in the main text for each experiment. In each of these experiments, at least two of the tested animals had not previously undergone any of the other manipulation experiments; the remaining animals had experience from the other manipulation experiments. Significant ESR cells were found during each of these maze manipulation sessions (Supplementary Figs. 2 and 3). Six pOxr1-Cre mice (three for the MEC terminal inhibition experiment and three control mice), aged 2–4 months, were also implanted with an Inscopix microendoscope into the CA1 for dual imaging and optogenetics experiments, and were trained in the same manner as the Wfs1-Cre mice. Histology and immunohistochemistry Mice were transcardially perfused with 4% paraformaldehyde in PBS. Brains were then post-fixed with the same solution for 24 h and sectioned using a vibratome. Sections were stained using 4′,6-diamidino-2-phenylindole (DAPI). Micrographs were obtained using a Zeiss AxioImager M2 confocal microscope and Zeiss ZEN (black edition) software. Preparation of AAVs The AAV2/5-Syn-flex-GCaMP6f-WPRE-SV40 was generated by and acquired from the University of Pennsylvania Vector Core, with a titer of 1.3 × 10 13 genome copies per ml. The AAV2/5-CamKII-GCaMP6f-WPRE-SV40 was generated by and acquired from the University of Pennsylvania Vector Core, with a titer of 2.3 × 10 13 genome copies per ml. The AAV2/2-EF1a-DIO-eNpHR3.0-mCherry was generated by and acquired from the University of North Carolina (Chapel Hill) Vector Core, with a titer of 5.3 × 10 12 genome copies per ml. Stereotactic surgery Stereotactic viral injections and microendoscope implantations were all performed in accordance with the Massachusetts Institute of Technology's Committee on Animal Care guidelines.
Mice were anesthetized using 500 mg per kg of avertin. Viruses were injected using a glass micropipette attached to a 10-µl Hamilton microsyringe through a microelectrode holder filled with mineral oil. A microsyringe pump and its controller were used to control the speed of the injection. The needle was slowly lowered to the target site and remained in place for 10 min after the injection. For CA1 imaging experiments, unilateral viral delivery into the right CA1 of the Wfs1-Cre mice was aimed at the following coordinates relative to Bregma: anterior–posterior (AP): −2.0 mm; medial–lateral (ML): +1.4 mm; and dorsal–ventral (DV): −1.55 mm. Wfs1-Cre mice were injected with 300 nl of AAV2/5-Syn-flex-GCaMP6f-WPRE-SV40. Approximately 1 month after injection, a microendoscope was implanted into the dorsal part of the CA1 of the Wfs1-Cre mice, aimed at the following coordinates relative to Bregma: AP: −2.0 mm; ML: +2.0 mm; and DV: approximately −1.0 mm. To implant at the correct depth, the cortex was vacuum-aspirated, resulting in the removal of the corpus callosum, which is visible under a surgical microscope as fibers running in the ML direction. The fibers of the alveus, which are visible as fibers running in the AP direction, were left intact by the procedure. For CA1 optogenetic and imaging experiments, 300 nl of AAV2/5-CamKII-GCaMP6f-WPRE-SV40 was unilaterally delivered into the right CA1 of pOxr1-Cre mice using the following coordinates relative to Bregma: AP: −2.0 mm; ML: +1.4 mm; and DV: −1.55 mm. Also, 300 nl of AAV2/2-EF1a-DIO-eNpHR3.0-mCherry was bilaterally delivered into the MEC of these mice using the following coordinates relative to Bregma: AP: −4.85 mm; ML: ±3.45 mm; and DV: −3.35 mm. Control pOxr1-Cre mice received the same CA1 viral delivery and a bilateral delivery into the MEC, but the bilaterally delivered virus was the control virus AAV2/2-EF1a-DIO-mCherry. Following these virus injections, the microendoscope lens for the dual optogenetic and imaging experiments was implanted in the same manner as described above for the CA1 imaging experiments. The baseplate for the miniaturized microscope camera was attached above the implanted microendoscope in the mice. After experiments, animals were perfused, and post hoc histological analyses were performed to determine the actual imaging position in the CA1 (Figs. 1c and 6b). Apparatus description and experimental conditions The apparatus was a square maze 25 cm in length and width, with a 5-cm-wide track and a height of 7 cm. A 10 cm × 10 cm square reward box was located in one corner of the square maze. Sugar pellets (Bio-Serve, F5684) were placed in the reward box at the beginning of lap 1 of each trial. Four versions of this apparatus were used. Version 1 was used in Figs. 1, 2, 5 and 6. Version 2, used in Fig. 3, could be elongated to twice the standard length (50 cm = 2 × 25 cm) but was otherwise identical to Version 1 in all other dimensions. Version 3, used in the treadmill experiment (Fig. 7), had an 18-cm-long treadmill installed in the arm of the maze that immediately faces the reward box (Fig. 7a); it otherwise used the same dimensions as Version 1. Version 4, used in Extended Data Fig. 10, had a figure-eight configuration, with the other square of the figure-eight being 25 cm in length and width; it otherwise used the same dimensions as Version 1. The circular maze, used in Fig.
4, was constructed to have the same total path length (that is, circumference) as the Version 1 square maze, and the same reward box size. All mazes were opaque and black. All maze experiments were performed under dim light conditions, with prominent visual cues within 50 cm on all sides of the box. Ca 2+ imaging in the maze lasted at least 20 min to collect a sufficient number of Ca 2+ transients to power our statistical analyses. The maze surface was cleaned between sessions with 70% ethanol. Immediately before and after imaging sessions, the mouse rested on a pedestal next to the maze. The basic task used in this manuscript was the standard four-lap-per-trial task, whereby animals traversed a square maze 25 cm in side length (a 1-m journey per lap) (Fig. 1d). The task was designed so that a sugar pellet reward was delivered manually to the reward box at the beginning of lap 1, once every 4 such laps; these 4 laps constitute a single trial (Fig. 1d). Identical motions were made on each lap, regardless of whether a pellet was delivered. During the testing phase, animals completed 15–20 such trials in repetitive succession without interruption. Any behavioral session in which the animal missed entering the reward box more than once across the entire sequence of runs (15–20 × 4 = 60–80 runs in total) was excluded. Crucially, for all experiments, animals first underwent task training before the final testing days. Training procedures are described below. Habituation to reward in the maze All behavior experiments took place during the dark cycle of the animals. All implanted mice were habituated to human experimenters and the experimental room. At the same time, they were mildly food-restricted and habituated to the sugar pellet reward. The criterion for habituation to sugar pellets and the maze was running counter-clockwise around the maze and eating a sugar pellet in the reward port of the maze (described above) in 15 successive repetitions without missing a single pellet. Reward periodicity training Animals were trained for approximately 8 days. If during any training day the mice appeared unmotivated or too satiated to complete the 15 trials, that training day was repeated the following day. Animals were pretrained for 2 days on the maze to habituate to receiving sugar pellet rewards in the reward port: on each of these days, they performed a one-lap-per-trial task; that is, they received a reward after every run around the maze, and ran 15 such trials. For the next 3 days (days 3–5), animals were trained to receive periodic rewards. On day 3, animals ran 15 trials of a 2-lap-per-trial task; that is, they received reward every 2 laps around the maze. On day 4, animals ran 15 trials of a 3-lap-per-trial task. On day 5, animals ran 15 trials of a 4-lap-per-trial task. Finally, animals ran 15 trials per day of a 4-lap-per-trial task for 3 more days (days 6–8), before they were considered well trained on the basic 4-lap-per-trial task. Untrained versus well-trained experiment protocol In the particular case of the untrained versus well-trained animal experiment (Extended Data Fig. 4c–e), animals that had only been habituated to the reward (described above) were immediately tested and imaged by running 15 trials of the standard 4-lap-per-trial task. Following this initial testing, these animals then underwent the reward periodicity training (described above).
Following periodicity training, animals were tested and imaged again for 15 trials of the standard 4-lap-per-trial task, to compare ESR cells seen after training with those seen when the animals were untrained. Reward on every lap experiment Animals in this experiment (Figs. 1i–k and 5a–c) were given a sugar pellet on every lap and completed 60–80 laps in total. This is equivalent to the total number of laps in the 15–20 trials of the 4-lap-per-trial experiment. This experiment did not require extra or task-specific training. Task-specific training Each of the main maze-manipulation experiments (random maze elongation experiment, circular maze experiment, lap addition experiment, treadmill experiment, fixed maze elongation experiment and spatial alternation experiment) required its own task-specific training after the habituation and reward periodicity training were completed. Random maze elongation experiment For the random maze elongation experiment (Fig. 3), animals were tested and imaged on 28 4-lap trials. The maze was manually elongated on pseudorandom laps of random trials using detachable walls, such that each of the four types of trials (SSSS, SSLL, LLSS and LLLL, where S denotes a short lap and L denotes a long lap) was presented in a pseudorandom order and appeared 7 times within the 28 trials. The entire sequence of 28 consecutive trials was as follows: SSSS, LLSS, LLLL, SSSS, SSSS, SSLL, SSLL, LLSS, LLLL, LLSS, LLSS, SSLL, LLLL, LLLL, SSSS, LLLL, LLLL, LLSS, LLLL, LLSS, SSSS, LLSS, SSLL, SSSS, SSSS, SSLL, SSLL, SSLL. Before the test day, animals underwent 3 days (days −3 to −1) of habituation training to the short and long laps, during which SSSS, SSLL, LLSS and LLLL trials were randomly presented. Circular maze experiment For the circular maze experiment (Fig. 4), animals were tested and imaged in a 2-day experiment. These animals underwent 3 days (days −3 to −1) of habituation training before the first test day. On the first two training days (days −3 to −2), animals ran 15 trials on the circular maze each day. On the third day of training (day −1), animals ran 15 trials on the square maze again to habituate them to the test day. On each of the test days, 1 h before experimentation, animals ran on the maze for five four-lap-per-reward trials. Five-lap-per-trial experiment For the five-lap-per-trial experiment (Fig. 5), animals were tested and imaged in a 2-day experiment. These animals underwent 3 days (days −3 to −1) of habituation training before the first test day. On the first two training days (days −3 to −2), animals ran 15 trials of a 5-lap-per-trial task each day. On the third day of training (day −1), animals ran 15 trials of a 4-lap-per-trial task again to habituate them to the test day. Optogenetics experiment Calcium imaging used the Inscopix nVoke miniature optoscope at 20 Hz. During periods of optogenetic manipulation, as defined by our protocol (Fig. 6c), the Inscopix nVoke miniature optoscope's orange light (590–650 nm) stimulation was turned on at 10 mW mm −2 power at a uniform and constant level. Orange light delivery was manually performed and was turned on or off at the start of the relevant trial, as soon as animals entered the box. For the optogenetics manipulation experiment, animals were tested and imaged in a single day. These animals underwent 2 days of habituation training before the first test day, with 2 days in between each of the training days to allow recovery from the light.
On each of the training days, animals ran 16 trials of a 4-lap-per-trial task per day, with light delivered according to the alternating schedule shown in Fig. 6c. Treadmill experiment For the treadmill experiment (Fig. 7), animals were tested and imaged in a single day. These animals underwent 6 days of habituation training running on the maze before the first test day. On the first day of training, animals ran 15 trials of a 1-lap-per-trial task. During each lap, the animal ran onto the first arm of the square maze and ran for 12 s (time period accurately indicated via Arduino, and manually initiated) on the treadmill at a constant 14 cm s −1, before running around the rest of the square maze and entering the reward box. On the next 5 days of training, animals ran 15 trials of a 4-lap-per-trial task, again with 12 s on the treadmill, to habituate them to the test day. Fixed maze elongation experiment For the fixed maze elongation experiments (Extended Data Fig. 8), animals were tested and imaged in a 2-day experiment. On day 2, 2 h before experimentation, animals were habituated (allowed to run) for 3 min on the distorted maze without any rewards. Alternation maze experiment For the spatial alternation experiment, animals were tested and imaged in a 2-day experiment. These animals underwent 5 days (days −5 to −1) of habituation training before the first test day. On the first four training days (days −5 to −2), animals ran 15 trials each day of a 4-lap-per-trial task in which the laps alternated in their spatial trajectories, as shown in Extended Data Fig. 10a. Path alternation was manually induced in the maze using detachable walls. On the fifth day of training (day −1), animals underwent 15 trials of an ordinary (non-alternating) 4-lap-per-trial task again to habituate them to the test day. Behavioral analysis and Ca 2+ event detection The position of the animal was captured using an infrared camera (Ordro infrared camcorder, 30 frames per second) via infrared light-emitting diodes (LEDs) attached to the animal. Calcium events were captured at 20 Hz on an Inscopix miniature microscope. Imaging sessions were time stamped to the start of the behavioral recording session by turning on an LED that was fixed to the animal at the beginning of the session and turning off the LED at the end. Analyses of the calcium images and extraction of independent neuronal traces were done as previously described 23, 50. Specifically, the calcium images were spatially binned by a factor of four along each dimension and then processed using custom-made code written in ImageJ (dividing each image, pixel by pixel, by a low-passed (r = 20 pixels) version of itself). The binned movie was then motion-corrected in Inscopix Mosaic software 1.2.0 (correction type: translation and rotation; reference region with spatial mean (r = 20 pixels) subtracted, inverted and spatial mean applied (r = 5 pixels)). A spatial mean filter was applied in Inscopix Mosaic (disk radius = 3), and a ΔF/F signal was calculated. Four hundred putative region of interest (ROI) locations were selected from the resulting movie using principal component (PC) analysis (PCA) and independent component (IC) analysis (ICA) (600 output PCs, 400 ICs, 0.1 weight of temporal information in spatiotemporal ICA, 750 iterations maximum, 1E-5 fractional change to end iterations) in Inscopix Mosaic software.
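The normalization and ΔF/F steps just described lend themselves to a compact sketch. The following Python code is an illustrative stand-in for the ImageJ/Mosaic pipeline, not the authors' code: the disk-shaped low-pass filter is approximated with a Gaussian, the filter radii simply follow the numbers quoted above, and the function name is hypothetical.

```python
# Minimal sketch of the movie preprocessing described above (illustrative only):
# 4x spatial binning, divisive normalization by a low-passed version of each
# frame, then a per-pixel dF/F signal.
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_movie(movie, bin_factor=4, lowpass_r=20):
    """movie: (T, H, W) array of raw fluorescence frames."""
    T, H, W = movie.shape
    # 4x spatial binning along each dimension (crop to a multiple of the factor)
    binned = movie[:, :H - H % bin_factor, :W - W % bin_factor]
    binned = binned.reshape(T, binned.shape[1] // bin_factor, bin_factor,
                            binned.shape[2] // bin_factor, bin_factor).mean(axis=(2, 4))
    # divide each frame, pixel by pixel, by a low-passed version of itself
    flat = np.empty_like(binned, dtype=float)
    for t in range(T):
        low = gaussian_filter(binned[t].astype(float), sigma=lowpass_r)
        flat[t] = binned[t] / np.maximum(low, 1e-9)
    # dF/F relative to the per-pixel temporal mean
    f0 = flat.mean(axis=0)
    return (flat - f0) / np.maximum(f0, 1e-9)
```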
ROIs, half-max thresholded, were discarded if they were not circular (length exceeding width by >2.5 times) or were smaller than 5 pixels (~12 µm) in diameter. For each remaining ROI (that is, a putative neuron), pixels within the ROI filter that were <75% of the filter's maximum intensity were zeroed. ROIs in the same session that were closer than 3 pixels (~7 µm) were considered the same cell rather than different cells. ΔF/F calcium traces were calculated for the resulting ROI filters for each processed movie. Slow variations in the calcium traces were eliminated by subtracting, at each timepoint, the median ΔF/F value calculated from the trace values within ±15 s of that timepoint, similar to a previously described method 23. The calcium trace was smoothed with a four-bin rolling average (50 ms per bin). Significant calcium transients (Fig. 1c) were defined as stretches of the trace that exceeded three standard deviations above baseline and remained above 1.5 standard deviations above baseline for at least 500 ms. The remainder of each ΔF/F calcium trace, outside its significant transients, was zeroed, in a similar way to a previously described method 51. The decay time of all calcium transients across n = 14 animals was calculated, and the median decay time (the time required for a calcium transient to decay to half its maximum height) of these 137,045 calcium events was 1.35 s (Extended Data Fig. 2a). Only cells that had a total of at least 25 significant transients during the entire session and non-zero activity in at least 10 separate trials were considered for further analysis in this study. In the sole case of the treadmill experiment, a lower threshold of at least 10 significant transients was used, because the cumulative duration of all the treadmill periods was only 12–16 min (15–20 trials).
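The transient criteria above can also be sketched compactly. The Python code below is a minimal illustration (not the authors' Matlab implementation): it assumes a 20-Hz ΔF/F trace, so one sample corresponds to 50 ms, and it estimates the baseline standard deviation from the whole detrended trace, which is a simplification.

```python
# Minimal sketch of significant-transient detection as described above:
# running-median detrend (+/-15 s), 4-bin rolling average, then keep stretches
# that stay above 1.5 s.d. for >=500 ms and peak above 3 s.d.; everything
# outside significant transients is zeroed.
import numpy as np

def significant_transients(dff, fs=20):
    half = 15 * fs  # +/-15 s window for the running median
    med = np.array([np.median(dff[max(0, i - half):i + half + 1])
                    for i in range(len(dff))])
    x = dff - med
    x = np.convolve(x, np.ones(4) / 4.0, mode='same')  # 4-bin rolling average
    sd = x.std()  # simplification: s.d. of the full detrended trace
    above = x > 1.5 * sd
    keep = np.zeros(len(x), dtype=bool)
    min_len = int(0.5 * fs)  # 500 ms at 20 Hz = 10 samples
    i = 0
    while i < len(x):
        if above[i]:
            j = i
            while j < len(x) and above[j]:
                j += 1
            # keep the stretch only if it lasts >=500 ms and peaks above 3 s.d.
            if j - i >= min_len and x[i:j].max() > 3 * sd:
                keep[i:j] = True
            i = j
        else:
            i += 1
    return np.where(keep, x, 0.0)
```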
ESR cell calculation Calcium event filtering For each CA1 cell detected, the calcium activity was filtered so that only activity occurring while the mice were in an active state (animal speed > 4 cm s −1) was further analyzed. The behaviorally tracked times of interest were also filtered in this way, considering only the times when animal speed was >4 cm s −1. The maze was divided into nine spatial bins: the reward box (a spatial bin of length and width 10 cm) was one spatial bin, and each of the four arm lengths of the maze was divided in half (8 spatial bins, each 12.5 cm in length and 5 cm in width). Next, for each identified cell, individual calcium activity epochs were analyzed by calculating the mean calcium activity in each of the nine spatial bins during each individual lap across trials. Thus, for a session of 15–20 trials, there were 540–720 calcium activity epochs in total (15–20 × 9 × 4 = 540–720). Each CA1 neuron possesses spatial tuning; in this model, the spatial tuning was captured by a parameter p, defined as the probability of non-zero calcium activity in each separate spatial bin. p was calculated for each neuron for each of its spatial bins, and it differed across spatial bins, reflecting the spatially modulated activity. Linear model fitting For each activity epoch for each neuron, the mean ΔF/F calcium activity, the mean speed (s) and the head direction tuning (o) were calculated. The non-zero calcium activity epochs were fit using a linear regression of the mean ΔF/F calcium activity versus speed and head direction tuning. In this regression, the coefficients a, b and c were fit as follows: $$R[\mathrm{Ca}] \sim a \times s + b \times o + c$$ (1) where R[Ca] is the mean ΔF/F calcium activity level of this neuron during this activity epoch, s is the mean speed of the animal during this activity epoch and o is the head orientation deviation from the preferred head orientation of this neuron during this activity epoch. In Matlab code, we used the function fitrlinear with lambda = 0.01 to fit equation (1) using regularized linear regression applied to the calcium activity epochs of all cells. Identification of ESR cells For each identified cell, we shuffled its calcium transients across the lap epochs such that the probability of assigning any particular calcium transient into any particular lap epoch varied according to equation (1). Calcium transients were only shuffled (using randperm in Matlab) between different epochs taking place in the same spatial field, to preserve p. We checked that this shuffle generation procedure gave a mean ΔF/F calcium activity level that matched the model-predicted (equation (1)) calcium activity level (Extended Data Fig. 2d). These shuffles simulated the calcium activity of the cell explained by spatial field (p), head direction (o) and animal speed (s). A total of 5,000 such shuffles were computed, and a 'model-explained mean ΔF/F calcium activity level' was computed as follows: $$R_{\mathrm{model}}[\mathrm{Ca},\,L=i,\,S=j] = \mathrm{mean}\left(R[\mathrm{Ca},\,L=i,\,S=j]\right)_{\mathrm{shuffles}}$$ where R[Ca, L = i, S = j] is the model-explained calcium activity, computed as the mean activity in lap i and spatial bin j across all the shuffles for this cell. For every neuron, for all four individual laps, the model-explained mean calcium activity level in each individual spatial bin was subtracted from the real mean ΔF/F calcium activity to yield the model-corrected (MC) ΔF/F calcium activity, which excluded spatial, mean speed and mean head direction tuning (Fig. 1g). Thus, this MC effect would mainly reflect the difference in calcium activity due to lap number. For every neuron, the model-explained mean ΔF/F calcium activity level was subtracted from the mean ΔF/F calcium activity level obtained from the 5,000 shuffles to yield a distribution of MC ΔF/F activities for chance-level statistics. Cells whose peak, lap-specific MC ΔF/F was outside the 95th percentile confidence interval of the shuffled MC ΔF/F were classified as significant ESR cells. If the peak MC calcium activity happened to occur during the reward-eating lap (lap 1) while the animal was in the reward box spatial bin, then the peak MC calcium activity from the next highest spatial bin was selected, because we excluded cell activity that was directly driven by reward eating.
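A compact sketch of this fitting-plus-shuffling procedure may help. The Python code below is an illustrative stand-in: scikit-learn's Ridge replaces Matlab's fitrlinear (the lambda-to-alpha correspondence is only approximate), the significance rule compares the peak MC activity against the 95th percentile of shuffle peaks as one concrete reading of the criterion above, and all variable names are hypothetical.

```python
# Minimal sketch of the ESR significance test described above (not the
# authors' code). Inputs are per-epoch summaries: act = mean dF/F, speed =
# mean running speed, orient = deviation from the cell's preferred head
# direction, lap/sbin = the epoch's lap number and spatial bin.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def esr_test(act, speed, orient, lap, sbin,
             n_laps=4, n_bins=9, n_shuf=5000):
    # fit equation (1) on the non-zero epochs (ridge regression)
    nz = act > 0
    model = Ridge(alpha=0.01).fit(np.c_[speed[nz], orient[nz]], act[nz])
    pred = np.clip(model.predict(np.c_[speed, orient]), 1e-9, None)

    def lap_bin_table(values):
        out = np.zeros((n_laps, n_bins))
        for l in range(n_laps):
            for b in range(n_bins):
                sel = (lap == l) & (sbin == b)
                out[l, b] = values[sel].mean() if sel.any() else 0.0
        return out

    obs = lap_bin_table(act)
    shuf = np.zeros((n_shuf, n_laps, n_bins))
    for k in range(n_shuf):
        reassigned = np.zeros_like(act, dtype=float)
        for b in range(n_bins):
            idx = np.where(sbin == b)[0]
            w = pred[idx] / pred[idx].sum()
            for a in act[idx]:
                if a > 0:
                    # move activity only within the same spatial bin (preserves
                    # p), with probability weighted by the eq. (1) prediction
                    reassigned[rng.choice(idx, p=w)] += a
        shuf[k] = lap_bin_table(reassigned)
    mc = obs - shuf.mean(axis=0)  # model-corrected (MC) dF/F per lap and bin
    null_peaks = (shuf - shuf.mean(axis=0)).max(axis=(1, 2))
    return mc, mc.max() > np.percentile(null_peaks, 95)
```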
Robustness of ESR phenomenon to different parameter choices To show that our experimental results were not simply due to our model correction, we reexamined the maze variation experiments using the raw ΔF/F activity of these ESR cells rather than the MC ΔF/F activity. These experiments showed similar results to those obtained when model correction was done (Extended Data Fig. 6). To further characterize the robustness of lap-specific activity across trials, an analysis of statistical power was conducted by randomly removing one-quarter of all trials (that is, four to five trials) during the standard four-lap experiment for each mouse. We note that 69% (726 out of 1,055) of previously statistically significant ESR cells retained their significance (Extended Data Fig. 3a, left). By comparison, within the previous subpopulation of non-ESR cells, 5% (131 out of 2,451) now reached significance, consistent with the expected false-positive rate. Even with this one-quarter of trials removed, ESR activity was still well correlated across days (Extended Data Fig. 3a, center and right). To examine whether results were affected by the spatial bin size that we used, we re-analyzed the standard four-lap experiment with a smaller spatial bin size, while keeping the rest of the procedures described above. We divided each of the four arm lengths into four equal bins (each 6.25 cm in length), and the box into four equal bins, for a total of 20 spatial bins. We obtained nearly identical experimental results to those with nine spatial bins (Extended Data Fig. 3c), so we used nine spatial bins for the rest of the experiments. To examine whether cells that showed lap-dependent activity were more generally stochastic, we looked at the subpopulation of ESR cells that had a higher consistency of activity, which we defined as cells that were active in the main spatial bin during at least half of all the trials. Again, this subpopulation of neurons had robustly preserved ESR activity across days during the standard four-lap experiment (Extended Data Fig. 3f). Spatial information The tracked positions were sorted into 16 spatial bins 6.25 cm × 5 cm in size around the track and 4 spatial bins 5 cm × 5 cm in size in the reward box, and the mean ΔF/F calcium activity of each CA1 cell was determined for each bin. Bins with animal occupancy of <100 ms were considered unreliable and discarded from further analysis. Without smoothing, the spatial tuning was calculated for each cell as follows: $$\sum_i p_i \lambda_i \log_2 \frac{\lambda_i}{\lambda}$$ where λ i is the mean ΔF/F calcium activity of a unit in the i-th bin, λ is the overall ΔF/F calcium activity and p i is the probability of the animal occupying the i-th bin, for all i. This formulation, derived from a previous study 52, was applied to calcium activity levels, which have a known monotonic relationship to spike rates 20. All event times of cells were shuffled 2,000 times, in an analogous manner to a previously described method 53, by circularly shifting the calcium activity time series around the position data by a random translation of >20 s and less than the session duration minus 20 s. Cells whose spatial information exceeded the 95th percentile of all shuffles were considered to have significant spatial information.
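For concreteness, the spatial-information measure and its circular-shift shuffle can be sketched as follows. This is an illustrative Python re-expression of the Matlab analysis, with hypothetical variable names; bins occupied for <100 ms are assumed to have been discarded beforehand.

```python
# Minimal sketch of the spatial-information calculation described above:
# SI = sum_i p_i * lambda_i * log2(lambda_i / lambda), with significance from
# circular time shifts of >20 s relative to the position data.
import numpy as np

rng = np.random.default_rng(0)

def spatial_information(act_per_bin, occupancy):
    """act_per_bin: mean dF/F per spatial bin; occupancy: time spent per bin."""
    p = occupancy / occupancy.sum()
    lam = (p * act_per_bin).sum()  # overall (occupancy-weighted) activity
    with np.errstate(divide='ignore', invalid='ignore'):
        terms = p * act_per_bin * np.log2(act_per_bin / lam)
    return np.nansum(terms)  # zero-activity bins contribute nothing

def si_significance(trace, pos_bin, occupancy, fs=20, n_shuf=2000):
    """trace: dF/F samples; pos_bin: the animal's spatial bin per sample."""
    n, n_bins = len(trace), len(occupancy)

    def si_of(tr):
        act = np.array([tr[pos_bin == b].mean() if (pos_bin == b).any()
                        else 0.0 for b in range(n_bins)])
        return spatial_information(act, occupancy)

    real = si_of(trace)
    # random circular shifts between 20 s and (session length - 20 s)
    shifts = rng.integers(20 * fs, n - 20 * fs, size=n_shuf)
    null = np.array([si_of(np.roll(trace, s)) for s in shifts])
    return real, real > np.percentile(null, 95)
```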
Registering cells across days Our approach was to register cells across days on the basis of the anatomy of the field of view seen on both days (that is, the pattern of blood vessels, among other features), rather than directly on the spatial locations of cells (Extended Data Fig. 5a). To register two movies across days, a mean projection of the ImageJ-filtered and motion-corrected movie (see above) on each day was computed, and the two projections were registered with respect to one another using Inscopix Mosaic motion-correction software. The distances between active cells from day 1 and their putatively matched cells on day 2 (650 cells, n = 4 animals) were calculated. The distribution of distances had a mode of 1.2 µm (Extended Data Fig. 5c, purple bars). By contrast, the distribution of distances between these same cells on day 1 and their nearest neighboring cells on the same day had a mode of 17.6 µm (Extended Data Fig. 5c, yellow bars). After an appropriate image registration was found for the fields of view based on anatomy, the ROIs on day 1 were identified, and calcium traces were calculated by applying the resulting day 1 ROI filters directly to the processed movie on day 2 at the matching anatomical location. This is exactly what would have been done if the day 2 movie had been a part of day 1. We note that the resulting spatial fields of registered cells were preserved across days (Fig. 2f), which provides an independent validation of our cell registration protocol. ESR activity and spatial activity correlations across days For ESR correlations across days, for a given significant ESR cell on day 1, its ESR activity pattern (defined in the main text) was concatenated into a vector. A similar vector was produced for this same cell on day 2. This was done for each significant ESR cell defined on day 1. The ESR correlation acted as an index of ESR preservation across days and was defined as the Pearson's correlation between the day 1 ESR activity vector and the corresponding day 2 ESR activity vector for the same cell. The day 2 ESR activity vector was produced from the same spatial bin as on day 1 to allow direct comparisons of ESR activity, except for the circular maze and spatial trajectory alternation experiments. In these cases, the spatial bins in which peak activity occurred were calculated anew, since the space was substantially changed in these experiments relative to room cues. ESR cells with Pearson's r > 0.6 were considered to have highly preserved (that is, highly correlated) ESR activity patterns across days. The distribution of ESR correlations when cell identities were shuffled across days was bimodal (Extended Data Fig. 3d). Because we required a criterion for shuffled cell pairs that were highly correlated merely by chance, we chose the threshold r > 0.6, which marks the boundary of the mode at r = 1. With other choices of threshold, r > 0.4 or r > 0.8, each of the lap 1–4 cell subpopulations was still highly preserved during the 2-day standard 4-lap task (Extended Data Fig. 3e). For spatial correlations across days, the raw calcium events, speed filtered (>4 cm s −1), were sorted into the nine spatial bins defined above, the calcium activity level of each neuron was determined for each bin, and an activity map composed of all the spatial bins was produced. The activity map for each individual ESR cell was treated as a vector (a list of numbers), and the Pearson's correlation between the spatial activity maps of the 2 days was calculated. For the single-day optogenetic inhibition experiment (Fig. 6), the Pearson's correlation was calculated between the spatial activity maps during the light-on trials versus the light-off trials.
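The cross-day correlation itself reduces to a short computation. The sketch below (Python, with illustrative names; not the authors' code) assumes each cell's ESR activity pattern, or its spatial activity map, has already been extracted as an array for each day.

```python
# Minimal sketch of the cross-day ESR correlation described above: each cell's
# pattern is flattened into a vector per day, the two vectors are
# Pearson-correlated, and r > 0.6 counts as highly preserved.
import numpy as np

def esr_correlation(pattern_day1, pattern_day2):
    """pattern_day*: arrays holding the same cell's activity pattern per day."""
    return np.corrcoef(pattern_day1.ravel(), pattern_day2.ravel())[0, 1]

def fraction_preserved(patterns1, patterns2, thresh=0.6):
    """patterns*: lists of per-cell patterns for matched cells on days 1 and 2."""
    rs = np.array([esr_correlation(a, b) for a, b in zip(patterns1, patterns2)])
    return np.mean(rs > thresh)
```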
Alternative analyses to characterize ESR preservation across sessions Our ESR correlation analysis showed that ESR activity patterns were highly preserved across sessions, so several analyses were conducted to provide more information about the nature of this ESR preservation. While ESR correlation was treated as a metric for quantifying preservation, we next quantified the percentage of ESR cells that exhibited significant ESR correlation according to a statistically defined criterion. We shuffled the calcium transients of individual cells during the second session 1,000 times, according to equation (1) and the description in "Identification of ESR cells". We then calculated the ESR correlation between each cell's session 1 ESR activity pattern and each of the session 2 shuffled ESR activity patterns. Finally, the percentage of cells whose ESR correlation was above the 95th statistical significance level of the shuffled ESR correlations was reported in Supplementary Fig. 4 for all major experiments. All major experiments showed similar results (Supplementary Fig. 4) to those obtained when we used the r > 0.6 criterion and compared ESR correlations versus shuffles (Supplementary Fig. 5). Besides quantifying the preservation of the overall ESR activity patterns by conducting ESR correlation analysis, we quantified the percentage of cells that preserved their lap preference (that is, cells whose activity was maximal on lap i and remained maximal on the same lap i during the second session). All major experiments showed similar results (Supplementary Fig. 6) to those obtained using ESR correlation (Supplementary Fig. 5). Venn diagram display The Venn diagram display in Supplementary Fig. 2 was constructed using the MathWorks Venn software package. Statistics Statistical analysis Statistical analyses were performed in Matlab (MathWorks). All statistical tests in this study were two-tailed. Single-variable comparisons were made with two-tailed t-tests. Group comparisons were made using analysis of variance (ANOVA) followed by Tukey–Kramer post-hoc analysis. The statistical analyses of calcium events are discussed in detail above. The numbers of mice for all experiments are reported in the figure legends. Sample sizes No statistical methods were used to predetermine sample sizes for single experiments, but the sample sizes were similar to or greater than those of other studies in the field (n = 3–4 animals per experiment, for example, as in refs. 4, 16, 32). Most of our experiments included n ≥ 4 animals unless otherwise indicated in the main text and figures. Replication and blinding All experiments reported here were reliably reproduced in individual mice for all calcium imaging and behavioral experiments (Extended Data Fig. 4; Supplementary Figs. 4–6). Data collection and analyses were not performed blind. In all experiments, animals simply ran on the maze while receiving reward. Computer-based analyses ensured unbiased data collection and analyses. Figure displays For display purposes, heatmap figures used spatial and temporal smoothing. Figure 1e and Extended Data Fig. 1b show raw calcium activities organized by spatial location and lap number, and used 6.25-cm spatial bins along the 100-cm linearized track that were normalized and Gaussian smoothed (σ = 25 cm). Figures 2d, 4e and 6g show raw calcium activities organized by spatial location, and used 6.25-cm spatial bins along the 100-cm linearized track that were normalized and Gaussian smoothed (σ = 25 cm). Figures 1f, 2a, 4b and 6d, and Extended Data Fig. 1d–f show raw trial-by-trial calcium activities organized by spatial location and lap number, and used 6.25-cm spatial bins without smoothing. Figures 2b, 3a, 4c and 6e, and Extended Data Figs. 6a, 8d and 10b display two-dimensional spatial plots with 1 × 1 cm 2 spatial bins and a Gaussian filter of σ = 4 cm. Figure 7b and Extended Data Fig.
9c show raw calcium activities organized by the time of the calcium activity on the treadmill and the lap number, and used 0.5-s time bins along the 12-s treadmill period that were normalized and Gaussian smoothed (σ = 2 s). Figure 7d,e shows raw trial-by-trial calcium activities organized by the time of the calcium activity on the treadmill and the lap number, and used 0.5-s time bins without smoothing. Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The data, reagents and materials that support the findings of this study are available from the corresponding authors upon request. Code availability The code that supports the findings of this study is available from the corresponding authors upon request. Change history 05 January 2021 A Correction to this paper has been published.
Imagine you are meeting a friend for dinner at a new restaurant. You may try dishes you haven't had before, and your surroundings will be completely new to you. However, your brain knows that you have had similar experiences—perusing a menu, ordering appetizers, and splurging on dessert are all things that you have probably done when dining out. MIT neuroscientists have now identified populations of cells that encode each of these distinctive segments of an overall experience. These chunks of memory, stored in the hippocampus, are activated whenever a similar type of experience takes place, and are distinct from the neural code that stores detailed memories of a specific location. The researchers believe that this kind of "event code," which they discovered in a study of mice, may help the brain interpret novel situations and learn new information by using the same cells to represent similar experiences. "When you encounter something new, there are some really new and notable stimuli, but you already know quite a bit about that particular experience, because it's a similar kind of experience to what you have already had before," says Susumu Tonegawa, a professor of biology and neuroscience at the RIKEN-MIT Laboratory of Neural Circuit Genetics at MIT's Picower Institute for Learning and Memory. Tonegawa is the senior author of the study, which appears today in Nature Neuroscience. Chen Sun, an MIT graduate student, is the lead author of the paper. New York University graduate student Wannan Yang and Picower Institute technical associate Jared Martin are also authors of the paper. Encoding abstraction It is well-established that certain cells in the brain's hippocampus are specialized to store memories of specific locations. Research in mice has shown that within the hippocampus, neurons called place cells fire when the animals are in a specific location, or even if they are dreaming about that location. In the new study, the MIT team wanted to investigate whether the hippocampus also stores representations of more abstract elements of a memory. That is, instead of firing whenever you enter a particular restaurant, such cells might encode "dessert," no matter where you're eating it. To test this hypothesis, the researchers measured activity in neurons of the CA1 region of the mouse hippocampus as the mice repeatedly ran a four-lap maze. At the end of every fourth lap, the mice were given a reward. As expected, the researchers found place cells that lit up when the mice reached certain points along the track. However, the researchers also found sets of cells that were active during one of the four laps, but not the others. About 30 percent of the neurons in CA1 appeared to be involved in creating this "event code." "This gave us the initial inkling that besides a code for space, cells in the hippocampus also care about this discrete chunk of experience called lap 1, or this discrete chunk of experience called lap 2, or lap 3, or lap 4," Sun says. To further explore this idea, the researchers trained mice to run a square maze on day 1 and then a circular maze on day 2, in which they also received a reward after every fourth lap. They found that the place cells changed their activity, reflecting the new environment. However, the same sets of lap-specific cells were activated during each of the four laps, regardless of the shape of the track. The lap-encoding cells' activity also remained consistent when laps were randomly shortened or lengthened. 
"Even in the new spatial locations, cells still maintain their coding for the lap number, suggesting that cells that were coding for a square lap 1 have now been transferred to code for a circular lap 1," Sun says. The researchers also showed that if they used optogenetics to inhibit sensory input from a part of the brain called the medial entorhinal cortex (MEC), lap-encoding did not occur. They are now investigating what kind of input the MEC region provides to help the hippocampus create memories consisting of chunks of an experience. Two distinct codes These findings suggest that, indeed, every time you eat dinner, similar memory cells are activated, no matter where or what you're eating. The researchers theorize that the hippocampus contains "two mutually and independently manipulatable codes," Sun says. One encodes continuous changes in location, time, and sensory input, while the other organizes an overall experience into smaller chunks that fit into known categories such as appetizer and dessert. "We believe that both types of hippocampal codes are useful, and both are important," Tonegawa says. "If we want to remember all the details of what happened in a specific experience, moment-to-moment changes that occurred, then the continuous monitoring is effective. But on the other hand, when we have a longer experience, if you put it into chunks, and remember the abstract order of the abstract chunks, that's more effective than monitoring this long process of continuous changes." Tonegawa and Sun believe that networks of cells that encode chunks of experiences may also be useful for a type of learning called transfer learning, which allows you to apply knowledge you already have to help you interpret new experiences or learn new things. Tonegawa's lab is now working on trying to find cell populations that might encode these specific pieces of knowledge.
10.1038/s41593-020-0614-x
Biology
Making systems robust
A universal biomolecular integral feedback controller for robust perfect adaptation, Nature (2019). DOI: 10.1038/s41586-019-1321-1 , www.nature.com/articles/s41586-019-1321-1 Journal information: Nature
http://dx.doi.org/10.1038/s41586-019-1321-1
https://phys.org/news/2019-06-robust.html
Abstract Homeostasis is a recurring theme in biology that ensures that regulated variables robustly—and in some systems, completely—adapt to environmental perturbations. This robust perfect adaptation feature is achieved in natural circuits by using integral control, a negative feedback strategy that performs mathematical integration to achieve structurally robust regulation 1 , 2 . Despite its benefits, the synthetic realization of integral feedback in living cells has remained elusive owing to the complexity of the required biological computations. Here we prove mathematically that there is a single fundamental biomolecular controller topology 3 that realizes integral feedback and achieves robust perfect adaptation in arbitrary intracellular networks with noisy dynamics. This adaptation property is guaranteed both for the population-average and for the time-average of single cells. On the basis of this concept, we genetically engineer a synthetic integral feedback controller in living cells 4 and demonstrate its tunability and adaptation properties. A growth-rate control application in Escherichia coli shows the intrinsic capacity of our integral controller to deliver robustness and highlights its potential use as a versatile controller for regulation of biological variables in uncertain networks. Our results provide conceptual and practical tools in the area of cybergenetics 3 , 5 , for engineering synthetic controllers that steer the dynamics of living systems 3 , 4 , 5 , 6 , 7 , 8 , 9 . Main Integral feedback control is arguably one of the most fundamental regulation strategies in engineering practice. From modern jetliners to industrial plants, integral feedback loops reliably drive physical variables to their desired values with great robustness and precision 10 . It is increasingly appreciated that nature’s evolutionary explorations had already discovered the same strategy, which has functioned at various levels of biological organization to achieve homeostasis and robust adaptation to perturbations 1 , 2 , 11 , 12 , 13 . Integral feedback occurs by sensing the deviation of a variable of interest (controlled variable) from the desired target value (set point), computing the mathematical integral of that deviation (error) over time, and then using it in a negative feedback configuration to drive processes that counteract the deviation and drive it to zero (Fig. 1a, b ). This can be achieved despite considerable uncertainty in process dynamics and constant or slowly varying perturbations. This fundamental network property is known as robust perfect adaptation (RPA), and the importance of integral feedback as a regulation strategy derives from its capacity to realize RPA. Given the complexity of required sensing and computation (for example, subtraction, integration and so on), the in vivo synthetic implementation and demonstration of full integral feedback has remained unrealized. In a recent theoretical work 3 , we introduced the antithetic feedback motif (Fig. 1c ) as a network topology that realizes integral feedback while lending itself to biomolecular implementation. We showed analytically that for cells with intrinsically noisy dynamics, this regulatory motif endows the network with guaranteed robustness properties for the population average and also for the single-cell time average. This motif subtly exploits intrinsic noise, using it as a stabilization force in scenarios in which noise-free dynamics exhibit oscillations. Fig. 
1: Integral feedback enables robust perfect adaptation. a, In a circuit without feedback regulation (open-loop circuit), the output is sensitive to external perturbations, which drive it away from the desired value (no adaptation). b, Integral feedback confers robustness to perturbations and keeps the output tightly regulated at desired levels (RPA). c, The antithetic integral control motif offers a biologically realizable integral feedback scheme using two regulator species. The output of interest, X L , is sensed by a reaction whose product, Z 2 , is produced at a rate proportional to X L (rate constant \(\theta \)). A reference reaction yields Z 1 with rate constant μ. Z 1 and Z 2 annihilate (or sequester) each other, an operation that is central to the integral feedback computation. In turn, Z 1 works as an actuator by affecting processes that lead to an increase in the production of the output of interest, thereby closing the feedback loop. In this scheme, as long as the closed-loop dynamics are stable, the steady-state value of the output is determined solely by the ratio μ/θ. Notably, it does not depend on the topology and parameters of the circuit of interest, which are usually uncertain and noisy, nor on any constant external perturbations that afflict the network. Consider the problem of controlling an uncertain and noisy biomolecular network by augmenting it with another feedback-controller network (see the Box 1 Figure, panel a). The control objective is to achieve RPA for some variable in the controlled network (output); that is, this variable must be steered to a desired set point and maintained there, even in the presence of unknown constant external perturbations and in spite of uncertainty in the network topology and parameters, including the parameters of the controller network. Insisting on robustness to topology and parameters is particularly important in synthetic biology, in which the controlled network is often unknown or poorly characterized and fine-tuning the parameters of the controller network can be extremely difficult. It is well established in control theory 14, 15 that in the noise-free setting, such general-purpose controller networks must implement an integral feedback component, but designing one is challenging because of the realizability and other constraints imposed by the biomolecular reaction network 16, 17. These challenges are further amplified if we take into account the noisy nature of the intracellular dynamics, in which not all integral feedback-controller implementations lead to RPA and hence the particular topology used is critical. In this stochastic setting, when the dynamics are stable, RPA refers to the steady-state population average of the output variable or—equivalently—its long-term single-cell time average. Given this context, several fundamental questions arise. These include how one can determine definitively whether a candidate network of any size achieves RPA in the presence of intrinsic noise; which architectural features are necessary and sufficient for a biomolecular feedback control topology to achieve RPA; whether all RPA-achieving controller topologies, regardless of their size, can be characterized; and—if the number of species needed to implement a controller topology is used as a measure of its complexity—which controller topologies achieve RPA with minimal complexity. Here we provide definitive answers to all these questions.
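To make the μ/θ set-point property concrete, here is a minimal deterministic simulation sketch of the motif in Fig. 1c. It is written in Python with illustrative parameter values (none taken from this paper): a single network species X, actuated by Z 1 , is subjected to a step perturbation in its degradation rate, and the output nonetheless relaxes back to μ/θ.

```python
# Minimal deterministic sketch of the antithetic integral feedback motif:
# Z1 is produced at rate mu (set-point encoding), Z2 at rate theta*X (sensing),
# Z1 and Z2 annihilate at rate eta, and Z1 actuates the network species X.
import numpy as np
from scipy.integrate import solve_ivp

mu, theta, eta, k = 10.0, 1.0, 50.0, 2.0  # illustrative values

def rhs(t, y):
    z1, z2, x = y
    gamma = 0.5 if t < 50 else 1.5  # step perturbation of the network at t = 50
    return [mu - eta * z1 * z2,          # set-point encoding + annihilation
            theta * x - eta * z1 * z2,   # sensing + annihilation
            k * z1 - gamma * x]          # the actuated network species

sol = solve_ivp(rhs, (0.0, 150.0), [0.0, 0.0, 0.0], max_step=0.05)
print("x at t = 150:", sol.y[2, -1], "; set point mu/theta =", mu / theta)
# Despite the change in gamma at t = 50, x relaxes back to mu/theta = 10.
```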
We prove that each RPA-achieving controller must necessarily embed the antithetic feedback motif, and thus that the antithetic feedback motif we introduced previously 3 is the minimal-complexity RPA-achieving controller for unknown networks with noisy dynamics. It is important to emphasize that these results hold only in the setting of noisy single-cell dynamics. If the network under consideration is noise-free, it is not necessary for a controller to embed antithetic feedback to achieve RPA 18, 19, 20, 21, but it is sufficient. These results are summarized in Box 1 and the detailed proofs are in the Supplementary Text. The original analysis of the antithetic feedback controller 3 was focused on the ideal case in which the production rates of the molecular species involved in the circuit are unsaturated, and there is negligible degradation of controller proteins Z 1 and Z 2 . In living cells, these assumptions are not satisfied, because all genes have a limited production capacity and, in the case of fast-growing cells, dilution of the controller proteins must be factored into the dynamics 16. This can introduce an error between the actual and the desired values of the controlled variable following a large disturbance. To investigate whether this error could be made small enough, we simulated a model of the antithetic motif in an E. coli implementation (Extended Data Figs. 1, 2, Supplementary Text section 1). Our analysis revealed that, using realistic model parameters, it is indeed possible to achieve small errors, which suggests that for all practical purposes the antithetic motif can realize perfect adaptation in growing cells. Central to our implementation of the minimal antithetic design 3, 4 are two controller proteins that annihilate (or stoichiometrically inactivate) each other; for example, by forming an inert dimer (Fig. 1c). This type of annihilation reaction has been used in existing synthetic devices 6, 22, and to realize it, we used a previously reported pair of Bacillus subtilis σ and anti-σ factors (SigW and RsiW), which have roles in cell envelope homeostasis and display annihilation in vivo 22, 23. In its natural context, RsiW stably sequesters and holds SigW inactive 23. Upon cell-membrane stress, RsiW is proteolytically cleaved and active SigW is released, controlling expression of multiple downstream genes 23. The high native stability of these σ and anti-σ factors enables practical realization of integral action in fast-growing cells, in contrast to RNA-based controllers 5, 8, 9, for which RNA instability poses a major limitation to integral action realization. In our setup, on one plasmid, the SigW-responsive promoter P sigW drives the controlled genes: the E. coli transcription factor gene araC and sfgfp (superfolder green fluorescent protein). AraC closes the loop by regulating P BAD promoter-driven rsiW expression in a concentration-dependent manner (Fig. 2a). sfgfp expression should be proportional to that of araC, which encodes the regulated protein of interest. To mitigate saturation effects, the final plasmid contains two P sigW - araC-sfgfp modules (Fig. 2a, Extended Data Fig. 2 and Supplementary Text). This circuit is tunable with arabinose (ARA), which increases AraC activity, and N-(3-oxohexanoyl)-l-homoserine lactone (HSL), which activates the constitutively expressed Vibrio fischeri quorum-sensing-pathway transcription factor LuxR to induce sigW expression driven by the lux promoter (P LUX ).
For comparison, an open-loop control circuit was constructed without P BAD - rsiW to disable feedback (Fig. 2b). Fig. 2: Synthetic antithetic integral feedback control circuit. a, Closed-loop circuit. The antithetic control system (beige shaded box) is tunable with HSL and ARA. The controlled circuit of interest (grey shaded box) consists of araC and sfgfp tagged with Mf Lon degradation tags. A negative perturbation is applied by aTc induction of Mf lon expression, resulting in AraC and sfGFP degradation. b, Open-loop circuit. Closed-loop feedback is disabled by deleting the anti-σ module (P BAD - rsiW ). c, Response of the closed-loop circuit to HSL and ARA. Heat map of the mean steady-state sfGFP fluorescence for corresponding concentrations of HSL and ARA, normalized to the maximum output, for four independent biological replicates. d, Dynamic response of the closed-loop circuit to HSL induction. sfGFP fluorescence is plotted as a function of time and fit with a cubic spline. Data show mean (coloured circles) ± s.d. for n = 3 independent biological replicates (grey dots). e, Output steady states of closed- and open-loop circuits in the presence of Mf Lon protease perturbation. sfGFP fluorescence normalized to the pre-disturbance level for each set of induction conditions. Data show mean ± s.d. for n = 3 independent biological replicates (grey circles). Two-tailed, unpaired, unequal variance t-test. Non-normalized data are shown in Extended Data Fig. 6b. f, Dynamic response of the closed-loop circuit to perturbation. Closed- and open-loop strains at steady state in 0.2% ARA and 5.5 nM HSL were perturbed at 1.5 h by aTc induction of Mf Lon expression. sfGFP fluorescence normalized to the mean at 0 h is plotted as a function of time and fit with a cubic spline for n = 3 independent biological replicates. Shaded regions indicate the s.d. Non-normalized data are available in Extended Data Fig. 6f. Testing of the closed-loop circuit shows that steady-state sfGFP levels can be adjusted with ARA and HSL by independently tuning sigW and rsiW expression (Fig. 2c), consistent with our theory and with recent work 3, 24. Increasing SigW production with HSL between 0 and 10 nM resulted in a corresponding increase in steady-state sfGFP, whereas increasing RsiW production with ARA between 0.05% and 0.2% resulted in a decrease in sfGFP. Dynamic time courses of closed-loop cells precultured in ARA and induced with HSL show unimodal fluorescence detectable over background and stable over long periods of time (Fig. 2d, Extended Data Fig. 3). Further, using cell growth rate as a burden indicator 7, in the conditions tested, differences in growth rates between set points or over time were insignificant (Extended Data Fig. 4). To test the response of the system to a constant perturbation, on a separate plasmid we used the orthogonal Mesoplasma florum protease Lon (Mf Lon) 25, driven by an anhydrotetracycline (aTc)-inducible promoter (P TET , Fig. 2a, b). Mf Lon recognizes cognate Pdt degradation tags 25 appended to the C termini of AraC and sfGFP and increases protein degradation (Extended Data Fig. 5, Methods). Steady-state sfGFP levels of both circuits with and without Mf Lon induction were measured, and disturbance rejection was quantified as the relative output decrease post-disturbance.
For multiple ARA and HSL conditions, the closed loop showed virtually no change in fluorescence after protease induction, whereas the open loop showed a greater than 40% decrease in most conditions tested, and neither circuit showed significant burden owing to protease induction (Fig. 2e, Extended Data Fig. 4). Dynamically, the closed loop adapts post-perturbation after a short transient (Fig. 2f, Extended Data Fig. 4). These results suggest that the closed loop is able to sense and compensate for AraC loss despite the continued presence of the perturbation, unlike the open loop. We further found that adaptation was maintained when decreasing HSL, increasing ARA or increasing both HSL and ARA from these conditions (Extended Data Fig. 6). However, increasing HSL and reducing ARA eventually leads to a discernible error (though always smaller than the open-loop error), which indicates an exit from the adaptation region—as qualitatively predicted by the simulations (Extended Data Fig. 6). The observed conditions in which perfect adaptation to the Mf Lon disturbance was achieved were limited to an approximately threefold output range (Extended Data Fig. 6), although the closed-loop circuit is capable of reaching higher set points in the absence of this disturbance. Moreover, the closed loop tended to show increased cell-to-cell variability compared to the open loop, and showed decreased cell-to-cell variability upon perturbation, consistent with theoretical analyses 26 (Extended Data Fig. 7). To demonstrate the wider application potential of our integral feedback controller and its inherent capacity to confer robustness, the closed- and open-loop circuits were modified to regulate cell growth rate by exchanging sfgfp for metE (methionine synthase, Fig. 3a, b). Methionine is required for cell viability and biomass accumulation. In a host strain that lacks metE, cell growth can be controlled in methionine-free medium through regulation of metE expression 27 (Fig. 3a, b). The closed-loop growth rate is tunable with HSL in cultures grown with a fixed ARA concentration (Fig. 3c, Extended Data Fig. 8). Furthermore, when a constant environmental perturbation was applied by changing cell incubation temperature from 37 °C to 30 °C (Fig. 3d, left), the closed loop induced with 10 nM HSL and 0.2% ARA maintained its growth rate. By contrast, growth of the open-loop strain slowed significantly at the lower temperature (Fig. 3d, right, Extended Data Fig. 8), which suggests that the perturbation in the closed-loop strain at this set point is compensated for at 30 °C through metE regulation. Given that the set point is determined ratiometrically by SigW and RsiW, within the region of adaptation, any such global perturbation (for example, extrinsic noise) that affects their expression in a similar way should also be rejected by the controller. Fig. 3: Growth-rate control. a, Closed-loop antithetic integral feedback control of growth rate. The antithetic controller (beige shaded box) controls the circuit of interest (grey shaded box), in which sfgfp was replaced with metE. MetE catalyses the last step of cobalamin-independent methionine biosynthesis by transferring a methyl group from 5-methyltetrahydropteroyl-tri-l-glutamate to l-homocysteine (Hcy, blue shaded box). b, Open-loop growth-control circuit. Feedback is disabled by deleting the anti-σ module (P BAD - rsiW ) from the closed-loop circuit. c, Growth rate is tunable in methionine-free medium.
Steady-state growth rates of the closed-loop circuit in a Δ metE host strain for the corresponding concentrations of HSL and 0.2% ARA ( n = 3). d , Closed-loop circuit is robust to temperature perturbation. Left, a constant external change in temperature is exerted on the closed-loop circuit strain. Right, steady-state growth rates of closed- and open-loop circuits in a Δ metE host strain grown at 37 °C and 30 °C in methionine-free medium containing 0.2% ARA and 10 nM HSL, normalized to the 37 °C rate for each circuit ( n = 3). Two-tailed, unpaired, unequal variance t -test. Non-normalized data are available in Extended Data Fig. 8a, b . Data show mean ± s.d. for n independent biological replicates (grey circles in c , d ). The response of the circuit to temperature perturbation at the higher set points in c was not characterized in this study. RPA is exhibited by many endogenous biological systems 2 , 12 , 28 , and understanding which network topologies allow RPA is of fundamental importance. This work differs from existing studies of this phenomenon 19 , 20 , 21 in two distinct ways. First, we study adaptation in the stochastic setting, in which the effects of intrinsic biochemical noise 29 are incorporated. Second, we address scenarios in which the controlled network has unknown players or interactions. We separate the 'controller network' from the 'controlled network' (see Box 1 ) and allow the latter to be completely arbitrary, while the former can have uncertain parameters. In this setting, our mathematical analysis proves that the integral feedback action necessary for RPA can only be exactly implemented with biomolecular reactions by a controller that embeds the antithetic feedback-controller topology. Notably, this controller topology is known to enhance intrinsic noise and cell-to-cell heterogeneity 26 , 30 , yet it has a universal role in ensuring RPA for the population average (or the single-cell time average) for arbitrary intracellular networks. Of note, the antithetic topology has been found in several endogenous pathways 3 , 31 . The rationally designed integral controller reported here represents a proof of concept that establishes the feasibility of engineering robust homeostasis in synthetic biology. A suitably optimized version of this controller with expanded dynamic range should find wide applications in all scenarios in which protein expression must remain tightly regulated at the desired level, independent of other intervening processes. In metabolic engineering, for instance, robust set-point regulation of key enzymes could be used to optimize metabolic fluxes to maximize yield and minimize host toxicity. Probing endogenous pathways could also benefit from such regulation, because compensatory cellular mechanisms tend to alter expression in complex ways. A mammalian analogue of our synthetic integral feedback module could also offer exciting perspectives for cell therapy in conditions that result from dysregulation of homeostasis, by enabling the implementation of a fully autonomous and personalized intervention. Box 1 Universality of the antithetic feedback controller motif Consider the problem of controlling an arbitrary biomolecular network, comprising species X 1 , …, X N , by connecting it with a feedback controller that is realizable as another network with species Z 1 , …, Z C ( a ).
The controlled network species can interact with the controller species via actuation reactions that do not affect the controller's state, and via a sensing reaction, which is catalysed by the output species of interest X L and acts by producing or degrading some controller species Z i (for example, Z i + X L → X L or X L → Z i + X L ). Additionally, there is a set-point encoding reaction that can produce or degrade some controller species Z j . The set-point encoding reaction and the sensing reaction follow mass-action kinetics with positive rate constants μ and θ , respectively. The sensed variable is the abundance of X L scaled multiplicatively by θ , and our control objective is to robustly steer its average value to the desired set point μ . Excluding the set-point encoding and sensing reactions, we assume that each of the remaining closed-loop network reactions depends on at least one parameter in the vector of parameters denoted by γ = ( γ 1 , γ 2 , …). To capture the effects of intrinsic noise, we model the reaction dynamics as a continuous-time Markov chain $(X_{\gamma}(t), Z_{\gamma}(t))_{t \ge 0}$. The output of a single cell at time t is the random state $X_{\gamma,L}(t)$, denoting the copy number of the output species X L , and the population-averaged output of several identical cells is given by the expectation $\mathbb{E}(X_{\gamma,L}(t))$. The controller's goal is to achieve RPA for the population average by ensuring that $\lim_{t\to\infty} \mathbb{E}(\theta X_{\gamma,L}(t)) = \mu$ holds, regardless of the initial conditions and the parameter vector γ . In other words, the population average adapts perfectly following a state perturbation or after a disturbance that alters one or more of the parameters in γ . This property also holds for the long-term average output of a single cell, given by $\lim_{T\to\infty} T^{-1} \int_0^T \theta X_{\gamma,L}(t)\,\mathrm{d}t$. We prove that all RPA-achieving controllers must have at least two species (that is, C ≥ 2), and that the species involved in set-point encoding (say Z 1 ) and output sensing (say Z 2 ) must be distinct. We further find a linear-algebraic condition that provides a simple parameterization of all feedback controllers of any size that achieve RPA ( Supplementary Text theorem 2.5). This condition can be further unravelled to prove that each RPA-achieving controller must embed an antithetic motif. Specifically, the species set of any RPA controller can be partitioned into three disjoint subsets $C_{+}$, $C_{-}$ and $C_{0}$ ( b ), containing species Z 1 , Z 2 and the null species ϕ , respectively, and there must exist an annihilation reaction that combines a species in $C_{+}$ with one in $C_{-}$ to produce a species in $C_{0}$ (shown with thick red arrows). Hence, any RPA-achieving controller can be viewed as an extension of the minimal antithetic feedback controller presented in ref. 3 ( Supplementary Text section 2.2.3 ). Moreover, it can be shown that the class of RPA-achieving controllers in the stochastic setting is strictly contained in the class of such controllers in the deterministic setting, in which the population-averaged dynamics coincide with the single-cell dynamics ( Supplementary Text section 2.2.4 ). Methods No statistical methods were used to predetermine sample size. The experiments were not randomized.
The investigators were not blinded to allocation during experiments and outcome assessment. Growth conditions Cells were grown in 14-ml tubes (Greiner) in LB (1% tryptone, 0.5% yeast extract, 1% NaCl) or M9 medium supplemented with 0.2% casamino acids, 0.45% LB, 0.4% glucose, 0.001% thiamine, 0.00006% ferric citrate, 0.1 mM calcium chloride, 1 mM magnesium sulfate and 20 μg/ml uracil (Sigma-Aldrich), and incubated in an environmental shaker (Excella E24, New Brunswick) at 37 °C with shaking at 230 r.p.m. unless otherwise indicated. Antibiotics (Sigma-Aldrich) were used at the following concentrations: chloramphenicol, 34 μg/ml; spectinomycin, 100 μg/ml; ampicillin, 100 μg/ml; and kanamycin, 40 μg/ml. ARA was obtained from Sigma-Aldrich. HSL and aTc were obtained from Chemie Brunschwig. For all experiments, growth medium containing HSL was stored at 4 °C for the duration of the experiment and prewarmed to 37 °C 30 min before use to minimize degradation of the inducer. Host strain and plasmid construction Restriction enzymes, T4 DNA ligase and Taq ligase used in this study were purchased from New England Biolabs. Herculase II Fusion DNA Polymerase (Agilent) and Phusion Polymerase (New England Biolabs) were used for cloning PCR. T5 exonuclease was purchased from Epicentre. DNA isolation was performed using ZR Miniprep Classic, DNA Clean and Concentrator and Zymoclean Gel DNA Recovery Kits (Zymo Research). DNA oligonucleotide primers were synthesized by Sigma-Aldrich, Integrated DNA Technologies and Microsynth. Sequences of all plasmid and strain constructs were confirmed by Microsynth. DNA was transformed into cells as previously described 32 . All strains, plasmids and primers used in this study are listed in Supplementary Tables 1 and 2 . Plasmid maps are presented in Extended Data Figs. 9 , 10 . Strains and plasmids were constructed using standard cloning methods. Construction details are described in section 3 of the Supplementary Text . Strains and plasmids used for fluorescence studies The host strain MG1655 Δ araCBAD Δ lacIZYA Δ araE Δ araFGH attB::lacYA177C Δ rhaSRT Δ rhaBADM Tn7::tetR , referred to as SKA703, was used for all sfGFP 33 circuits in this study. The negative-perturbation plasmid (pSKA417, GenBank accession no. MK775703) used in these studies was constructed by placing the M. florum Lon protease gene Mf lon 25 under a TetR-repressible promoter (P TET ) on a medium-copy plasmid with a p15A origin of replication and spectinomycin resistance (Extended Data Fig. 10a ). The closed- and open-loop plasmids used in this study were constructed modularly on a high-copy plasmid with a ColE1 origin of replication and ampicillin resistance. Initially, a closed-loop precursor plasmid (pSKA538, GenBank accession no. MK775704) was built, consisting of P sigW-sRBS - V5::araC::pdt#3c - Flag::sfgfp::pdt#1 , luxR - P LUX-RBS5000 - sigW , and P BAD-RBS5000 - rsiW (Extended Data Fig. 10b ). An open-loop variant precursor plasmid (pSKA539, GenBank accession no. MK775705) was also constructed with feedback disabled by removing the P BAD-RBS5000 - rsiW module (Extended Data Fig. 10b ). The final versions of the circuits used in this study (pSKA562 and pSKA563, GenBank accession nos. MK775706 and MK775707) were constructed by adding a second tandem copy of P sigW-sRBS -V5::araC::pdt#3c-Flag::sfgfp::pdt#1 to each plasmid (Extended Data Fig. 10c ).
Strains and plasmids used for growth-rate control studies A variant of SKA703 (SKA1328), in which the endogenous metE gene was deleted, was used for all growth-rate control experiments. Closed-loop growth-rate control plasmid pSKA570 (GenBank accession no. MK775708) and open-loop growth-rate control plasmid pSKA571 (GenBank accession no. MK775709) were constructed by exchanging sfgfp for metE with a weak ribosomal-binding site (Extended Data Fig. 10d ). Open-loop variant pSKA571-p15A (GenBank accession no. MK775710) is identical to pSKA571 but with a lower-copy p15A origin of replication (Extended Data Fig. 10d ). V5-AraC-Pdt#3c immunoblot During the course of this study, we observed that the efficiency of Mf Lon protease-dependent degradation was affected not only by the degradation tag sequence but also by the tagged protein itself. Originally, a Pdt#1 degradation tag was used for both AraC and sfGFP. However, immunoblot analysis of V5-AraC-Pdt#1 suggested that AraC was being more efficiently degraded by the Mf Lon protease than sfGFP (data not shown). Changing the AraC degradation tag from Pdt#1 to the weaker Pdt#3c tag brought the AraC degradation rate into closer agreement with that of sfGFP (Extended Data Fig. 5 ). SKA703 pSKA417 pSKA539 (open-loop, V5::araC::pdt#3c , Flag::sfgfp::pdt#1 ) was grown overnight in 5 ml M9–0.2% ARA and appropriate antibiotics. SKA703 (no-plasmid negative-control strain) was grown overnight in 5 ml M9–0.2% ARA (antibiotic-free). The overnight cultures were diluted into 5-ml aliquots of M9–0.2% ARA and appropriate antibiotics containing 5 nM HSL with or without 10 ng/ml aTc at a low optical density (OD; 0.00008 starting OD for SKA703 pSKA417 pSKA539; 0.00004 starting OD for negative-control SKA703), and incubated for 4.5 h before being rediluted into 5-ml aliquots of prewarmed matching medium at low OD (SKA703 pSKA417 pSKA539 at 0.00004 OD; SKA703 at 0.00001 OD), followed by another 4.5 h of incubation. After 9 h of induction, culture samples were measured in triplicate for sfGFP fluorescence by flow cytometry. Aliquots (4.5 ml) of each culture were pelleted at 4 °C. Cell pellets were resuspended in 1× lysis buffer (1× BugBuster (Merck and Cie), 1× cOmplete EDTA-free Protease Inhibitor (Roche Diagnostics), 60 mM Tris-HCl pH 6.8 (Sigma-Aldrich Chemie), 10% glycerol (Axon Laboratory), 2% SDS (Sigma-Aldrich Chemie), 5% β-mercaptoethanol (Sigma-Aldrich Chemie), 1 mM phenylmethylsulfonyl fluoride (Sigma-Aldrich Chemie)). Lysis buffer volume was calculated as OD × 4,500 μl/5 for all cultures. Cells were lysed at 95 °C for 10 min and lysates were used immediately after preparation. Freshly prepared 10-μl aliquots of lysates and 0.75 μl Odyssey Protein Marker (Li-Cor GmbH) were run on a NuPAGE 4–12% Bis-Tris mini gel (1 mm, 15 well, Invitrogen) with NuPAGE MES SDS Running Buffer (Invitrogen) at 200 V for 40 min under denaturing conditions. Proteins were transferred to Immobilon-FL PVDF membrane (Merck and Cie) using NuPAGE Tris-glycine–10% methanol transfer buffer (Invitrogen) at 30 V for 1 h (XCell II Blot Module, Invitrogen). The membrane was dried overnight post-transfer. The dried membrane was reactivated in methanol, washed with water and stained for total protein using REVERT Total Protein Stain (Li-Cor) as recommended by the manufacturer. The stained membrane was imaged immediately in the 700 nm channel with a Li-Cor Odyssey CLx equipped with Image Studio v.2.1.10 software (169-μm resolution, medium quality, auto intensity).
After imaging, the membrane was rinsed with water and then blocked for 1 h at room temperature in Li-Cor Odyssey Blocking Buffer (PBS). The membrane was incubated with 1:5,000 mouse V5 antibody (E10) (AB53418, Abcam) in blocking buffer with 0.1% Tween-20 (Sigma-Aldrich Chemie) for 1 h at room temperature before being washed five times with phosphate-buffered saline (PBS) containing 0.1% Tween-20, 5 min each wash, followed by a 1 h room-temperature secondary antibody incubation (1:10,000 Li-Cor goat α-mouse IRDye800CW 925-32210) in blocking buffer with 0.1% Tween-20. The membrane was washed five times with PBS containing 0.1% Tween-20, twice with PBS (no Tween-20), and scanned on a Li-Cor Odyssey CLx (700-nm and 800-nm channels, 169-μm resolution, medium quality, auto intensity). Total protein and V5 band intensities were quantified by densitometry using Li-Cor Image Studio v.2.1.10 software. Background subtraction was performed using median pixel values and a border width of three. All lanes were within the linear detection range of both the REVERT Total Protein Stain and V5 antibody staining, with the exception of the SKA703 negative control, which was below the linear detection range for V5 signal (data not shown). The V5 immunoblot signal was normalized to the total detected protein for each lane. High-throughput ARA and HSL titrations All titration assays to measure the response of closed-loop circuit strain SKA703 pSKA417 pSKA562 to different inducer concentrations were performed on a Tecan EVO 200 robotic platform in 96-well plates (Nunc). For each biological replicate, an independent master culture was grown at 37 °C with shaking for a minimum of 12 h to stationary phase in M9 medium with 0.2% ARA and antibiotics. The final volume of each well was 150 μl. To start the experiment, the master culture was diluted 1:10,000 in individual wells at 4 °C containing M9 minimal medium with antibiotics and the desired combinations of inducers (HSL and ARA). Each biological replicate was cultured in medium prepared independently from the other replicates. This first dilution was incubated at 37 °C for 6 h, and then diluted again 1:10,000 in a fresh plate at 4 °C, with each well being diluted into the corresponding well of the new plate (to maintain the same inducer concentrations). The second dilution was incubated at 37 °C for 5 h, after which the plate was analysed at room temperature with flow cytometry. The reported data are from four independent biological replicates pooled from experiments performed on two separate days. Step responses Five-millilitre aliquots of M9 medium with appropriate antibiotics and 0.15% or 0.2% ARA were inoculated with SKA703 pSKA417 pSKA562 (closed loop) from glycerol freeze stocks at an OD of 8 × 10 −8 . The cultures were incubated for 9 h overnight at 37 °C with shaking. In the morning, the overnight cultures were in early logarithmic phase. The overnight cultures were diluted into 5 ml fresh prewarmed induction medium at 0.005 OD and incubated at 37 °C with shaking. The 0.15% overnight ARA cultures were diluted into medium containing 0.15% ARA and 5 nM HSL. The 0.2% overnight ARA cultures were diluted into medium containing 0.2% ARA and either 5.5 nM, 7.5 nM or 8.5 nM HSL. Every 1.5 h, ODs were measured and all cultures were rediluted into matching prewarmed induction medium at 0.005 OD. At every dilution point, three 200-μl samples of culture were collected and measured by flow cytometry, and the average of the technical replicates was used.
ODs were used to calculate cell growth rate. The reported data are from three independent biological replicates pooled from experiments performed on the same day. For each biological replicate, a separate stock of medium was independently prepared and used only for that particular replicate. Every biological replicate was started from its own independently prepared overnight culture. Induction of negative perturbation SKA703 pSKA417 pSKA562 (closed loop) and SKA703 pSKA417 pSKA563 (open loop) strains were grown overnight in M9 medium containing 0.1%, 0.15% or 0.2% ARA. Overnight cultures were diluted into fresh medium with matching ARA concentrations and different concentrations of HSL (3.5 and 4 nM HSL with 0.1% ARA; 5, 5.5 and 7 nM HSL with 0.15% ARA; 5.5, 7 and 9 nM HSL with 0.2% ARA), with or without 10 ng/ml aTc, at 0.00004 OD and incubated at 37 °C with shaking. Additionally, the open loop was induced with 0.2% ARA and 0 nM HSL to match the unperturbed output fluorescence of the closed loop in 0.2% ARA and 7 nM HSL. At 4.5 h, each culture was diluted into fresh matching prewarmed induction medium with the same inducers at 0.00002 OD. At 9 h, three 200-μl aliquots of culture were collected and measured by flow cytometry, and the average of the technical replicates was used. The ODs of the cultures were read at 0, 4.5 and 9 h and used to estimate the growth rate for each culture. The reported data are from three independent biological replicates pooled from experiments performed on two separate days. For each biological replicate, a separate stock of medium was independently prepared and used only for that particular replicate. Every biological replicate was started from its own independently prepared overnight culture. Dynamic negative perturbation Five-millilitre aliquots of M9 medium with appropriate antibiotics and 0.2% ARA were inoculated with SKA703 pSKA417 pSKA562 (closed loop) and SKA703 pSKA417 pSKA563 (open loop) from glycerol freeze stocks at an OD of 8 × 10 −8 . The cultures were incubated for 9 h overnight at 37 °C with shaking. In the morning, the overnight cultures were in early logarithmic phase. The overnight cultures were diluted into 5 ml fresh prewarmed induction medium containing 0.2% ARA and 5.5 nM HSL at 0.005 OD and incubated at 37 °C with shaking. Every 1.5 h, all cultures were rediluted into matching prewarmed induction medium. At 4.5 h, cells were at or close to steady state (time 0 h in Fig. 2f ). After 6 h of incubation (time 1.5 h in Fig. 2f ), each culture was diluted into two separate aliquots of medium; 10 ng/ml aTc was added to one aliquot, and subsequent dilutions of these aTc-induced cultures were made with aTc-containing medium. At every dilution point, three 200-μl samples of culture were collected and measured by flow cytometry, and the average of the technical replicates was used. The reported data are from three independent biological replicates pooled from experiments performed on two separate days. For each biological replicate, a separate stock of medium was independently prepared and used only for that particular replicate. Every biological replicate was started from its own independently prepared overnight culture. Growth-rate control Growth-rate titration SKA1328 pSKA570 was grown overnight in M9 medium supplemented with 0.2% ARA.
Overnight culture was diluted 1:5,000 into 5-ml aliquots of methionine-dropout medium (M9 salts, 0.4% glucose, 0.001% thiamine, 20 μg/ml uracil, 0.00006% ferric citrate, 0.1 mM calcium chloride, 1 mM magnesium sulfate, and 19 amino acids at 40 μg/ml, methionine-free) containing 80 ng/ml methionine, 0.2% ARA, and 10, 20, 30, or 40 nM HSL. Cultures were incubated at 37 °C with shaking for 12 h. Cultures were then diluted into fresh 5-ml aliquots of prewarmed dropout medium with matching inducer concentrations at an OD of 0.0005. Immediately after dilution, 75-μl samples were removed and mixed with 79 μl of 500 μg/ml rifampicin (Sigma-Aldrich Chemie) in phosphate-buffered saline and 21 μl of 2-μm AccuCount Blank Particles (Spherotech) in a 96-well plate (Greiner) on ice. Samples were collected every hour for 9 h. Absolute cell counts were determined by flow cytometry and used to calculate the actual cell concentration over time. An example of the gating strategy is presented in Extended Data Fig. 8d . Steady-state growth rate was calculated by taking the logarithm of the absolute cell counts and performing linear regression using a time point interval in which cells showed stable linear behaviour (intervals indicated in the source data). The reported data are from three independent biological replicates pooled from experiments performed on the same day. For each biological replicate, a separate stock of medium was independently prepared and used only for that particular replicate. Every biological replicate was started from its own independently prepared overnight culture. Robustness of growth-rate control to different temperatures SKA1328 pSKA570 (closed-loop) and SKA1328 pSKA571 (open-loop) were grown overnight in M9 medium with 0.2% ARA and appropriate antibiotics. Overnight cultures were diluted 1:5,000 into 5 ml aliquots of methionine-dropout medium (as described for growth-rate titrations) containing 80 ng/ml methionine, 0.2% ARA, 10 nM HSL, and appropriate antibiotics. Cultures were incubated at 37 °C or 30 °C with shaking for 12 h to ensure that induced cells were in a state of active growth and that any residual methionine in the medium was fully metabolized. Cultures were then diluted into fresh 5-ml aliquots of prewarmed dropout medium (methionine-free) with matching inducer concentrations and antibiotics at an OD of 0.0005 for 37 °C cultures or 0.001 for 30 °C cultures. The 37 °C cultures were started at a lower OD than the 30 °C cultures to be able to follow the 37 °C cultures over the duration of the experiment without the cells growing out of early exponential phase. Immediately after dilution, 75-μl samples were removed and mixed with 79 μl of 500 μg/ml rifampicin (Sigma-Aldrich Chemie) in phosphate-buffered saline and 21 μl of 2-μm AccuCount Blank Particles (Spherotech) in a 96-well plate (Greiner) on ice. Samples were collected every hour for 12 h. Absolute cell counts were determined by flow cytometry and used to calculate the actual cell concentration over time. An example of the gating strategy is presented in Extended Data Fig. 8d . Steady-state growth rate was calculated by taking the logarithm of the absolute cell counts and performing linear regression as described above for growth-rate titrations. The reported data are from three independent biological replicates pooled from experiments performed on two separate days. For each biological replicate, a separate stock of medium was independently prepared and used only for that particular replicate. 
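The growth-rate estimate described above reduces to a linear fit on log-transformed absolute cell counts within the stable-growth interval. As a minimal sketch (the counts and sampling times below are invented for illustration; the intervals actually used are indicated in the study's source data):

```python
# Steady-state growth rate from absolute cell counts: linear regression on
# log(counts) versus time over an interval of stable linear log-growth.
import numpy as np

times_h = np.array([2, 3, 4, 5, 6, 7])                          # sampling times (h)
counts = np.array([2.1e5, 3.4e5, 5.6e5, 9.1e5, 1.5e6, 2.4e6])   # cells per ml (illustrative)

slope, intercept = np.polyfit(times_h, np.log(counts), 1)
print(f"growth rate = {slope:.3f} per hour "
      f"(doubling time {np.log(2) / slope:.2f} h)")
```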
Every biological replicate was started from its own independently prepared overnight culture. In an effort to match the growth rate of the closed loop at 37 °C, SKA1328 containing the reduced-plasmid-copy-number open-loop circuit pSKA571-p15A was used. To minimize the growth rate of the open loop, it was not induced with HSL, as leaky expression of metE was sufficient. Growth-rate control experiments were performed with 0.2% ARA and 0 nM HSL. Dilutions, sampling and growth-rate calculations were performed as described above for growth-rate titrations. The reported data are from three independent biological replicates pooled from experiments performed on two separate days. For each biological replicate, a separate stock of medium was independently prepared and used only for that particular replicate. Every biological replicate was started from its own independently prepared overnight culture. As a reference for the closed- and open-loop growth-rate control circuits, the endogenous metE + wild-type strain SKA703 with empty plasmid vector pSKA47 was also tested as described above, with modifications to account for its faster growth. The strain was grown overnight in M9 medium with 0.2% ARA and appropriate antibiotics. Overnight cultures were diluted 1:100,000 or 1:50,000 into 5-ml aliquots of methionine-dropout medium (as described for growth-rate titrations) containing 80 ng/ml methionine, 0.2% ARA, 10 nM HSL and appropriate antibiotics. The 1:100,000 dilutions were incubated at 37 °C and the 1:50,000 dilutions at 30 °C, with shaking, for 12 h. Cultures were then diluted into fresh 5-ml aliquots of prewarmed dropout medium (methionine-free) with matching inducer concentrations and antibiotics at an OD of 0.0001 for 37 °C cultures or 0.0002 for 30 °C cultures. Cultures were sampled as described above every 30 min for up to 8 h. Cultures at 37 °C were terminated at 6.5 h as they grew out of early exponential phase. Growth-rate calculations were performed as described above for growth-rate titrations. The reported data are from three independent biological replicates pooled from experiments performed on two separate days. For each biological replicate, a separate stock of medium was independently prepared and used only for that particular replicate. Every biological replicate was started from its own independently prepared overnight culture. Flow cytometry The samples from the high-throughput titrations in 96-well plates were analysed on an LSRII Fortessa flow cytometer (BD Biosciences) equipped with the FACSDiva v.8.0.1 software program and a high-throughput sampler. sfGFP was measured with a 488-nm laser and 530/30 bandpass and 505 long-pass emission filters; the voltage gains of the instrument were set as follows: forward scatter 500 V, side scatter 300 V, sfGFP 900 V. A minimum of 5,000 events were collected for each well using thresholds of 500 FSC-H and 500 SSC-H. Fluorescence measurements for the immunoblot lysate preparation cultures, step responses, steady-state negative perturbation, dynamic perturbation, and absolute cell count measurements for the growth-rate control experiments were performed using a CytoFlex S flow cytometer (Beckman Coulter) equipped with CytExpert v.2.1.092 software. sfGFP was measured with a 488-nm laser and 525/40 bandpass filter; the gain settings of the instrument were as follows: forward scatter 100, side scatter 100, sfGFP 500. Thresholds of 2,500 FSC-H and 1,000 SSC-H were used for all samples.
Fifty thousand events were collected for the lysate preparation, step response, steady-state disturbance rejection and dynamic perturbation experiments; 1,000 AccuCount Blank Particles were collected for the growth-rate control experiments. The raw flow cytometry data were gated with FlowJo v.10 (Treestar) (Extended Data Figs. 3a , 8d ) and cell autofluorescence was subtracted from sfGFP measurements. The data were then further processed using custom R v.3.5.0 scripts or plotted with GraphPad Prism 7. Background fluorescence is plotted in Extended Data Fig. 3b . Excel was used to perform t -tests (unpaired, two-tailed, unequal variance). Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability All relevant data are included as Source Data and/or are available from the corresponding author on reasonable request. Plasmid sequences are deposited in GenBank under the accession codes MK775703 – MK775710 . Strains and plasmids used in this study are available from the corresponding author on reasonable request. Code availability Code used for simulations is available on reasonable request from the corresponding author.
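For illustration, the post-gating processing and statistics described above (mean autofluorescence subtraction followed by an unpaired, two-tailed, unequal-variance t-test) can be sketched as follows. The fluorescence values are invented, and scipy's Welch t-test is used here as a stand-in for the Excel computation:

```python
# Subtract mean cell autofluorescence from gated sfGFP measurements, then
# compare two conditions with a Welch (unequal-variance) two-tailed t-test.
import numpy as np
from scipy import stats

autofluorescence = 120.0                                 # mean background of non-fluorescent cells
closed_loop = np.array([1050., 1010., 1080.]) - autofluorescence  # n = 3 replicates (made up)
open_loop = np.array([640., 600., 580.]) - autofluorescence       # n = 3 replicates (made up)

t_stat, p_value = stats.ttest_ind(closed_loop, open_loop, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```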
The human body keeps the calcium concentration in the blood constant, similarly to an aircraft's autopilot keeping the plane at a constant altitude. What they have in common is that both the body and the autopilot employ sophisticated integral feedback control mechanisms. Researchers in the Department of Biosystems Science and Engineering at ETH Zurich in Basel have now built such an integral controller completely from scratch within a living cell, as they report in the latest issue of the journal Nature. In the future, their synthetic biology approach could make it possible to optimize biotechnological production processes and to regulate hormonal activity through cell therapy. Constant despite environmental disturbances Marine engineers were the first to build such an integral feedback control system, using it to automate ship steering over 100 years ago. Since then, it has been applied wherever there is a need to maintain steady, stable conditions of direction, temperature, speed or altitude in the face of outside influences. The role of integration is that it allows the control system to make corrections based on both the amount and duration of the deviation from the desired constant value. In biology, too, mechanisms have evolved to maintain such conditions as a steady concentration of substances in the blood. Several years ago, researchers led by Mustafa Khammash, professor at the Department of Biosystems Science and Engineering, showed that these biological mechanisms are also examples of integral feedback control. "These kinds of integral controllers are extremely resistant to unexpected environmental disturbances," Khammash says, "which probably explains why the principle prevailed in evolution, and is why it is ubiquitous in technology." Interplay of two molecules Khammash and his interdisciplinary team of control theorists, mathematicians and experimental biologists have now engineered such an integral feedback controller in the form of a synthetic genetic regulatory network inside a bacterium. Their feedback mechanism relies on two molecules—A and B—that bind to each other to become inactive. Together, these two molecules have the ability to maintain a constant concentration of a third molecule, C. The system is designed so that molecule B promotes the production of C, while the production rate of A depends on the concentration of C. The feedback loop is such that when C is abundant, more A will be produced, which will inactivate more B, which in turn will cause production of C to fall. As a proof of concept, the ETH scientists made use of this principle to control the production of a green fluorescent protein in Escherichia coli bacteria. Thanks to the feedback controller, the bacteria produced a constant amount of the fluorescent protein—even when the scientists, who wanted to test the system, attempted to suppress its production using strong inhibitors. In a second experiment, the researchers produced a bacterial population that grew at a constant rate in spite of the scientists' attempts to disrupt growth, again in an effort to test the feedback mechanism. Improving biotech and therapies Biotechnology could now put this new control mechanism to work in bacteria to produce vitamins, medications, chemicals or biofuels, with the mechanism ensuring that the production rate within the bacteria is held constant at its optimum level. 
The ETH scientists are developing an analogous control mechanism for mammalian cells in subsequent research work, which will pave the way for further applications, including designer cells featuring genetic regulatory networks to produce hormones inside a patient's body. Among those who would stand to benefit from such an approach are people with diabetes or thyroid deficiency. The synthetic feedback controllers could also be used to improve cancer immunotherapy. "In this form of therapy, immune cells need to be active enough to fight the tumor, but not overactive, as they would then attack healthy tissue," Khammash says. "A mechanism like ours would be able to fine-tune their activity." Integral controller According to ETH Professor Mustafa Khammash, regulation of the calcium concentration in the blood is a good example with which to illustrate the principle of integral controllers in biology. This concentration is tightly regulated at a value of approximately 95 milligrams per liter of blood, regardless of how much calcium a person ingests in food. This rate even remains constant during lactation when lots of calcium is drawn from the blood in order to produce milk. "A constant level of calcium is essential to the proper functioning of many physiological processes, including muscle and nerve function or blood clotting," Khammash says. The hormone PTH works as one of two feedback agents in the body in this context: PTH promotes the mobilization of calcium from bone tissue into the bloodstream. The lower the concentration of calcium in the blood, the more PTH is produced by the parathyroid glands. "This is one part of the body's response when the levels of calcium are too low," Khammash says. But to bring the concentration of calcium completely back to normal after a sudden spike or drop, he adds, a second mechanism is required. This role falls to a biologically active form of vitamin D3, which promotes the absorption into the bloodstream of calcium from partially digested food in the small intestine. However, production of this active form of vitamin D3 in the kidneys is dependent on the concentration of PTH. Together, these two hormones are responsible for ensuring that the calcium concentration in the blood over time strays as little as possible and for as short a time as possible from its normal level—or, in other words, that the "integral of deviation with respect to time," as a mathematician would put it, approaches a constant. Therefore, such a control mechanism is called integral.
10.1038/s41586-019-1321-1
Medicine
Teamwork between cells fuels aggressive childhood brain tumor
Mara Vinci et al, Functional diversity and cooperativity between subclonal populations of pediatric glioblastoma and diffuse intrinsic pontine glioma cells, Nature Medicine (2018). DOI: 10.1038/s41591-018-0086-7 Journal information: Nature Medicine
http://dx.doi.org/10.1038/s41591-018-0086-7
https://medicalxpress.com/news/2018-07-teamwork-cells-fuels-aggressive-childhood.html
Abstract The failure to develop effective therapies for pediatric glioblastoma (pGBM) and diffuse intrinsic pontine glioma (DIPG) is in part due to their intrinsic heterogeneity. We aimed to quantitatively assess the extent to which this was present in these tumors through subclonal genomic analyses and to determine whether distinct tumor subpopulations may interact to promote tumorigenesis by generating subclonal patient-derived models in vitro and in vivo. Analysis of 142 sequenced tumors revealed multiple tumor subclones, spatially and temporally coexisting in a stable manner as observed by multiple sampling strategies. We isolated genotypically and phenotypically distinct subpopulations that we propose cooperate to enhance tumorigenicity and resistance to therapy. Inactivating mutations in the H4K20 histone methyltransferase KMT5B ( SUV420H1 ), present in <1% of cells, abrogate DNA repair and confer increased invasion and migration on neighboring cells, in vitro and in vivo, through chemokine signaling and modulation of integrins. These data indicate that even rare tumor subpopulations may exert profound effects on tumorigenesis as a whole and may represent a new avenue for therapeutic development. Unraveling the mechanisms of subclonal diversity and communication in pGBM and DIPG will be an important step toward overcoming barriers to effective treatments. Main pGBM and DIPG are a highly heterogeneous group of high-grade glial tumors with no effective treatments 1 . Integrated molecular profiling 2 , 3 , 4 , 5 , 6 , 7 has revealed unique, specific and highly recurrent mutations in genes encoding histone H3 variants that mark robust subgroups of pGBM and DIPG with distinct age of onset, anatomical distribution, clinical outcome, and histopathological and radiological features 8 , 9 . A paradigm shift away from extrapolating from inappropriate adult GBM data and toward a more pediatric-biology-specific approach to developing new therapies has been a positive consequence of the discovery of these mechanisms of tumorigenesis 10 , 11 , 12 . Despite these advances in our understanding of the unique biological drivers of these diseases 13 , a major challenge to improving outcomes for children with these tumors is likely one they share with morphologically similar tumors in adults: extensive intratumoral heterogeneity 14 . This has been demonstrated spatially by the application of genomic analyses to topographically distinct areas of the tumor at resection 15 , through longitudinal studies of tumor progression and recurrence 16 , and through single-cell RNA sequencing of bulk primary tumor specimens 17 . All of these analyses suggest the presence of multiple coexisting tumor subclones that may be important to the proliferative and invasive capacities of the tumor, as well as to cell fate decisions in response to the tumor microenvironment and the selective pressure associated with therapeutic intervention. The relative contributions of these subclones to the tumorigenic phenotype are unclear, as is the extent to which they interact during the tumor's evolutionary history; both are key factors in understanding the implications for new treatment strategies 18 . In adult GBM, multiple subclones may also be marked by differential, mutually exclusive gene amplification events present in an individual tumor 19 , 20 , 21 , an observation also reported in isolated specimens of DIPG 22 , 23 .
In these examples, cells harboring distinct receptor tyrosine kinase gene amplifications were found intermingled throughout tumor specimens in a manner that suggested an environment conducive to the coexistence of multiple cellular subpopulations 19 , 20 , 21 . Two-dimensional (2D) mapping of these subclones across specimens showed some evidence of a predilection of certain subclones for perivascular niches, invasive tumor fronts, or the periphery of necrotic areas 19 , 20 . In evolutionary biology terms, this stable coexistence in conjunction with a degree of specialization appears to imply cooperativity 24 . This posits a selective advantage for an interactive cellular network and promotes biological diversity within a tumor population as an important driver of the malignant phenotype in these cancers. With pGBM and DIPG harboring considerably fewer somatic mutations than adult GBM 13 , we sought to investigate the possibility of tumor heterogeneity reflecting cooperation of subclones in what we consider to be an ideal model system for cancers sharing these histologies. Through an integrated approach of single and multiple sequencing strategies of patient samples coupled with in vitro isolation of subclonal populations, we concluded that biological diversity is selected for across time and space, with genotypically and phenotypically distinct tumor compartments working together to enhance key tumorigenic features such as invasion and migration. Results pGBM and DIPG comprise multiple subclones We reanalyzed whole-genome and exome sequencing from 142 recently published pGBM and DIPG specimens for which matched germline data were available 2 , 3 , 4 , 5 , 6 , 7 . We calculated the cancer cell fractions (CCF) for all somatic single nucleotide variants (SNVs) and small insertions or deletions, taking into account the implied tumor cell percentage, overall ploidy, local copy number alterations and loss of heterozygosity 25 , 26 (Supplementary Table 1 ). In almost all cases, we observed a complex inferred subclonal architecture suggestive not of a single clonal expansion, but of multiple codominant subclonal populations, regardless of tumor location ( n = 93 DIPG, n = 20 other midline, n = 29 cerebral hemispheres) or histone mutation subgroup ( n = 10 H3.3 G34R ( H3F3A ), n = 61 H3.3 K27M ( H3F3A ), n = 23 H3.1 K27M ( HIST1H3B , HIST1H3C ), n = 48 histone wild-type) (Fig. 1a ). Despite this variability in the fraction of any tumor harboring a given mutation, at a gene level certain recurrent mutations were found to be consistently clonal ( H3F3A , HIST1H3B , HIST1H3C , ATRX , NF1 ), some were predominantly clonal with occasional subclonal examples ( ACVR1 , TP53 ), and some were frequently found in subclonal populations ( ATM , PIK3R1 , PPM1D , PDGFRA , BRAF , PIK3CA ) (Fig. 1b ). These data provide important evidence for the likely timing of these mutations during tumor evolution. Using the EXPANDS package 27 , 28 , we predicted from the sequencing data the absolute number of subclones present in each tumor sample, deriving a median of 6 (range 1–14), with more than 85% of tumors appearing to harbor 3–10 subclones (Fig. 1c and Supplementary Table 2 ). The percentage of clonal alterations ranged from 100% ( n = 1) to 5.2% (median = 35.0%) (Supplementary Fig. 1a ).
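For readers unfamiliar with CCF estimates, the sketch below shows a standard purity- and copy-number-aware conversion from variant allele fraction (VAF) to CCF. It is illustrative only; the pipeline used in the study follows refs. 25, 26 and may differ in details such as how variant multiplicity is estimated:

```python
# Illustrative CCF calculation from a variant's read counts.
# VAF = purity * multiplicity * CCF / (purity * CN_tumor + (1 - purity) * CN_normal),
# rearranged to solve for CCF.
def cancer_cell_fraction(alt_reads, total_reads, purity,
                         tumor_cn=2, normal_cn=2, multiplicity=1):
    """Estimated fraction of tumor cells carrying the variant (capped at 1)."""
    vaf = alt_reads / total_reads
    ccf = vaf * (purity * tumor_cn + (1 - purity) * normal_cn) / (purity * multiplicity)
    return min(ccf, 1.0)

# A variant at 20% VAF in a 50%-pure diploid tumor is present in ~80% of tumor cells.
print(cancer_cell_fraction(alt_reads=20, total_reads=100, purity=0.5))
```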
There was a direct relationship between the overall mutational burden (number of somatic coding SNVs) and number of subclones (Pearson r 2 = 0.2188, P = 4.36 × 10 −9 ), though with several outliers (Fig. 1d ). There were no differences in subclonal number between different anatomical sites (Fig. 1e ), despite the differing survival times by tumor location 29 . pGBM with H3.3 G34R mutations had a significantly elevated number of subclones compared with other tumors (median = 8.5, P = 0.044, t -test), while there were significantly fewer in infants (<3 years at diagnosis; median = 4, P = 0.0108) (Fig. 1e ). Plotting the number of subclones against hazard ratios for overall survival in a similar manner to that described in a pan-cancer analysis 28 , we identified tumors harboring more than 10 subclones to have the worst prognosis (relative risk = 3.3) (Supplementary Fig. 1b ). Although patients with H3.3 G34R mutations had a better prognosis ( P = 3.94 × 10 −6 , log-rank test), tumors with >10 subclones nonetheless showed a trend toward a shorter survival time ( P = 0.068, log-rank test) (Fig. 1f ). In multivariate analysis including location, age and subgroup, only H3.3 K27M mutations ( P = 0.000082, Cox proportional hazards model) and a number of subclones greater than 10 ( P = 0.0082) were independent predictors of shorter survival (Supplementary Fig. 1c ). Fig. 1: Pediatric GBM and DIPG harbor a complex subclonal architecture. a , Representative images (from n = 142) of six cases of pGBM and DIPG from different anatomical locations and with different histone H3 mutation status. For each case, a CIRCOS plot shows chromosomal ideograms on the outermost ring, with banding in black and gray and centromere in red, and highlights somatic SNVs and insertions or deletions on the next ring, DNA copy number changes (dark red, amplification; light red, gain; dark blue, homozygous deletion; light blue, single-copy loss) and loss of heterozygosity (yellow) on the inner rings, and intra- or interchromosomal translocations inside the circle (orange). The CCF for each somatic coding mutation is plotted as a histogram with a kernel density overplotted. In all cases, in addition to a peak of mutations present in 100% of cells (clonal), there is a complex pattern of subclonal mutations (<95% CCF) forming peaks at low frequencies within a given tumor. b , Violin plot of CCFs for a given series of gene mutations across all 142 independent cases of pGBM and DIPG (H3.3 G34R or G34V, n = 10; H3.3 K27M, n = 61; H3.1 K27M, n = 23; ATRX , n = 22; NF1 , n = 4; ACVR1 , n = 27; TP53 , n = ; ATM , n = 5; PIK3R1 , n = 8; PPM1D , n = 11; PDGFRA , n = 7; BRAF , n = 5; PIK3CA , n = 15). The shaded area represents a CCF of 95–100% to indicate a clonal mutation. Purported drivers such as histone H3 mutations, ATRX and NF1 are almost wholly found to be clonal (though there are single outliers in some instances). Other genes such as PIK3CA , BRAF and PDGFRA are frequently found to be mutated in smaller subclonal compartments of the tumors. Kernel densities of CCFs are plotted for all samples harboring a given mutation (number of independent cases listed on figure). c , The number of subclones present in 142 pGBM and DIPG is calculated from somatic mutation data using the EXPANDS package 27 and ordered first by the number of subclones (colored using a rainbow palette) and then by the proportion of the tumor defined by the main clone in each tumor. A single case was clonal, with more than 85% of cases harboring 3–10 subclones. 
d , Dot plot of the number of somatic coding SNVs ( y axis) against the number of subclones ( x axis), demonstrating a significant positive relationship (Pearson r 2 = 0.2188, P = 4.36 × 10 −9 , n = 142 independent samples). Horizontal bar, median. Individual tumors are colored by their histone H3 mutation status, with outliers often seen to harbor H3.3 G34R (blue). e , Clinical and molecular correlates of subclonal numbers. Box plots highlight lack of difference in the number of subclones on the basis of anatomical location, but an increased number in H3.3 G34R tumors ( P = 0.044, t -test) and a reduced number in infants (<3 years, P = 0.0108, t -test) across all n = 142 independent samples. The thick line within the box is the median, the lower and upper limits of the boxes represent the first and third quartiles, whiskers 1.5 times the interquartile range, and individual points outliers. Hemi, hemispheric. f , Prognostic implications. Kaplan–Meier curves demonstrate that H3.3 G34R tumors have a longer overall survival than other pGBM and DIPG ( P = 3.94 × 10 −6 , log-rank test); however, despite the association of this subgroup with an increased number of tumor subclones, an elevated subclonal diversity shows a trend toward shorter survival across all pGBM and DIPG ( P = 0.068, log-rank test). Comparisons included all n = 142 independent samples. * P < 0.05. ** P < 0.01. The tumor cohort studied was heavily enriched in DIPG samples, and owing to the unresectability of these lesions, it comprised a mixture of pre-treatment biopsy samples and post-treatment autopsy samples 2 , 3 , 5 , 6 , 7 . We observed no systematic differences in subclonal architecture when comparing samples taken at these differing time points, regardless of diagnosis or histone mutation status (Supplementary Fig. 1d ). We were able to assess this directly for eight cases for which paired pre- and post-treatment sequencing data were available. By plotting the change in major subclonal tumor proportion over time, we observed changes in the proportion of individual subpopulations in response to therapy and tumor evolution; in all cases, however, several significant populations remained unchanged, and both before and after treatment the tumor was inferred to harbor multiple subclones, suggesting either equivalent fitness of multiple subclones or pressures restricting the ability of any given clone to sweep to fixation (Supplementary Fig. 1e ). DIPG cells escape the pons early during tumor evolution More direct evidence for the presence of multiple, genetically distinct subclones could be seen from sequencing 62 topographically distinct samples from 14 different patients (Supplementary Table 3 ). Comparing the CCFs from across a given tumor sample clearly demonstrated both the ubiquitous presence of presumed driver alterations (histone mutations, NF1 ) (Supplementary Fig. 2a ) and a range of mutations private to only one portion of the tumor. Of note, each distinct tumor region itself was inferred to harbor multiple subclones. The collection of DIPG samples at autopsy represented a unique opportunity to evaluate the spatial heterogeneity of these tumors. In one case (QCTB-R091/R092), distinct low-grade and high-grade components were manually dissected and found to harbor key oncogenic mutations in one and not the other region (for example, PIK3CA H1047R in grade IV and not grade II), in addition to ubiquitous drivers such as ACVR1 (Supplementary Fig. 2b ).
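Conceptually, this multi-region comparison amounts to turning per-region mutation calls into a distance matrix and then a tree. The toy sketch below uses invented binary mutation profiles and UPGMA clustering from scipy as a simple stand-in for formal phylogenetic reconstruction (the study itself applies neighbor-joining to nested subpopulation phylogenies from EXPANDS, as described below):

```python
# Toy reconstruction of a sample tree from multi-region mutation data.
# Rows: tumor regions; columns: presence (1) / absence (0) of a somatic
# variant. All calls are invented for illustration.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage

regions = ["pons_1", "pons_2", "midbrain", "cerebellum", "thalamus"]
profiles = np.array([
    [1, 1, 1, 1, 0, 0],   # pons_1
    [1, 1, 1, 0, 1, 0],   # pons_2
    [1, 1, 0, 0, 0, 1],   # midbrain
    [1, 1, 0, 0, 0, 1],   # cerebellum
    [1, 1, 0, 0, 0, 0],   # thalamus: truncal (shared) variants only
])

# Hamming distance = fraction of variant calls that differ between regions;
# average-linkage (UPGMA) clustering groups regions with shared private events.
tree = linkage(pdist(profiles, metric="hamming"), method="average")
print(tree)  # each row merges two clusters; visualize with scipy's dendrogram
```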
It has previously been shown that these diffusely infiltrating lesions may be found outside the pons and spread throughout the central nervous system at the time of death 30 . Multi-sample sequencing strategies allowed us to again identify early driver events present throughout the tumor cells of an individual patient ( H3F3A , HIST1H3B ), as well as those occurring only at the point of escape of cells from the brainstem, such as mutations in WNK2 , known to act in glioma cell migration and invasion 31 (Supplementary Fig. 2c ). Across multiple sites in multiple samples (Fig. 2 , Supplementary Fig. 3 and Supplementary Table 3 ), mapping SNVs and copy number aberrations revealed branching evolutionary trajectories. This was particularly evident in the most extensively sampled cases (Fig. 2 ), where distinct branches highlighted the profound laterality of tumor evolution, while tumor cells found in midbrain, cerebellar and thalamic regions were seen to diverge early from the pontine mass. While the difference in mutational profiles may be a result of invasive cells cycling more slowly, the presence of convergent or parallel evolution in key oncogenic drivers such as PIK3CA , NF1 , MKI67 , NOTCH1 and DNMT3A (Supplementary Fig. 3 ) strongly suggests a predominantly early evolutionary divergence of cells that subsequently migrated outside the pons. Fig. 2: DIPGs infiltrate the brain through branching evolution and genotypic convergence. a , Thirteen different tumor-harboring regions of HSJD-DIPG-010 were sampled post mortem, from within and outside the pons. Scale bars, 100 µm. b , Exome sequencing was carried out for all regions. CCFs plotted as a heat map for all variants found in at least one specimen, with anatomical location highlighted and color-coded. c , Phylogenetic trees were reconstructed using neighbor-joining algorithms based on the nested subpopulation phylogenies calculated as part of EXPANDS, with evident laterally directed evolution and early escape from the pons of tumor cells found in distinct anatomical sites. GL, germline. d – f , Eight different tumor-harboring regions of HSJD-DIPG-014 subjected to the same analysis. Scale bars, 100 µm. g – i , Eight different tumor-harboring regions of HSJD-DIPG-015 subjected to the same analysis. Scale bars, 100 µm. In vitro isolation of genotypically and phenotypically distinct subclones To determine whether the subclonal tumor cell populations present in pGBM and DIPG represent functionally distinct entities (rather than simply reflecting stochastic alterations occurring as a result of increasing genetic instability), we devised a methodology to isolate and expand single tumor cells under stem cell conditions, referred to as 'stem-like' cells, in both 2D 32 and three-dimensional (3D) culture 33 for further analysis (Fig. 3a ). Using this approach, we identified three primary patient-derived H3.3 K27M mutant samples (two DIPG, one thalamic pGBM) (Supplementary Fig. 4 ) from a well-characterized panel of six cultures (Supplementary Fig. 5a,b ) that readily formed single-cell-derived colonies in both 2D and 3D, at rates varying between 7.5% and 20.8% of cells (Fig. 3b ). Colonies isolated from SU-DIPG-VI were identified using high-content image analysis (Fig. 3c,f ) and displayed highly variable growth characteristics in vitro when grown as 3D neurospheres (Fig. 3d ) and in 2D on laminin (Fig. 3g ).
When sequenced at high depth using a custom-designed targeted panel (Supplementary Table 4 ), in addition to ubiquitously shared mutations (for example, H3F3A , TP53 ), around half the colonies harbored a series of shared mutations not seen in the remainder (for example, PRSS1 , CHD3 ), while most were also found to contain a series of private events restricted to individual cell populations, including genes associated with cell shape and motility ( FLNC , CTTN , RANGAP1 ) (Fig. 3e,h ). Individual laminin-grown colonies with fast (A-D10), intermediate (A-B8) and slow (A-E6) growth rates (Fig. 3i ) were seen to have significantly differing capacities for invasion into Matrigel (Fig. 3j ) and migration on fibronectin (Fig. 3k ) in vitro. Thus, individual tumor samples contain a dynamic diversity of overlapping genotypic and phenotypic populations in the stem-like cell compartment. Fig. 3: Isolation of genotypically and phenotypically diverse single stem-like cell-derived subclones of pediatric GBM and DIPG. a , Isolation of subclonal populations: disaggregation of heterogeneous mixtures of patient-derived tumor cells, flow sorting into single cells in 96-well plates, and colony formation as either 2D cultures, adherent on laminin, or 3D neurospheres, all under stem cell conditions. Individual subclonal colonies are subjected to high-throughput phenotypic analysis and targeted resequencing, and further cultured for detailed in vitro and in vivo mechanistic comparison with heterogeneous bulk populations. b , Percentage of single cells that formed colonies under 2D laminin and 3D neurosphere stem cell conditions are given for six pGBM and DIPG primary patient-derived cell cultures, labeled by anatomical location, histone H3 mutation subgroup (dark green, HIST1H3B ; light green, H3F3A ) and name of the cell line. Mid, midline. c , 3D neurosphere culture from single-cell-derived colonies from SU-DIPG-VI assessed by Celigo S imaging cytometer. d , Growth of single-cell-derived colonies over time, assessed as diameter of neurosphere, labeled and color-coded. e , Targeted sequencing contingency plot of somatic mutations common to all subclones (blue), shared among certain subclones (yellow) and private to individuals (red). f , 2D laminin culture from single-cell-derived colonies from SU-DIPG-VI assessed by Celigo S imaging cytometer. g , Growth of single-cell-derived colonies over time, assessed as diameter of neurosphere, with subclones taken for later analysis highlighted: A-D10 (fast, purple), A-B8 (intermediate, pink) and A-E6 (slow, violet). h , Targeted sequencing contingency plot of somatic mutations common to all subclones (blue), shared among certain subclones (yellow) and private to individuals (red). Gene names are colored to highlight private mutations in selected subclones or common to A-D10 and A-B8 (brown). i , Time course for growth of selected subclones replated and grown over 160 h, highlighting statistically significant differences among subclones and heterogeneous bulk cell populations of SU-DIPG-VI (blue). Representative images at 72 h are provided from the Celigo S cytometer, with tumor cells marked in green. Data derived and representative images taken from n = 3 independent experiments. Scale bars, 500 µm. j , Time course of invasion of cells into a Matrigel matrix over 72 h, either as percentage of the total area in the field of view covered by invading cells, or as a percentage of time zero. 
Representative images given at 72 h, with extent of tumor cell invasion marked in green. Data derived and representative images taken from n = 3 independent experiments. Scale bars, 500 µm. k , Time course of tumor cell migration onto Matrigel over 72 h, either as percentage of the total area of the well covered by migrating cells or as a percentage of that at time zero. Representative images given at 72 h, with extent of tumor cell migration marked in green. Data derived and representative images taken from n = 3 independent experiments. Scale bars, 500 µm. ANOVA. * P < 0.05. ** P < 0.01. *** P < 0.001. All graphs show mean ± s.d. Rare tumor subclones can harbor pathogenic variants driving differing phenotypes For HSJD-DIPG-007, we were able to utilize the ability to isolate these genetically and phenotypically distinct subclonal populations to investigate the role of individual genotypes without needing to artificially engineer the cells. We identified a single-cell-derived neurosphere colony (NS-F10) as harboring a private mutation in the histone H4 methyltransferase KMT5B ( SUV420H1 ) (Fig. 4a ), which was found to be present in the original bulk primary culture in only 2 of 678 reads (0.295%) (Fig. 4b ). This mutation results in the acquisition of a stop codon at amino acid position 187 (R187*), predicted to truncate the protein. Examining published sequencing datasets, we identified another case of pGBM from Schwartzentruber et al. 4 , PGBM18, as harboring a subclonal R699* truncating mutation of KMT5B in 12.2% of reads (Fig. 4b ), demonstrating that this is not a unique observation. By digital droplet PCR, we confirmed that this mutation was present in 49.77% (8,060 of 16,196) of droplets from NS-F10 (assuming heterozygosity, this reflects presence in 99.64% of cells), present in 0.48% (108 of 22,512) of reactions from the original culture, and absent (not significantly different from normal human astrocyte control; 1 of 18,484, 0.009%) from a 'natural isogenic' (confirmed by exome sequencing) counterpart subclone NS-F8 (Fig. 4c ). The KMT5B mutant (NS-F10) and wild-type (NS-F8) subclones did not show appreciable differences from each other, or from the heterogeneous original bulk HSJD-DIPG-007 cells, in terms of morphology or immunophenotype (Supplementary Fig. 5c ). The methyltransferase encoded by the gene is involved predominantly in dimethylation and, to a lesser extent, trimethylation 34 of histone H4K20, and consequently by immunofluorescence we observed a reduction in H4K20me2 in NS-F10 compared to HSJD-DIPG-007 bulk cells and NS-F8 (Fig. 4d ). An unbiased drug screen of all three colonies against 80 chemotherapeutic and targeted agents (Supplementary Fig. 6a–c and Supplementary Table 5 ) revealed significantly enhanced sensitivity of the KMT5B mutant NS-F10 to multiple chemotypes of PARP inhibitor compared to wild-type NS-F8 and HSJD-DIPG-007 bulk cells (10–30-fold difference for talazoparib, 50% cell survival concentration (SF 50 ) 1 nM vs. 11 nM and 31 nM, respectively; 4.5-fold difference for olaparib, SF 50 0.85 μM vs. 3.83 μM and 3.80 μM; NS-F10 vs. NS-F8 and HSJD-DIPG-007 bulk population, ANOVA P < 0.001 in each case) (Fig. 4e ). Notably, when subclones were cocultured, mixed cultures were as insensitive as the heterogeneous bulk population, rather than showing a dilution effect dependent on the relative proportions (Supplementary Fig. 6d ).
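As a back-of-envelope check of the droplet arithmetic quoted above (a sketch only; a full ddPCR analysis applies a Poisson correction for multiply occupied droplets, which is omitted here and likely accounts for the small difference from the reported 99.64%):

```python
# Convert mutant-positive droplet counts to an approximate cell fraction.
def allele_fraction(pos_droplets, total_droplets):
    """At limiting dilution, the fraction of positive droplets approximates
    the mutant allele fraction (Poisson correction omitted)."""
    return pos_droplets / total_droplets

mut_allele_frac = allele_fraction(8060, 16196)   # ~0.4977 in NS-F10
cells_with_het_mut = 2 * mut_allele_frac         # heterozygous: one mutant allele per cell
print(f"{cells_with_het_mut:.2%} of cells")      # ~99.5%, close to the reported 99.64%
```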
Thus the mutation appears to confer a loss of function on these cells, presumably due to an abrogated DNA repair process associated with loss of H4K20me2 (but not H4K20me3 or total H4, Supplementary Fig. 6e ) and recruitment of 53BP1 34 . Fig. 4: Rare DIPG subclones with pathogenic somatic variants driving the cellular phenotype. a , Contingency plot of common (blue), shared (yellow) and private (red) somatic mutations in single-cell-derived neurospheres from primary patient-derived cell culture HSJD-DIPG-007. NS-F10 is the only subclone to harbor a mutation in KMT5B . b , Pile-up representation of sequencing reads aligning to the KMT5B locus at 11q13.2. The R187* (c.559G>A) variant is highlighted in red (boxed for clarity) and is present in 2 of 678 reads of the original heterogeneous sample. Cartoon representation of mutations identified in HSJD-DIPG-007 (c.559G>A, R187*, present in 0.48% of droplets by digital droplet PCR) and MCGL-PGBM18 4 (c.2095G>A, R699*, present in 12.2% total reads by exome sequencing). Amino acid position labeled; SET domain colored blue. c , Digital droplet PCR. Plot of assay for KMT5B wild-type ( x axes) and R187* mutation ( y axes) for normal human astrocytes, heterogeneous bulk cells, and subclones NS-F10 and NS-F8. Mutant reads are present in 49.77% of droplets from NS-F10, equating to 99.64% of cells harboring a heterozygous mutation. They are absent from astrocytes and NS-F8, though are found in 0.48% of droplets from the original bulk preparation. Taken from n = 3 independent experiments. FAM and VIC denote the fluorescent dyes used. d , Heterogeneous bulk HSJD-DIPG-007 cells and subclones were stained using an antibody directed against H4K20me2 (green) or total H4 (red), with nuclei stained with DAPI (blue). Reduced expression of H4K20me2 is observed in KMT5B mutant NS-F10 cells. Representative images taken from n = 3 independent experiments. Scale bars, 50 µm. e , Effect on cell viability (surviving fraction on y axes) of treatment of heterogeneous bulk cells and subclones with increasing concentrations of two different PARP inhibitors ( x axes, log 10 scale). ANOVA was used to test for significance of NS-F10 vs. NS-F8 and HSJD-DIPG-007 bulk culture for talazoparib and olaparib. *** P < 0.001. Data derived from n = 3 independent experiments. f , RNA sequencing (RNA-seq). Heat map of gene expression analysis from RNA sequencing data highlighting differential expression in KMT5B mutant NS-F10 subclones compared to wild-type NS-F8. The most highly elevated genes included a range of extracellular matrix remodelers (represented in gene set enrichment analysis by the gene set “Bowie response to extracellular matrix”) and numerous secreted chemokines (gene set “Reactome chemokine receptors bind chemokines”). KMT5B itself is also differentially expressed. All cell preparations were sequenced, n = 1, and statistical comparisons made by gene set enrichment analysis using the Kolmogorov–Smirnov test ( P ) with multiple testing correction using the false discovery rate ( q ). ES, enrichment score. g , Left: immunofluorescence of bulk HSJD-DIPG-007 cells and subclones stained using an antibody directed against α 5 -integrin (red). Nuclei are stained with DAPI (blue). Right: immunohistochemistry of embedded bulk HSJD-DIPG-007 cells and subclones stained using an antibody directed against α 5 -integrin, counterstained with hematoxylin. Representative images taken from n = 3 independent experiments. Scale bars, 50 µm.
h , Neurosphere growth of HSJD-DIPG-007 and derived subclones seeded with different cell densities, showing significantly elevated growth in the heterogeneous bulk cells, but not among subclones. Data derived and representative images taken from n = 3 independent experiments. Scale bars, 500 µm. i , Time course of tumor cell invasion into Matrigel over 72 h, as a percentage of that at time zero using the Celigo S cytometer. Representative images given at 72 h, with extent of tumor cell invasion marked in green. Data derived and representative images taken from n = 3 independent experiments. Scale bars, 500 µm. j , Time course of tumor cell migration onto a fibronectin matrix over 72 h, as a percentage of time zero using the Celigo S cytometer. Representative images given at 72 h, with extent of tumor cell migration marked in green. Data derived and representative images taken from n = 3 independent experiments. Scale bars, 500 µm. k , Migration in response to stimulation with either conditioned medium (cond. med.) from HSJD-DIPG-007 heterogeneous bulk cells or the chemokines CCL2 and CXCL2. Values are given as a percentage of that of unstimulated (unstim.) cells at 24 h using the Celigo S cytometer. Representative images are given, with extent of tumor cell migration marked in green. Data derived and representative images taken from n = 3 independent experiments. Scale bars, 500 µm. All comparisons carried out by ANOVA, * P < 0.05. ** P < 0.01. *** P < 0.001. All graphs show mean ± s.d. Full size image Distinct infiltrative phenotypes of genotypically divergent DIPG subclones in vivo RNA sequencing analysis of the subclones revealed elevated gene expression in NS-F10 cells of a range of genes associated with remodeling the extracellular matrix (Fig. 4f and Supplementary Table 6 ). These included the fibronectin receptors α 3 - and α 5 -integrin, with differential protein expression validated by immunofluorescence and immunohistochemistry (Fig. 4g ). Although there was a slightly enhanced growth capability of the heterogeneous HSJD-DIPG-007 bulk cells (Fig. 4h ), NS-F10 and NS-F8 subclones were similar to each other in growth, though we did observe significant differences in invasion into Matrigel (Fig. 4i ) and migration on fibronectin (Fig. 4j ), even after growth had been controlled for. The lower α-integrin expression of NS-F8 likely underlies its inability to migrate on fibronectin; of note, NS-F8 neurospheres also showed a significantly reduced migratory capacity compared with NS-F10 on a range of other substrates, including tenascin-C, laminin and Matrigel (Supplementary Fig. 6f ). In all instances, the mixed population bulk HSJD-DIPG-007 cells were significantly more migratory than either subclone. The KMT5B wild-type NS-F8 cells had significantly reduced invasive and migratory capacities, which could be reversed (unlike those of the KMT5B mutant NS-F10 cells) upon culture with conditioned medium from the HSJD-DIPG-007 bulk cells (Fig. 4k ), suggesting the presence of secreted factors absent from the isolated NS-F8 cultures. These cells also responded differentially to the chemokine CXCL2, showing significantly enhanced migration on fibronectin (Fig. 4k ). We chose this CXC ligand because it was one of the most differentially expressed genes by RNA sequencing analysis in NS-F10 (and HSJD-DIPG-007 bulk) compared to NS-F8 (Fig. 4f ). Thus we have a model whereby paracrine signaling between subclones underlies the cooperative interactions observed in mixed populations.
In line with in vitro data, phenotypic differences were also recapitulated in vivo, where both bulk cell populations and NS-F10 subclones formed diffusely infiltrating tumors within 23–24 weeks after orthotopic implantation in the pons of NOD-SCID mice, whereas NS-F8 lesions were substantially less infiltrative and conferred a lower tumor burden, even after 30–32 weeks, despite there being little difference in proliferative capacity and immunophenotype in the brains (Fig. 5a ). NS-F8 tumor-bearing mice also had a longer survival (median = 205 d (NS-F8) vs. 141.5 d (NS-F10) and 169 d (bulk), P = 0.0236, log-rank test) (Fig. 5b ). Tumors from heterogeneous bulk cells were confirmed by digital droplet PCR to harbor a low subclonal frequency of KMT5B R187* mutation (0.23%) (Fig. 5c ), indicating no significant selective pressure against this rare subclone within the heterogeneous population. Thus even rare tumor cell subclonal populations may have distinct behaviors in vitro and in vivo that are important to key phenotypic features of DIPG and that currently preclude effective treatments. Fig. 5: Distinct infiltrative phenotypes of genotypically divergent DIPG subclones in vivo. a , Heterogeneous HSJD-DIPG-007 bulk cells and NS-F10 and NS-F8 subclones were implanted directly into the pons of NOD-SCID mice and tumors allowed to form over 8 months. At weeks 23–24, bulk cells and NS-F10 formed diffusely infiltrating tumors throughout the brain, as seen by H&E staining as well as immunohistochemistry with anti-human nuclear antigen (HNA) or astrocyte marker GFAP, whereas NS-F8 had formed considerably less infiltrative lesions even at 30 weeks. Representative images from a total of n = 4 mice per group. Main scale bars, 1,000 µm; inset scale bars, 50 µm. b , Tumor-bearing animals implanted with NS-F8 subclones had significantly longer survival than those implanted with heterogeneous HSJD-DIPG-007 bulk cells or NS-F10 ( P = 0.0236, log-rank test, n = 4 mice per group). * P < 0.05. c , Digital droplet PCR. Plot of assay for KMT5B wild-type ( x axes) and R187* mutation ( y axes) for normal human astrocytes and tumors from mice implanted with heterogeneous bulk cells, and subclones NS-F10 and NS-F8. Mutant reads are present in 51.33% of droplets from NS-F10 and 0.23% of droplets from the original bulk preparation. Taken from n = 3 independent experiments. d , Heterogeneous SU-DIPG-VI bulk cells and A-D10 and A-E6 subclones were implanted directly into the pons of nude mice and tumors allowed to form over 8 months. At week 10, bulk cells and A-D10 formed highly cellular, infiltrating tumors, as seen by H&E staining or immunohistochemistry with anti-HNA, whereas A-E6 had formed considerably less cellular lesions even at 14 weeks. Representative images from a total of n = 8 mice per group. Main scale bars, 1,000 µm; inset scale bars, 50 µm. e , Tumor-bearing animals implanted with A-E6 subclones had significantly longer survival than those implanted with heterogeneous SU-DIPG-VI bulk cells or A-D10 ( P = 0.037, log-rank test, n = 8 mice per group). Full size image Notably, we observed similar results in a second model. A slower-growing subclone of SU-DIPG-VI in vitro, A-E6, formed a less cellular tumor (Fig. 5d ) and, when grown orthotopically in vivo, conferred an extended survival of more than 118 d longer than a rapidly proliferating, highly invasive subclone (A-D10) and 154 d longer than the unselected bulk culture ( P = 0.037, log-rank test) (Fig. 5e ).
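The survival comparisons reported above (median survival plus a log-rank P value) follow standard Kaplan–Meier methodology. As a rough illustration, such a two-group comparison could be run in Python with the lifelines package; the day values below are invented placeholders, not the study data.

```python
# Hedged sketch of a Kaplan-Meier / log-rank comparison of the kind reported
# above. Survival times (days) are hypothetical; 1 = death observed.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

ns_f8_days = [180, 195, 205, 230]    # slower, less infiltrative subclone
ns_f10_days = [130, 138, 145, 160]   # faster, diffusely infiltrating subclone
events = [1, 1, 1, 1]                # all deaths observed (no censoring)

kmf = KaplanMeierFitter()
kmf.fit(ns_f8_days, event_observed=events, label="NS-F8")
print(kmf.median_survival_time_)     # Kaplan-Meier median survival (d)

result = logrank_test(ns_f8_days, ns_f10_days,
                      event_observed_A=events, event_observed_B=events)
print(result.p_value)                # two-sided log-rank P value
```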
DIPG subclones cooperate to enhance tumorigenic phenotypes To explore the nature of these subclonal interactions, we differentially labeled and cocultured genotypically and phenotypically distinct subclones from two DIPG samples: A-E6 and A-D10 from SU-DIPG-VI (Fig. 6a–d ) and NS-F8 and NS-F10 from HSJD-DIPG-007 (Fig. 6e–h ). When cultured in equal proportions after replating as single neurospheres, there was little difference in observed growth rates (Fig. 6a,e ); however, a marked enhancement of invasion and migration was conferred on the poorly motile subclones by coculture with their more invasive and migratory counterparts (Fig. 6b,c,f,g ). In both models, cell labeling allowed us to demonstrate that this was not a simple dilution effect of the mixture, but that the specific subclones otherwise lacking a pronounced ability to invade into Matrigel (Fig. 6d ) or migrate on fibronectin (Fig. 6h ) had markedly enhanced phenotypes, clearly colocalizing and moving in concert alongside their natural isogenic pairs (Supplementary Videos 1 and 2 ). In vivo, cocultured NS-F8 and NS-F10 were found to retain their mixed proportions and infiltrate more extensively throughout the central nervous system than NS-F8 alone (Supplementary Fig. 6g ), conferring shorter survival on mice harboring these orthotopic tumors ( P = 0.045, log-rank test) (Supplementary Fig. 6h ). Thus we conclude that there exists an actively maintained cooperative network of subclones within DIPGs that depends on strongly positive interactions to elicit the highly aggressive clinical phenotypes seen in children with this incurable disease. Fig. 6: DIPG subclones cooperate to enhance tumorigenic phenotypes. Individual subclones of SU-DIPG-VI ( a – d ) and HSJD-DIPG-007 ( e – h ) were differentially labeled and cultured either as pure populations or mixed in equal ratios. a , Growth of cocultured (yellow) and monocultured E6 (green) and D10 (red) cells plated as single neurospheres after 96 h, measured as diameter of the sphere, with representative images provided from the Celigo S cytometer under phase contrast (phase) and fluorescence (fluor). Data derived and representative images taken from n = 3 independent experiments. Scale bars, 500 µm. b , Invasion of cocultured (yellow) and monocultured E6 (green) and D10 (red) cells into Matrigel over 168 h, with area assessed by ImageJ software from representative images provided from the Celigo S cytometer under phase contrast and fluorescence. Cocultures and D10 have significantly enhanced invasive capabilities compared to E6. Data derived and representative images taken from n = 3 independent experiments. Scale bars, 500 µm. c , Migration of mono- and cocultured E6 (green) and D10 (red) cells on Matrigel, assessed by the number of differentially labeled distant cells at 24 h, with representative images provided from the IncuCyte Zoom live-cell analysis system under phase contrast and fluorescence. Cells from individual subclones have enhanced migratory properties when cultured together compared to alone. Data derived and representative images taken from n = 3 independent experiments. Scale bars, 500 µm. d , Confocal microscopy analysis of invasion of mono- and cocultured E6 (green) and D10 (red) cells into Matrigel after 4 d, with nuclei stained with DAPI. Poorly motile E6 cells are found to invade further and in greater numbers alongside D10 cells than when cultured alone. Representative images taken from n = 3 independent experiments. Scale bars, 200 µm.
e , Growth of cocultured (yellow) and monocultured NS-F8 (green) and NS-F10 (red) cells plated as single neurospheres after 96 h, measured as diameter of the sphere, with representative images provided from the Celigo S cytometer under phase contrast and fluorescence. Data derived and representative images taken from n = 3 independent experiments. Scale bars, 500 µm. f , Invasion of cocultured (yellow) and monocultured NS-F8 (green) and NS-F10 (red) cells into Matrigel over 72 h, with area assessed by ImageJ software from representative images provided from the Celigo S cytometer under phase contrast and fluorescence. Cocultures and NS-F10 have significantly enhanced invasive capabilities compared to NS-F8. Data derived and representative images taken from n = 3 independent experiments. Scale bars, 500 µm. g , Migration of mono- and cocultured NS-F8 (green) and NS-F10 (red) on fibronectin, assessed by the number of differentially labeled distant cells at 48 h, with representative images provided from the IncuCyte Zoom live-cell analysis system under phase contrast and fluorescence. Cells from NS-F8 have enhanced migratory properties when cultured with NS-F10 compared to alone. Data derived and representative images taken from n = 3 independent experiments. Scale bars, 500 µm. h , Confocal microscopy analysis of migration of mono- and cocultured NS-F8 (green) and NS-F10 (red) cells on fibronectin after 3 d, with nuclei stained with DAPI. Poorly motile NS-F8 cells are found to migrate further and in greater numbers alongside NS-F10 cells than when cultured alone. Representative images taken from n = 3 independent experiments. Scale bars, 200 µm. All comparisons carried out by ANOVA, ** P < 0.01. *** P < 0.001. All graphs show mean ± s.d. Full size image Discussion Widespread intratumoral heterogeneity in human cancer has become a prevalent theme in high-throughput sequence analysis of tumor specimens, with critically important implications for the success of therapeutic targeting 35 . Less attention has been given to the functional implications of this subclonal diversity and the interactions between distinct tumor subpopulations. Here we utilize pGBM and DIPG as cancer types with a relatively low mutational burden, yet a high degree of heterogeneity, to isolate these genotypically and phenotypically different compartments and provide evidence that subclonal diversity is selected for as a result of cooperative interactions that promote tumorigenesis. Single-cell-derived colonies were established and expanded under stem cell culture conditions, though without marker preselection, in contrast to a recent approach 36 . Phenotypic differences were observed, in terms of morphology, growth, migration and invasion, that could be linked directly to concurrent genotypic differences in the subclones. These properties, first identified through high-content screening during initial expansion, were maintained upon repassaging in short-term culture, indicating inherently fixed characteristics in markedly different tumor cell subpopulations. SNVs differing among single-cell-derived colonies could be found at low frequencies in the original tumor mass and thus reflected not an artifact acquired under the culture conditions but instead a propensity of genotypically distinct subclones to harbor stem-like properties, further evidenced by their tumorigenic capacity in vivo.
It has previously been proposed that a branched Darwinian evolution model integrated with a hierarchy of multiple cancer stem cell populations may help explain the spatial and temporal characteristics of observed intratumoral heterogeneity 37 , with evidence provided in leukemia 38 and solid tumors 39 , 40 , 41 . In our models, the phenotypes of individual subclones were substantially less pronounced than those of heterogeneous unsorted primary cultures, with the enhanced growth, invasion and migration properties of mixed populations of cells supporting the interpretation of sequencing analyses suggesting that subclonal diversity is selected for spatially 42 , 43 and temporally 44 in these tumors. The maintenance of such stable coexistence during tumor evolution implies a degree of cooperativity. In our example, we isolated ‘natural isogenic’ subclonal populations differing by a key loss-of-function mutation in an H4K20 methyltransferase, in which the more migratory mutant cells were able to confer such properties on their wild-type counterparts, seemingly at least in part through expression of key chemokines such as CXCL2. A similar concept of ‘cooperative invasion’ was first identified in melanoma 45 , whereby phenotypically distinct subpopulations of cells were found to comigrate, a phenomenon also observed in DIPG in our genotypically distinct cells. Likewise, a recent elegant study using a lentivirally transduced triple-negative breast cancer cell line reconstructed an aggressive phenotype in vivo using only two cooperating subclones: those overexpressing IL-11 and VEGFD 46 . Such a mechanism obviates the need for clonal selection to drive tumorigenesis and predicts the maintenance of intratumoral heterogeneity we observe. Notably, the proportion of cells that harbor these more enhanced phenotypes may be low, and these cells may therefore remain unidentified in bulk tumor profiling studies while remaining critical in tumor development and maintenance. pGBM harboring H3.3 G34R or G34V mutations were found to carry a higher mutational burden and a greater subclonal diversity than other tumor subgroups. Although the mechanisms are not known, this likely reflects an underlying DNA repair defect associated with the inability of the mutant histone to be trimethylated at H3K36, disrupting its important function in mismatch repair 47 , 48 . Despite this, these tumors do not have the mutational burden of hypermutator cases with biallelic mismatch repair deficiency, for whom immune checkpoint inhibitors appear to offer an exciting new therapeutic option 49 . It is not clear, therefore, that patients with H3.3 G34R or G34V mutations would benefit from a similar strategy. Unfortunately, no H3.3 G34R or G34V cultures were available for our study, and most of our functional work was focused on H3.3 K27M mutant DIPG samples, which were more amenable to single-cell-derived colony formation in our assay than other tumor genotypes (although it is not clear whether this reflects imperfect culture conditions for these subgroups). It has previously been shown that these diffusely infiltrating lesions may be found outside the pons and spread throughout the central nervous system at the time of death 30 . Reconstructing phylogenies through sequencing of tumor cells spread throughout the brain at autopsy indicates an early escape of migratory cells from the pons, before the rapid proliferative expansion occurring by the time of presentation and treatment.
This has important implications for locally delivered therapies and reopens the debate concerning the initial use of whole-brain irradiation in children with DIPG. The later acquisition of convergent mutations in genes controlling key signaling pathways associated with proliferation at these distant sites also underlies the challenges in preventing tumor recurrence and/or metastasis at anatomically distinct sites in the central nervous system 50 . In summary, these data demonstrate that pGBM and DIPG harbor a complex admixture of genotypically and phenotypically distinct stem-like cells driving a functionally based intratumoral heterogeneity. Understanding how the derived subclones interact and adapt to the tumor microenvironment, and to therapy, will be a key requirement for maximizing patient benefit from existing treatment options. Future strategies aimed at disrupting these interactions may represent a new therapeutic approach in these diseases. Methods Published sequencing data Raw data were obtained from the European Genome-phenome Archive from five published sequencing studies and provided under data access agreements from the St. Jude Children’s Research Hospital–Washington University Pediatric Cancer Genome Project (accession code EGAS00001000192 ) 6 , 7 , The Hospital for Sick Children ( EGAS00001000575 ) 2 and the McGill University–DKFZ Pediatric Brain Tumour Consortium ( EGAS00001000226 4 and EGAS00001000720 3 ). We also included data from our own study ( EGAS00001000572 ) 5 and from four tumors collected via the Institute of Cancer Research (South West London MREC-approved study 10/H0803/126 with full consent) included in a recent International Cancer Genome Consortium (ICGC) study ( EGAS00001001139 ) 51 , all of which were also part of a recent genomics meta-analysis by our group 29 , processed data from which are available online. In total, we obtained whole-genome ( n = 70) or exome ( n = 72) data from 142 pGBM and DIPG patients for whom matched germline data were available, six of whom also had data from paired longitudinal sampling. The median age was 6.8 years at diagnosis and the median survival 11.45 months (Supplementary Table 2 ). Patients and samples All patient material was studied under South West London Research Ethics Committee approval. We obtained longitudinal paired samples from two patients from the Centre Hospitalier Régional et Universitaire Hautepierre, Strasbourg, France; five DIPG patients with multiple sampling taken at autopsy from the Hospital San Joan de Deu, Barcelona, Spain; five DIPG patients with multiple sampling taken at autopsy from Stanford Medical School, Stanford, CA, USA; three patients with multiple samples from the Queensland Children’s Tumour Bank, Brisbane, Australia; and one patient each with multiple samples from St Georges Hospital and Kings College Hospital, London, UK (Supplementary Table 3 ), all of which were collected locally after informed consent. The four previously sequenced patient samples were obtained from the Chinese University of Hong Kong, China ( n = 3) and University Hospital Sousse, Tunisia ( n = 1). DNA was extracted from frozen tissue by homogenization before following the DNeasy Blood & Tissue kit protocol (Qiagen, Crawley, UK). DNA was extracted from FFPE material from either 20-µm ribbons ( n = 2–4 per sample) or 5-µm sections cut onto slides ( n = 10 per sample). Slides were hydrated through an ethanol series before manual microdissection into a tube using a sterile fine needle.
All tissue was incubated overnight with proteinase K at 56 °C with a further incubation for 3 h the following morning, before following the QIAamp DNA FFPE tissue kit protocol (Qiagen, Crawley, UK) using 360 µL of Buffer AL and 360 µL of ethanol, and eluted using 25 µL of 10 mM Tris buffer at pH 8.5 for 7 min. Matched normal DNA was extracted from blood samples using the DNeasy Blood & Tissue kit (Qiagen, Crawley, UK). Concentrations were measured using a Qubit fluorometer (Life Technologies, Paisley, UK), with at least 400 ng sent for exome sequencing at the Tumour Profiling Unit, ICR, London, UK using the 50 Mb Agilent SureSelect platform (Agilent, Santa Clara, CA, USA), and paired-end-sequenced on an Illumina HiSeq2000 (Illumina, San Diego, CA, USA) with a 100-bp read length. The average median coverage was 148× for the tumor exomes and 108× for tumor genomes. Sequence analysis For both published and newly generated raw sequencing data, reads were aligned to the hg19 build of the human genome using bwa v0.7.5a, and PCR duplicates were removed with PicardTools 1.5. Somatic single nucleotide variants were called using the Genome Analysis Toolkit v3.3-0 based on current best practices using local realignment around insertions or deletions, downsampling and base recalibration, with variants called by the Unified Genotyper. Structural variants were called from whole-genome data using BreakDancer, filtered to remove commonly multi-mapped regions, to identify somatic breakpoints separated by a minimum of 10 kbp involving at least one Ensembl gene. Variants were annotated using the Ensembl Variant Effect Predictor v71 incorporating SIFT and PolyPhen predictions, and COSMIC v64 and dbSNP build 137 annotations. Somatic variants used for further subclonal analysis (non-synonymous and synonymous) were covered by at least 10 reads in both tumor and normal sequences. Copy number was obtained by calculating log 2 ratios of tumor/normal coverage binned into exons of known genes, smoothed using circular binary segmentation and processed using in-house scripts. To infer the proportion of tumor cells in each sample carrying any given mutation, we calculated the cancer cell fraction (CCF) for each somatic variant 25 . Briefly, we determined the somatic allele-specific copy number profiles using read depth from whole-genome or exome sequencing as above analyzed by ASCAT 26 , which also provided an estimate of the non-neoplastic cell contamination of the sample as well as the overall ploidy of the tumor. Loss of heterozygosity (LOH) was also calculated using ASCAT based on a minor allele frequency <0.2. Allele-specific copy number, LOH and tumor cell purity were then used to calculate the CCF, which estimates the percentage of tumor cells carrying each mutation 25 , and truncated to 100% where experimental variability in sequence reads produced a value greater than this figure. Intratumoral heterogeneity and the number and frequency of subpopulations within individual tumor samples were calculated with the EXPANDS algorithm using evolutionary biology principles including the Shannon and Simpson indices and allowing for the concept that subclones may share a subset of variants that may be nested within each other 27 . This used copy-number-corrected variant allele frequencies of all somatic coding mutations clustered based on their cell-frequency probability distributions, and subject to pruning, to assign individual mutations to predicted subpopulations 27 .
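As a rough illustration of the CCF estimate described above, assuming for simplicity a single mutated copy per tumor cell, the calculation reduces to rescaling the observed variant allele frequency by tumor purity and local copy number. The function below is a minimal sketch with illustrative inputs, not the pipeline code itself.

```python
# Minimal sketch of a cancer cell fraction (CCF) calculation. The
# single-mutated-copy assumption (multiplicity = 1) is illustrative.
def cancer_cell_fraction(vaf, purity, tumor_cn, normal_cn=2, multiplicity=1):
    """Estimate the fraction of tumor cells carrying a mutation.

    vaf          -- variant allele frequency in the tumor sample
    purity       -- tumor cell fraction of the sample (e.g., from ASCAT)
    tumor_cn     -- total copy number at the locus in tumor cells
    normal_cn    -- copy number in admixed normal cells (2 for autosomes)
    multiplicity -- mutated copies per mutation-bearing tumor cell
    """
    total_alleles = purity * tumor_cn + (1 - purity) * normal_cn
    ccf = vaf * total_alleles / (purity * multiplicity)
    return min(ccf, 1.0)  # truncate to 100%, as described in the text

# Example: VAF of 0.25 in a diploid region of a 60%-pure tumor -> CCF ~0.83
print(cancer_cell_fraction(vaf=0.25, purity=0.6, tumor_cn=2))
```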
For multi-region samples from the same patient, distance matrices derived from the cancer cell fractions of non-synonymous somatic coding mutations in each sample were used to construct phylogenies based upon neighbor-joining algorithms utilizing the nested subpopulations calculated as part of EXPANDS, and visualized using the ape package (v3.1-4 ) in R. For paired longitudinal samples taken pre- and post-treatment, we fitted a kernel density estimate for the tumor variant allele frequencies at both time points and identified cosegregating clusters using a heat map visualization of the resulting biplot 52 . A customized R function identified the x and y coordinates of each cluster centroid, which served as an estimate of the number and relative composition of major subclones present in each sample. These were plotted pre- and post-treatment with colored lines highlighting the inferred relationship between each cluster. Cell culture pGBM and DIPG patient-derived cultures were established either immediately after collection (biopsy, resection or autopsy) or from live cryopreserved tissue, with authenticity verified using short tandem repeat (STR) DNA fingerprinting 5 and certified mycoplasma-free. SU-DIPG-IV and SU-DIPG-VI have been published previously 11 , 33 . Newly established cultures were first minced with a sterile scalpel, followed by gentle enzymatic dissociation with Liberase TL (Roche Life Science) for 30 min at 37 °C. Red blood cells were then lysed using ACK lysis buffer (Life Technologies) and tumor cells passed twice through a 70-μm filter. Cells were grown under stem cell conditions, either as two-dimensional (2D) adherent cultures on laminin 32 or as three-dimensional (3D) neurospheres 33 . Cortical pGBM cultures ICR-G358 and HSJD-GBM-01 were cultured in a serum-free medium composed of the neural stem cell culture medium RHB-A (StemCells, Inc., Cambridge, UK) supplemented with human bFGF (20 ng/mL), human EGF (20 ng/mL), human PDGF-AB (20 ng/mL) (Miltenyi Biotec Ltd., Bisley, UK) and heparin (2 ng/mL) (Stem Cell Technologies, Vancouver, BC, Canada). Thalamic H3.3 K27M pGBM QCTB-R059 and DIPGs HSJD-DIPG-007, SU-DIPG-IV and SU-DIPG-VI were cultured in a serum-free medium designated as Tumor Stem Medium (TSM) as previously described 11 , consisting of 1:1 Neurobasal(-A) (Invitrogen, Carlsbad, CA) and DMEM:F12 (Life Technologies) supplemented with HEPES, NEAA, GlutaMAX and sodium pyruvate (Life Technologies) and B27(-A) (Invitrogen, Carlsbad, CA), human bFGF (20 ng/mL), human EGF (20 ng/mL), human PDGF-AA (10 ng/mL), human PDGF-BB (10 ng/mL) (Shenandoah Biotech, Warwick, PA) and heparin (2 ng/mL) (Stem Cell Technologies, Vancouver, BC, Canada). Establishment of single-cell colonies Primary cultures were single-cell flow sorted into the inner 60 wells of 96-well plates using a FACSAria I (SORP) instrument (BD) equipped with an automated cell deposition unit. Single cells were dropped in 100 μL per well of the same medium as described above, with the addition of penicillin and streptomycin (Life Technologies). Two 96-well flat bottom plates (Greiner Bio-one) were collected for 2D adherent culture and one 96-well round bottom ultra-low attachment plate (Corning) was collected for 3D neurosphere culture. The outer 16 wells were filled with 200 μL per well of PBS to avoid evaporation of medium. 96-well plates were incubated at 37 °C, 5% CO 2 , 95% humidity, and cells refed twice weekly with 10–20 μL of medium per well.
Fully automated image analysis of single-cell-derived colonies in 2D and 3D was carried out on a Celigo S cytometer (Nexcelom Inc.) 53 . At indicated time points, 96-well plates were scanned, images acquired and growth assessed using the Confluence application for 2D adherent culture on laminin and the Tumoursphere application for determining the diameter of the neurospheres. Single-cell-derived adherent colonies were collected when they reached approximately 80% confluency, while the neurospheres were collected at around 700–800 μm diameter. On collection day, 10% of the cells were used to expand individual subclonal cultures, with the remaining 90% used for DNA extraction after overnight incubation with proteinase K and RNase A using the QIAamp DNA micro kit (Qiagen) and elution using 25 µL of 10 mM Tris buffer pH 8.5 for 5 min before quantification. A minimum of 50 ng DNA was used for targeted resequencing using a custom Agilent SureSelect panel of 435 genes recurrently mutated in pGBM or DIPG, as well as all members of the histone gene family (Supplementary Table 4 ). High-throughput assays and content image analysis 3D invasion assays were performed as previously described 53 , 54 , with some modifications. Briefly, a total of 100 µL medium was removed from ULA 96-well round-bottomed plates containing neurospheres 250–300 μm in diameter (given the different growth rates among the bulk cells and the single-cell-derived colonies, cell densities were adjusted to obtain similar-sized neurospheres). Matrigel (100 μL) was gently added to each well (6 replicates) and plates were incubated at 37 °C, 5% CO 2 , 95% humidity for 1 h. Once the Matrigel solidified, 100 μL per well of culture medium was added on top. Starting from time zero, and at intervals up to 72 h, automated image analysis was carried out on a Celigo S imaging cytometer using the Confluence application. The degree of cell spreading in the Matrigel was measured and the data plotted either as percentage of total area in the field of view covered by invading cells or as percentage of initial size of each neurosphere at time zero ( n = 3). 3D migration assays were similarly performed as previously described 53 , 55 , with some modifications. Briefly, flat-bottomed 96-well plates (Greiner Bio-one) were coated for 2 h at room temperature with 50 μL per well of fibronectin, laminin or tenascin-C (Sigma-Aldrich) at 10 μg/mL in PBS with calcium and magnesium, or 125 μg/mL Matrigel (Corning) in culture medium in the absence of growth factors. Once coating was completed, a total of 200 µL per well of culture medium was added. For stimulation assays, CCL2 or CXCL2 (20 ng/mL and 50 ng/mL in TSM medium starved of all growth factors and B27 supplement), or medium harvested after 5 d of culture of heterogeneous HSJD-DIPG-007 cells, was used. A total of 100 μL medium was removed from ULA 96-well round-bottomed plates containing neurospheres 250–300 μm in diameter, and the remaining medium, including the neurosphere, was transferred to the precoated plates. Starting from time zero, and at intervals up to 72 h, automated image analysis was carried out on a Celigo S cytometer using the Confluence application. The degree of cell spreading on the different matrices was measured and data plotted either as percentage of total area in the well covered by migrating cells or as a percentage of the initial size of each neurosphere at time zero ( n = 3).
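The readout described above, the area covered by invading or migrating cells expressed either against the field of view or against the neurosphere footprint at time zero, amounts to simple mask arithmetic once the images are segmented. The sketch below assumes pre-segmented binary masks and toy dimensions; it is not the Celigo software's own implementation.

```python
# Sketch of the area-based invasion/migration readout, assuming binary masks
# (True = cell-covered pixel) already segmented from the scanned images.
import numpy as np

def coverage_stats(mask_t, mask_t0):
    """Return (% of field covered at time t, % of the time-zero area)."""
    field_pct = 100.0 * mask_t.sum() / mask_t.size
    relative_pct = 100.0 * mask_t.sum() / mask_t0.sum()
    return field_pct, relative_pct

# Toy example: a neurosphere whose footprint roughly doubles over the assay
mask_t0 = np.zeros((100, 100), dtype=bool)
mask_t0[40:60, 40:60] = True                  # 400-pixel sphere at time zero
mask_t = np.zeros((100, 100), dtype=bool)
mask_t[35:65, 34:62] = True                   # 840 pixels after spreading
print(coverage_stats(mask_t, mask_t0))        # -> (8.4, 210.0)
```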
Digital droplet PCR Digital droplet PCR was carried out on genomic DNA extracted from normal human astrocytes, heterogeneous HSJD-DIPG-007 bulk cells and subclones NS-F10 and NS-F8 using primers designed to detect KMT5B R187* (forward: GGCAATATTTCAAATCCACTGTCAGTT; reverse: GCAGGGTATACCATTTAAAGTCATTATCAATTTTTTTT) on a QX200 digital PCR platform (Bio-Rad). Reporter sequences were CAAACATTCGCAAATA (VIC, wild-type) and CAAACATTCACAAATAA (FAM, mutant). Briefly, the 20-µL reactions consisted of 10 µL ddPCR Supermix for Probes (no dUTP, Bio-Rad), primers and probes at the same molar concentrations as used in qPCR, DNA up to 50 ng, and molecular biology grade water. Each reaction was homogenized and partitioned into a theoretical maximum of around 23,000 droplets by creating an emulsion with Droplet Generation Oil for Probes (Bio-Rad). The 0.85-nL droplets were then amplified using standard PCR cycling parameters and an annealing temperature of 60 °C in accordance with the manufacturer’s recommendations. At endpoint, the fluorescence of each individual droplet was read on the droplet reader to identify presence or absence of mutant and wild-type target sequences. The QuantaSoft program (v1.4) fitted the droplet counts to a Poisson distribution to enumerate the DNA copies, from which the DNA concentration and mutant fraction could be calculated (a toy version of this conversion is sketched after the drug screening section below). Drug screening An in-house drug library encompassing 80 drugs used either in clinical practice or in late-stage development was screened. Each compound was dissolved in 100% dimethyl sulfoxide (DMSO) to give 5 mM stocks and then diluted to 0.5, 0.05, 0.005 and 0.0005 mM stocks in 96-well two-dimensional matrix plates. Daughter plates in 384-well format were prepared from these 96-well two-dimensional matrix racks using the Hamilton Microlab Star robotic platform. Compounds were stored under a nitrogen atmosphere using a StoragePod (Roylan Developments, Leatherhead, UK). Cells were seeded (1,500 cells per well) into 384-well plates using a MultiDrop Combi Dispenser (Thermo Fisher Scientific, Leicestershire, UK) and allowed to form neurospheres as described above. Replicate cell plates were then loaded onto the Microlab Star screening platform and drug plates were serially diluted in complete tumor stem cell medium before being added to the cell plates. The final drug concentrations used for each drug were 1,000, 500, 100, 50, 10, 5, 1 and 0.5 nM. The final DMSO concentration in all wells was 0.2% (v/v). Controls included 0.2% (v/v) DMSO (negative) and 10 μM staurosporine (positive, Sigma-Aldrich). After incubation in drug-containing medium for 5 d, cell viability was quantified with CellTiter-Glo (Promega) using a Victor X5 Multi-label plate reader luminescence protocol (Perkin Elmer, Waltham, MA, USA). Luminescence data from each well were normalized to the median signal from DMSO-containing wells to calculate the survival fraction. Plate-centered data from each screen were standardized by the use of a Z score statistic, where Z = 0 represents no effect on viability and negative Z scores represent loss of viability. Z scores were calculated using the median absolute deviation (MAD) of all effects in each cell line 56 , 57 . Selective differential hits were validated individually using a wider range of concentrations using CellTiter-Glo as a readout of cell viability, in pure populations as well as mixed cocultures, and surviving fractions calculated as before.
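The Poisson step referenced in the digital droplet PCR section converts the fraction of positive droplets into a mean copies-per-droplet estimate, correcting for droplets that received more than one template. A toy version is sketched below; the mutant droplet count echoes the NS-F10 assay, while the wild-type count and the simple ratio-based mutant fraction are illustrative assumptions.

```python
# Toy sketch of Poisson-corrected ddPCR quantification. Counts and the
# ratio-based mutant fraction are illustrative, not the QuantaSoft output.
import math

DROPLET_VOLUME_UL = 0.85e-3  # 0.85 nL per droplet, expressed in microlitres

def copies_per_microlitre(positive, total):
    lam = -math.log(1 - positive / total)  # mean template copies per droplet
    return lam / DROPLET_VOLUME_UL

mut = copies_per_microlitre(8060, 16196)   # mutant-positive droplets (NS-F10)
wt = copies_per_microlitre(8136, 16196)    # hypothetical wild-type count
print(mut / (mut + wt))                    # mutant allele fraction ~0.5
```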
RNA sequencing RNA was extracted by following the RNeasy Mini Kit protocol (Qiagen), quantified on a 2100 Bioanalyzer (Agilent Technologies), and sequenced on an Illumina GA-II genome analyzer as 100-bp paired-end reads. RNA sequences were aligned to hg19 and organized into de novo spliced alignments using bowtie2 and TopHat2. Raw read counts and fragments per kilobase of transcript per million reads mapped (FPKM) were calculated for all known Ensembl genes in assembly v74 using bedtools and Cufflinks. Immunofluorescence pGBM and DIPG cells were grown either adherent on laminin-precoated 8-well chamber slides (Cole Palmer) or in suspension as neurospheres in 75 cm 2 tissue culture flasks. Cells on chamber slides were fixed with 4% paraformaldehyde at room temperature for 10 min and washed three times with phosphate-buffered saline (PBS) solution. Neurospheres were collected into conical tubes, centrifuged for 10 min at 146 g , washed once with PBS and, after a further centrifugation, fixed in 4% paraformaldehyde overnight at 4 °C and then embedded into agarose as previously described 53 . Paraffin-embedded neurospheres were sectioned using a microtome at 4 µm thickness. Cells were permeabilized with 0.5% Triton X-100 solution for 10 min at room temperature and then blocked with appropriate serum according to the species of secondary antibody for 1 h at room temperature. Secondary antibodies used were goat anti-mouse (A11001, ThermoFisher), donkey anti-rabbit (A31572, ThermoFisher) and goat anti-rabbit (A11008, ThermoFisher). Primary antibodies directed against nestin (MAB5326 clone 10C2, Millipore, 1:400), SOX2 (3579, Cell Signaling, 1:400), GFAP (Z334, Dako, 1:50), CNPase (MAB326 clone 11-5B, Millipore, 1:200), TUJ-1 (MMS-435P, Covance, 1:2000), Olig-2 (Ab9610, Millipore, 1:200) and Musashi-1 (Ab5977, Millipore, 1:200) were added and incubated overnight at 4 °C. Cells were then washed in PBS three times and incubated with Alexa Fluor 488- or Alexa Fluor 555-conjugated secondary antibodies for 1 h at room temperature. For anti-H3K27me3 (9733, Cell Signaling, 1:100) and anti-α 5 -integrin (ab15031, Abcam, 1:100), samples were incubated at 37 °C for 20 min followed by a secondary antibody incubation at 37 °C for 20 min. Nuclei were counterstained with DAPI and samples mounted with Vectashield (Vector Laboratories) and examined using a Leica DM2500 fluorescence microscope or a Zeiss LSM700 confocal microscope. Coculture experiments Laminin-adherent cultures and neurospheres were dissociated and filtered through a 40-µm cell strainer to remove residual clumps. Single-cell suspensions were then incubated with CellTracker Red CMTPX (D10 and NS-F10) or CellTracker Green CMFDA (E6 and NS-F8) (Life Technologies) at a final concentration of 5 µM following the manufacturer’s instructions for suspension culture. Control unstained cells were incubated with an equivalent amount of DMSO. Once the staining protocol was completed, cells were washed once in complete medium, counted and seeded into 96-well round-bottom ULA plates (Corning) at 1,000 cells per well, either in monoculture or in coculture (50:50), and allowed to form a single neurosphere per well ( n = 6). One or 2 d after seeding, migration assays were performed on fibronectin- (NS-F10 and NS-F8) or Matrigel- (D10 and E6) coated 96-well flat-bottom plates (Essen Bioscience), and brightfield and fluorescent images were acquired on an IncuCyte ZOOM (Essen Bioscience).
Images of a region of interest of identical size across all replicates ( n = 6) were imported into ImageJ software and the number of cells migrated (at 12 h for D10 and E6 and 48 h for NS-F10 and NS-F8) was manually counted using the cell counter plugin and normalized to the cell ratio (100% for the monocultures and 50% for the cocultures). Growth was assessed using the Celigo S as above, while invasion was measured as area covered using ImageJ upon image calibration using a 1-mm graticule. Time-lapse videos were also acquired using a Zeiss LSM700 confocal microscope with images acquired every 30 min. In vivo orthotopic xenograft All experiments were performed after review by the Animal Welfare and Ethical Review Board at the Institute of Cancer Research, in accordance with the UK Home Office Animals (Scientific Procedures) Act 1986, the United Kingdom National Cancer Research Institute guidelines for the welfare of animals in cancer research and the ARRIVE (Animal Research: Reporting In Vivo Experiments) guidelines. A single-cell suspension from heterogeneous bulk cells or subclones (HSJD-DIPG-007, NS-F10, NS-F8 or coculture, in Matrigel; SU-DIPG-VI, A-D10 and A-E6, in medium) was prepared immediately before implantation in four to eight NOD-SCID (HSJD-DIPG-007 and subclones) or nude (NCr- Foxn1 nu ) mice (SU-DIPG-VI and subclones) randomly allocated per group at P35. Animals were anesthetized with ketamine and xylazine (100 mg/kg and 5 mg/kg) and maintained under 1% isoflurane. The cranium was exposed via midline incision under aseptic conditions and a 1 × 1 mm hole was drilled through the skull to the dura. Mice were placed in a stereotactic apparatus and 200,000 cells in 5 μL were stereotactically implanted in the pontine area using a digital pump at an infusion rate of 2 μL/min and a 31-gauge Hamilton syringe. Coordinates used were 1.0 mm lateral to midline, 0.8 mm posterior to lambda, and –4 mm deep to cranial surface. At the completion of infusion, the syringe needle was allowed to remain in place for a minimum of 2 min, then slowly manually withdrawn to minimize backflow of the injected cell suspension. Mice were followed for up to 8 months and were sacrificed upon deterioration of condition, and tissue was taken for further analysis. Mouse brains were collected and fixed in 10% buffered formalin solution for 48 h before division into four parts and embedding in paraffin. Sections 4 µm thick were cut and stained with hematoxylin and eosin. For immunohistochemistry, sodium citrate (pH 6.0) heat-mediated antigen retrieval was performed and staining was carried out using antibodies directed against human nuclear antigen (HNA) (MAB 4383, Millipore, 1:100), human GFAP (M0761 clone 6F2, Dako, 1:300), H3K27me3 (9733, Cell Signaling, 1:100) and Ki67 (M7240, Dako, 1:100). All primary antibodies were diluted into 1% Tris buffer solution with 0.05% Tween-20 except the Ki67 antibody, which was diluted into Dako antibody diluent, and staining was performed using an autostainer. Anti-human GFAP was incubated for 30 min and anti-H3K27me3 and anti-HNA for 1 h, all at room temperature. An Envision detection system (Dako K5007) was used for Ki67 staining, whereas for the others a Novocastra Novolink Polymer Detection Systems Kit (Leica Biosystem RE-7150) was used. Slides were then mounted using Leica CV Ultra mounting medium and assessed by an experienced pathologist (S.P.) blinded to cell identity. Statistical analyses Statistical analysis was carried out using R 3.3.0 and GraphPad Prism 7.
Comparisons between groups of continuous variables employed Student’s t -test or analysis of variance (ANOVA). Univariate differences in survival were analyzed by the Kaplan–Meier method and significance determined by the log-rank test. Multivariate analyses were carried out using the Cox proportional hazards model. All analyses were two-sided, and P < 0.05 after multiple testing correction was considered significant. Reporting Summary Further information on experimental design is available in the Nature Research Reporting Summary linked to this article. Code availability All custom scripts for data processing are available upon reasonable request. Data availability All new sequencing data are deposited in the European Genome-phenome Archive under accession code EGAS00001001436 .
Scientists have discovered that cancerous cells in an aggressive type of childhood brain tumour work together to infiltrate the brain, and this finding could ultimately lead to much-needed new treatments, according to a new study published in Nature Medicine today. In the study, funded by Cancer Research UK with support from Abbie's Army and the DIPG Collaborative, the researchers investigated a type of childhood brain tumour called diffuse intrinsic pontine glioma (DIPG), shining a light on its most aggressive characteristic—its ability to leave the brain stem and send cancer cells to invade the rest of the brain. DIPG is incredibly difficult to treat. Nearly all children with this type of cancer die within two years. The researchers, led by a team at The Institute of Cancer Research, London, used donations of biopsy tissue and the brains of children who had died as a consequence of DIPG to look deep into the tumour and learn more about its cells. They found that DIPGs are heterogeneous, meaning they are made up of more than one type of cell. This enables the cells to 'work' together to leave the original tumour and travel into the brain. The scientists say this shows how complex the genetic make-up of the disease is and that a multi-pronged attack is likely to be necessary for treatment. Professor Chris Jones, who led the study at The Institute of Cancer Research, London, said: "This is the first time we've observed this sort of interaction between different tumour cells in DIPG. The idea that the cells are working together to make the disease grow and become aggressive is new and surprising. Childhood cancers were thought to be very simple but this shows us that isn't always the case. Crucially, this gives us hope that we can develop new treatments. "We desperately want to prevent more families going through the heartbreak of losing a child to this disease. Unfortunately, there is currently no cure for this illness. Children usually can't have surgery because of the tumour's location in the brain stem which controls functions such as breathing, heart rate, blood pressure, and swallowing. And other treatment options such as chemotherapy don't work because it's relatively difficult to get drugs into the brain stem and many DIPG tumours have an inbuilt resistance to chemotherapy." The study also shows that even cells that exist in relatively small numbers in DIPG can exert a profound influence, by leading cells from the main tumour into the rest of the brain to stimulate tumour growth and spread. In this study, researchers saw one type of cell leaving the original DIPG tumour site and migrating into the rest of the brain. This happens early in the evolution of the disease and is a cell type found in relatively small numbers. As this cell type migrates, it releases a chemical messenger called CXCL2, which has the effect of calling other cells from the tumour to follow it. The next stage of research will see the researchers looking for treatments that target the most important subpopulations of cells in the tumour and/or interfere with the cooperation between cells. Professor Richard Gilbertson, Director of the Cancer Research UK Cambridge Centre at the University of Cambridge, said: "This research begins to unravel the complex community of cells that make up DIPG.
Through an elegant combination of molecular and cell biology techniques, this study provides a window into the heart of these tumours, allowing us to begin to decipher how their different cell populations interact with each other to promote the disease. It is exactly this sort of research that is needed if we are to beat this devastating cancer. "Cancer Research UK recognises more must be done to tackle this devastating disease and has committed £25 million to brain tumour research over the next five years. Brain tumours have been identified as a cancer of unmet need; survival rates have not changed significantly in a generation."
10.1038/s41591-018-0086-7
Medicine
Blood test identifies those at-risk for cognitive decline, Alzheimer's within three years
Paper: dx.doi.org/10.1038/nm.3466 Journal information: Nature Medicine
http://dx.doi.org/10.1038/nm.3466
https://medicalxpress.com/news/2014-03-blood-at-risk-cognitive-decline-alzheimer.html
Abstract Alzheimer's disease causes a progressive dementia that currently affects over 35 million individuals worldwide and is expected to affect 115 million by 2050 (ref. 1 ). There are no cures or disease-modifying therapies, and this may be due to our inability to detect the disease before it has progressed to produce evident memory loss and functional decline. Biomarkers of preclinical disease will be critical to the development of disease-modifying or even preventative therapies 2 . Unfortunately, current biomarkers for early disease, including cerebrospinal fluid tau and amyloid-β levels 3 , structural and functional magnetic resonance imaging 4 and the recent use of brain amyloid imaging 5 or inflammaging 6 , are limited because they are either invasive, time-consuming or expensive. Blood-based biomarkers may be a more attractive option, but none can currently detect preclinical Alzheimer's disease with the required sensitivity and specificity 7 . Herein, we describe our lipidomic approach to detecting preclinical Alzheimer's disease in a group of cognitively normal older adults. We discovered and validated a set of ten lipids from peripheral blood that predicted phenoconversion to either amnestic mild cognitive impairment or Alzheimer's disease within a 2–3-year timeframe with over 90% accuracy. This biomarker panel, reflecting cell membrane integrity, may be sensitive to early neurodegeneration of preclinical Alzheimer's disease. Main We enrolled 525 community-dwelling participants, aged 70 and older and otherwise healthy, into this 5-year observational study. Over the course of the study, 74 participants met criteria for amnestic mild cognitive impairment (aMCI) or mild Alzheimer's disease (AD) (Online Methods); 46 were incidental cases at entry, and 28 phenoconverted (Converters) from nonimpaired memory status at entry (Converter pre ). The average time for phenoconversion to either aMCI or AD was 2.1 years (range 1–5 years). We defined three main participant groups in this paper: aMCI/AD, Converter and Normal Control (NC). The participants with aMCI and mild AD were combined into a single group (aMCI/AD) because this group was defined by a primary memory impairment, and aMCI is generally thought to reflect the earliest clinically detectable stage of AD. The aMCI/AD group included the Converters after phenoconversion. The Converters were included at two time points, prior to phenoconversion (Converter pre ), when memory was not impaired, and after phenoconversion (Converter post ), when memory was impaired and they met criteria for either aMCI or AD. The NC group was selected to match the whole aMCI/AD group on the basis of age, education and sex. In the third year of the study, we selected 53 participants with either aMCI or AD for metabolomic and lipidomic biomarker discovery. Included in this aMCI/AD group were 18 Converters. We also selected 53 matched cognitively normal control (NC) participants. For the Converters, blood from both time 0 (at entry to the study) and after phenoconversion was used; for the other subjects, blood from the last available visit was used. We used an internal cross-validation procedure to evaluate the accuracy of the discovered lipidomics profile in classifying 41 additional subjects, consisting of the remaining subset of 21 participants with aMCI/AD, including 10 Converters, and 20 matched NC participants ( Supplementary Table 1 and Supplementary Fig. 1 ).
The aMCI/AD, Converter and NC groups were defined primarily using a composite measure of memory performance (the decline in Z mem for the Converters (C pre versus C post ) is shown in Fig. 1a ). In addition, composite measures of other cognitive abilities ( Supplementary Fig. 2 ) and measures of memory complaints and functional capacities were compiled ( Supplementary Tables 2 and 3 ). The discovery and validation groups did not differ on clinical measures ( F (4,170) = 1.376, P = 0.244) or on any composite z -score ( F (5,169) = 2.118, P = 0.066), demonstrating the general equivalence of the participants used for the discovery and validation phases of the biomarker analysis. Figure 1: Memory composite z -scores and trend plots for the ten-metabolite panel in the discovery phase. ( a ) Box and whisker plot shows the composite memory z -scores ( Z mem ) of the combined discovery and validation samples ( Supplementary Table 3 ). The performance of the Converter group (C pre , Converters at baseline) after phenoconversion (C post ) is plotted for direct comparison. The plot shows Z mem , as described in Supplementary Table 3 . The dotted line centered on 0 represents the median memory composite z -score for the entire cohort of 525 participants, and the black horizontal line represents the cut-off for impairment (−1.35 s.d.). Error bars represent ±s.e.m. As defined, all converters had nonimpaired memory at baseline and impaired memory after phenoconversion. NC, n = 73; C pre , n = 28; C post , n = 28; and aMCI/AD, n = 46. ( b ) The SID-MRM-MS–based quantitative profiling data were subjected to the nonparametric Kruskal-Wallis test using the STAT pack module (Biocrates). Results are shown for a panel of ten metabolites in the NC ( n = 53), C pre ( n = 18), C post ( n = 18) and aMCI/AD ( n = 35) groups, respectively. The abundance of each metabolite is plotted as normalized concentration units (nM). The black solid bars within the boxplot represent the median abundance, and the dotted line represents mean abundance for the given group. Error bars represent ± s.d. QC, quality control samples. P values for comparisons of analytes between groups were ≤ 0.05; the two metabolites with P values <0.005 are indicated with an asterisk. Each Kruskal-Wallis test was followed by Mann-Whitney U -tests for post hoc pairwise comparisons (NC versus C pre and NC versus aMCI/AD). Significance was adjusted for multiple comparisons using Bonferroni's method ( P < 0.025). Source data Full size image We examined 124 plasma samples from the 106 discovery-phase participants for untargeted metabolomic analysis (Online Methods). Metabolomic and lipidomic profiling yielded 2,700 positive-mode features and 1,900 negative-mode features. Metabolites defining the participant groups were selected using the least absolute shrinkage and selection operator (LASSO) penalty 8 , 9 . The LASSO analysis revealed features that assisted in unambiguous class separation between the two nonimpaired groups, the Converter pre group and the NC subjects who did not phenoconvert ( Table 1 ). This untargeted analysis revealed considerably lower phosphatidylinositol in the Converter pre group and higher glycoursodeoxycholic acid in the aMCI/AD group compared to the NC group. These metabolites were unambiguously identified using tandem mass spectrometry ( Supplementary Fig. 3 ).
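The LASSO step above can be approximated with off-the-shelf tooling; the sketch below uses L1-penalized logistic regression from scikit-learn on synthetic data standing in for the positive- and negative-mode features, with the penalty strength C chosen arbitrarily.

```python
# Hedged sketch of LASSO-style feature selection on synthetic metabolomic
# data; dimensions echo the text (~2,700 + 1,900 MS features), values do not.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_subjects, n_features = 106, 4600
X = rng.normal(size=(n_subjects, n_features))
y = rng.integers(0, 2, size=n_subjects)      # 0 = NC, 1 = Converter(pre)

# The L1 penalty shrinks most coefficients exactly to zero, leaving a small
# panel of candidate discriminating metabolites.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(X, y)
selected = np.flatnonzero(lasso.coef_[0])
print(f"{selected.size} features retained out of {n_features}")
```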
Table 1 Putative metabolite markers resulting from binary comparison of the study groups. The untargeted LASSO analysis revealed amino acids and phospholipids to be potent discriminators of the NC and aMCI/AD groups. Thus, we performed stable isotope dilution–multiple reaction monitoring (MRM) mass spectrometry (SID-MRM-MS) to unambiguously identify and quantify the lipids, amino acids and biogenic amines that would discriminate our groups, with emphasis on differences that might predict phenoconversion from NC to aMCI/AD. This targeted analysis revealed significantly lower plasma levels of serotonin, phenylalanine, proline, lysine, phosphatidylcholine (PC), taurine and acylcarnitine (AC) in Converter pre participants who later phenoconverted to aMCI/AD ( Table 2 ). Table 2 Difference detection of putative metabolites using SID-MRM-MS. A notable finding of this targeted metabolomic and lipidomic analysis was the identification of a set of ten metabolites, comprising PCs (PC diacyl (aa) C36:6, PC aa C38:0, PC aa C38:6, PC aa C40:1, PC aa C40:2, PC aa C40:6, PC acyl-alkyl (ae) C40:6), lysophosphatidylcholine (lysoPC a C18:2) and acylcarnitines (ACs) (propionyl AC (C3) and C16:1-OH), that were depleted in the plasma of the Converter pre participants but not in that of the NC group ( Fig. 1b ). These metabolites remained depleted after phenoconversion to aMCI/AD (Converters post ) and were similar to the levels in the aMCI/AD group. We then performed targeted quantitative metabolomic and lipidomic analyses using plasma from a separate group of 40 participants as an independent blinded cross-validation (one sample from the aMCI/AD group was not available for lipidomic analysis). The validation samples were obtained from the clinically defined NC, Converter pre and aMCI/AD subjects and were processed and analyzed using the same SID-MRM-MS technique as in the discovery phase. The targeted quantitative analysis of the validation set revealed levels of the ten-metabolite panel ( Supplementary Fig. 4 ) similar to those observed in the discovery samples ( Fig. 1b ). We used the metabolomic data from the untargeted LASSO analysis to build separate linear classifier models to distinguish the aMCI/AD and Converter pre groups from the NC group. We used receiver operating characteristic (ROC) analysis to assess the performance of the classifier models for group classification. For the Converter pre and NC group classification, the initial LASSO-identified metabolites yielded a robust area under the curve (AUC) of 0.96 ( Fig. 2a ) and a more modest AUC of 0.83 for aMCI/AD and NC group classification. A separate classifier model using the discovered ten-metabolite panel from the targeted metabolomic analysis classified Converter pre and NC participants with an AUC of 0.96 ( Fig. 2b ) and yielded an AUC of 0.827 for the aMCI/AD versus NC classification. To validate our biomarker-based group classification, we applied the same simple logistic classifier model developed for the discovery samples to the independent validation samples. The model classified Converter pre and NC participants with an AUC of 0.92 ( Fig. 2c ) and an AUC of 0.77 for the aMCI/AD versus NC groups, and it yielded a sensitivity of 90% and a specificity of 90% for classifying the Converter pre and NC groups in the validation phase ( Fig. 2c ). Figure 2: ROC results for the lipidomics analyses. ( a – c ) Plots of ROC results from the models derived from the three phases of the lipidomics analysis.
Simple logistic models using only the metabolites identified in each phase of the lipidomics analysis were developed and applied to determine the success of the models in classifying the C pre and NC groups. The red line in each plot represents the ROC curve obtained from the discovery-phase LASSO analysis ( a ), the targeted analysis of the ten metabolites in the discovery phase ( b ) and the application of the ten-metabolite panel developed from the targeted discovery phase in the independent validation phase ( c ). The ROC plots represent sensitivity (i.e., true positive rate) versus 1 − specificity (i.e., false positive rate). We then considered the effects of apolipoprotein E (APOE) genotype on our classification of the Converter pre and NC groups. APOE is involved in lipid metabolism, with the ɛ4 allele known to be a risk factor for AD. The proportion of ɛ4 allele carriers was similar in the aMCI/AD (19/69 = 27.5%), NC (17/73 = 23%) and Converter (5/28 = 17%) groups (χ 2 = 0.19, P = 0.68, not significant). We repeated the classification analyses using the ten-metabolite model with the APOE ɛ4 allele as a covariate. The effect of the ɛ4 allele was not significant ( P = 0.817), and the classification accuracy for the Converter pre and NC groups changed minimally, from an AUC of 0.96 to 0.968 ( P = 0.992, not significant). Furthermore, a classifier model using only APOE ɛ4 produced an AUC of 0.54 for classifying the Converter pre and NC groups, implying virtually random classification. These findings indicate that the presumed pathophysiology reflected by the ten-metabolite biomarker panel is orthogonal to APOE-mediated effects. Here we present the discovery and validation of plasma metabolite changes that distinguish cognitively normal participants who will progress to either aMCI or AD within 2–3 years from those destined to remain cognitively normal in the near future. The defined ten-metabolite profile features PCs and ACs, lipids that have essential structural and functional roles in the integrity and functionality of cell membranes 10 , 11 . Deficits of the plasmalemma in AD have been described previously 12 . Studies have shown decreased plasma PC levels 13 and lysoPC/PC ratios 14 and increased cerebrospinal fluid (CSF) PC metabolites in patients with AD 15 , as well as decreased phosphatidylinositol in the hippocampus 16 and other heteromodal cortical regions 17 . Furthermore, amyloid-β may directly disrupt bilayer integrity by interacting with phospholipids 18 . ACs are known to have a major role in central carbon and lipid metabolism occurring within the mitochondria 11 . They have also been associated with the regulation, production and maintenance of neurons through enhancement of nerve growth factor production 11 ; nerve growth factor is a known potent survival and trophic factor for brain cholinergic neurons, particularly those consistently affected by AD within the basal forebrain 19 , 20 , 21 . Decreasing plasma AC levels in the Converter pre participants in our study may indirectly signal an impending dementia cascade that features loss of these cholinergic neuronal populations. We posit that this ten-lipid biomarker panel, consisting of PC and AC species, reveals the breakdown of neural cell membranes in those individuals destined to phenoconvert from cognitive intactness to aMCI or AD and may mark the transition between preclinical states where synaptic dysfunction and early neurodegeneration give rise to subtle cognitive changes 2 .
Most approaches to fluid-based biomarker discovery have focused on amyloid-β 1–42 (Aβ42), total tau and phosphorylated tau-181 obtained from CSF. Accuracy in classifying symptomatic patients versus normal controls or other dementias, or in predicting conversion from MCI to AD, is high 22 , but the predictive value of these CSF biomarkers in preclinical patients is not as strong, suggesting that these markers may be useful only for confirmation of clinical diagnosis 23 . Blood-based biomarkers are not routinely used in clinical practice but may be more useful because they are easily obtained with less risk of complication in older adults. Studies focusing on Aβ42 or Aβ42/tau ratios derived from blood have been disappointing 24 , but recent studies suggest that assessment of the proteome and metabolome in blood may have more promise. One recent study using plasma identified 18 proteins that discriminated subjects with symptomatic AD from normal control subjects with nearly 90% accuracy and predicted conversion from symptomatic MCI to AD with 91% accuracy 25 . Another cross-sectional study reported 18 plasma biomarkers, many related to inflammation, that correctly classified subjects with symptomatic AD and normal control subjects with a sensitivity and specificity of 85% and an AUC of 93% (ref. 26 ). That biomarker panel was externally validated in a cohort of normal control subjects and subjects with symptomatic AD with a sensitivity and specificity of 80% and an AUC of 85%. To our knowledge, this is the first published report of a blood-based biomarker panel with very high accuracy for detecting preclinical AD. This metabolic panel robustly identifies (with accuracy above 90%) cognitively normal individuals who, on average, will phenoconvert to aMCI or AD within 2–3 years. The accuracy of detection is equal to or greater than that obtained in most published CSF studies 27 , 28 , and blood is easier and cheaper to obtain, making it more useful for screening in large-scale clinical trials and for future clinical use. This biomarker panel requires external validation using similarly rigorous clinical classification before further development for clinical use. Such additional validation should be considered in a more diverse demographic group than our initial cohort. We consider our results a major step toward the NIA-AA (National Institute on Aging and Alzheimer's Association) consensus statement mandate for biomarkers of preclinical AD 2 . Methods Neurocognitive methods. The University of Rochester Research Subjects Review Board and the University of California, Irvine Institutional Review Board each approved a common research protocol for this investigation. The content of the informed consent forms was thoroughly discussed with subjects at the time of entry into the study, and verbal and written consent was obtained from all subjects, including consent for serial neuropsychological testing and blood draws for biomarker evaluation. A total of 525 volunteers participated in this study as part of the Rochester/Orange County Aging Study, an ongoing natural history study of cognition in community-dwelling older adults ( Supplementary Note ). All participants were community-dwelling older adults from the greater Rochester, NY, and Irvine, CA, communities. Participants were recruited through local media (newspaper and television advertisements), senior organizations and word of mouth.
Inclusion criteria were age 70 or older, proficiency with written and spoken English, and the corrected vision and hearing necessary to complete the cognitive battery. Participants were excluded for the presence of known major psychiatric or neurological illness (including Alzheimer's disease or MCI, cortical stroke, epilepsy and psychosis) at the time of enrollment; current or recent (<1 month) use of anticonvulsants, neuroleptics, HAART, antiemetics or antipsychotics for any reason; and serious blood diseases, including chronic abnormalities in complete blood count and anemia requiring therapy and/or transfusion. Briefly, we prospectively followed participants with yearly cognitive assessments and collected blood samples following an overnight fast (withholding of all medications) ( Supplementary Note ). At enrollment, each participant completed detailed personal, medical and family history questionnaires. At baseline and at each yearly visit, participants completed measures assessing activities of daily living, memory complaints, and signs and symptoms of depression and were given a detailed cognitive assessment ( Supplementary Table 2 ). For this study, data from the cognitive tests were used to classify our participants into groups for biomarker discovery. We derived standardized scores ( z -scores) for each participant on each cognitive test and computed composite z -scores for five cognitive domains (attention, executive, language, memory and visuoperceptual) ( Supplementary Table 3 ). Normative data for z -score calculations were derived from the performance of our participants on each of the cognitive tests, adjusted for age, education, sex and visit. To reduce the effect of cognitively impaired participants on the mean and s.d., the age-, education-, sex- and visit-adjusted residuals from each domain z -score model were robustly standardized to have median 0 and robust s.d. of 1, where the robust s.d. = IQR/1.35, as 1.35 is the IQR (interquartile range) of a standard normal distribution. We categorized the participants into groups of subjects with incident aMCI or early AD (combined into one category, aMCI/AD), cognitively normal (NC) subjects, and those who converted to aMCI or AD over the course of the study (Converters) based on these composite scores. Impairment was defined as a z -score 1.35 below the cohort median. All participants classified as aMCI met recently revised criteria 29 for the amnestic subtype of MCI 30 . We excluded other behavioral phenotypes of MCI in order to concentrate on the amnestic subtype, which most likely represents nascent AD pathology 31 . All participants with early AD met recently revised criteria for probable AD 32 , with impairment in memory and at least one other cognitive domain. For the aMCI/AD group, scores on the measures of memory complaints (MMQ) and activities of daily living (PGC-IADL) were used to corroborate the research definitions of these states. All Converters had nonimpaired memory at entry to the study ( Z mem ≥ −1.35), developed memory impairment over the course of the study ( Z mem ≤ −1.35) and met criteria for the above definitions of aMCI or AD. To enhance the specificity of our biomarker analyses, NC participants in this study were conservatively defined as having Z mem within ±1 s.d. of the cohort median, rather than simply ≥ −1.35, and all other z -scores ≥ −1.35 ( Supplementary Note ). At the end of year 3 of the study, 202 participants had completed a baseline and two yearly visits.
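To make the standardization concrete, here is a minimal sketch (with made-up residuals; the function name is ours) of scaling adjusted residuals to median 0 and robust s.d. 1 via IQR/1.35, and applying the −1.35 impairment cutoff:

```python
# Minimal sketch of the robust standardization described above: residuals
# from the age/education/sex/visit-adjusted model are scaled so that the
# median is 0 and the robust s.d. (IQR/1.35) is 1. Data are synthetic.
import numpy as np

def robust_z(residuals: np.ndarray) -> np.ndarray:
    med = np.median(residuals)
    q75, q25 = np.percentile(residuals, [75, 25])
    robust_sd = (q75 - q25) / 1.35   # IQR of a standard normal is ~1.35
    return (residuals - med) / robust_sd

memory_residuals = np.random.default_rng(1).normal(loc=0.2, scale=1.4, size=525)
z_mem = robust_z(memory_residuals)
impaired = z_mem <= -1.35            # the study's memory-impairment cutoff
print(f"{impaired.sum()} of {z_mem.size} participants flagged as impaired")
```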
At the third visit, 53 participants met criteria for aMCI/AD and 96 met criteria for NC. Of the 53 aMCI/AD participants, 18 were Converters and 35 had incident aMCI or AD. The remaining 53 participants did not meet our criteria for either group and were not considered for biomarker profiling. Some of these individuals met criteria for nonamnestic MCI, and many had borderline or even above-average memory scores that precluded their inclusion as either aMCI/AD or NC ( Supplementary Fig. 1 ). We matched 53 NC participants to the 53 aMCI/AD participants based on sex, age and education level. We used blood samples obtained at the last available study visit for the 53 aMCI/AD and 53 NC participants for biomarker discovery. We included two blood samples from each of the 18 Converters, one from the baseline visit (Converter pre ), when Z mem was nonimpaired, and one from the third visit (Converter post ), when Z mem was impaired and they met criteria for either aMCI or AD. Thus, a total of 124 samples from 106 participants were submitted for biomarker discovery. We employed internal cross-validation to validate findings from the discovery phase. Blood samples for validation were identified at the end of the fifth year of the study, and all 106 participants included in the discovery phase were excluded from consideration for the validation phase ( Supplementary Fig. 1 ). Cognitive composite z -scores were recalculated based on the entire sample available, and the same procedure and criteria were used to identify samples for the validation phase. A total of 145 participants met criteria for a group: 21 aMCI/AD and 124 NC. Of the 21 aMCI/AD, 10 were Converters. We matched 20 NC participants to the aMCI/AD participants on the basis of age, sex and education level, as in the discovery phase. In total, 40 participants contributed plasma samples to the validation phase, as one aMCI/AD subject's plasma sample could not be used. As before, the 10 Converters also contributed a baseline sample (Converter pre ), for a total of 50 samples. Neurocognitive statistical analyses. The neurocognitive analyses were designed to demonstrate the general equivalence of the discovery and validation samples on clinical and cognitive measures. We used separate multivariate ANOVAs (MANOVAs) to examine discovery and validation group performance on the composite z -scores and on self-reported measures of memory complaints, memory-related functional impairment and depressive symptoms, as well as a global measure of cognitive function. In the first MANOVA, biomarker sample (discovery, validation) was the independent variable and the MMQ, IADL, geriatric depression scale and mini-mental state examination scores were the dependent variables. In the second MANOVA, biomarker sample (discovery, validation) was the independent variable, and the five cognitive domain z -scores ( Z att , Z exe , Z lan , Z mem and Z vis ) were the dependent variables. Significance for the two-sided tests was set at α = 0.05, and we used Tukey's honestly significant difference (HSD) procedure for post hoc comparisons. All statistical analyses were performed using SPSS (version 21). Lipidomics methods. Reagents. Liquid chromatography–mass spectrometry (LC-MS)-grade acetonitrile, isopropanol, water and methanol were purchased from Fisher Scientific (New Jersey, USA). High-purity formic acid (99%) was purchased from Thermo Scientific (Rockford, IL). Debrisoquine, 4-nitrobenzoic acid (4-NBA), Pro-Asn, glycoursodeoxycholic acid and malic acid were purchased from Sigma (St. Louis, MO, USA).
All lipid standards, including 14:0 LPA, 17:0 ceramide, 12:0 LPC, 18:0 lyso PI and PC(22:6/0:0), were procured from Avanti Polar Lipids (USA). Metabolite extraction. Briefly, the plasma samples were thawed on ice and vortexed. For metabolite extraction, 25 μL of plasma sample was mixed with 175 μL of extraction buffer (25% acetonitrile in 40% methanol and 35% water) containing internal standards (10 μL of debrisoquine (1 mg/mL), 50 μL of 4-nitrobenzoic acid (1 mg/mL), 27.3 μL of ceramide (1 mg/mL) and 2.5 μL of LPA (lysophosphatidic acid) (4 mg/mL) in 10 mL). The samples were incubated on ice for 10 min and centrifuged at 14,000 r.p.m. at 4 °C for 20 min. The supernatant was transferred to a fresh tube and dried under vacuum. The dried samples were reconstituted in 200 μL of buffer containing 5% methanol, 1% acetonitrile and 94% water. The samples were centrifuged at 13,000 r.p.m. for 20 min at 4 °C to remove fine particulates, and the supernatant was transferred to a glass vial for ultraperformance liquid chromatography–electrospray ionization quadrupole time-of-flight mass spectrometry (UPLC-ESI-QTOF-MS) analysis. UPLC-ESI-QTOF-MS–based data acquisition for untargeted lipidomic profiling. Each sample (2 μL) was injected onto a reverse-phase CSH C18 column (1.7 μm, 2.1 × 100 mm) using an Acquity H-class UPLC system (Waters Corporation, USA). The gradient mobile phase comprised water containing 0.1% formic acid (Solvent A), 100% acetonitrile (Solvent B) and 10% acetonitrile in isopropanol containing 0.1% formic acid and 10 mM ammonium formate (Solvent C). Each sample was resolved over 13 min, at a flow rate of 0.5 mL/min for the first 8 min and 0.4 mL/min from 8 to 13 min. The UPLC gradient consisted of 98% A and 2% B for 0.5 min, a ramp of curve 6 to 60% B and 40% A from 0.5 min to 4.0 min, a ramp of curve 6 to 98% B and 2% A from 4.0 to 8.0 min, a ramp to 5% B and 95% C from 9.0 min to 10.0 min at a flow rate of 0.4 mL/min, and finally a ramp to 98% A and 2% B from 11.0 min to 13 min. The column eluent was introduced directly into the mass spectrometer by electrospray ionization. Mass spectrometry was performed on a quadrupole time-of-flight (Q-TOF) instrument (Xevo G2 QTOF, Waters Corporation, USA) operating in either negative (ESI − ) or positive (ESI + ) electrospray ionization mode, with a capillary voltage of 3,200 V in positive mode and 2,800 V in negative mode and a sampling cone voltage of 30 V in both modes. The desolvation gas flow was set to 750 l h −1 at a temperature of 350 °C, and the source temperature was set at 120 °C. Accurate mass was maintained by introduction of a lock-spray interface of leucine-enkephalin (556.2771 [M+H] + or 554.2615 [M-H] − ) at a concentration of 2 pg/μL in 50% aqueous acetonitrile and a rate of 2 μL/min. Data were acquired in centroid MS mode over the 50–1,200 m/z mass range for TOF-MS scanning as a single injection per sample, and the batch acquisition was repeated to check experimental reproducibility. For the metabolomics profiling experiments, pooled quality control (QC) samples (generated by taking an equal aliquot of all the samples included in the experiment) were run at the beginning of the sample queue for column conditioning and every ten injections thereafter to assess inconsistencies that are particularly evident in large batch acquisitions, namely retention time drifts and variation in ion intensity over time.
This approach has been recommended and used as standard practice by leading metabolomics researchers 33 . A test mix of standard metabolites was run at the beginning and at the end of the run to evaluate instrument performance with respect to sensitivity and mass accuracy. The overlay of the total ion chromatograms of the quality control samples showed excellent retention time reproducibility. The sample queue was randomized to remove bias. Stable isotope dilution–multiple reaction monitoring mass spectrometry. Liquid chromatography–tandem mass spectrometry (LC-MS/MS) is increasingly used in clinical settings for the quantitative assay of small molecules and peptides such as vitamin D, serum bile acids and parathyroid hormone in Clinical Laboratory Improvement Amendments environments, with high sensitivities and specificities 34 . In this study, targeted metabolomic analysis of plasma samples was performed using the Biocrates Absolute-IDQ P180 kit (BIOCRATES, Life Science AG, Innsbruck, Austria). This validated targeted assay allows for the simultaneous detection and quantification of metabolites in plasma samples (10 μL) in a high-throughput manner. The methods have been described in detail 35 , 36 . The plasma samples were processed as per the manufacturer's instructions and analyzed on a triple-quadrupole mass spectrometer (Xevo TQ-S, Waters Corporation, USA) operating in the MRM mode. The measurements were made in a 96-well format for a total of 148 samples; seven calibration standards and three quality control samples were integrated in the kit. Briefly, the flow injection analysis tandem mass spectrometry (MS/MS) method was used to quantify a panel of 144 lipids simultaneously by multiple reaction monitoring. The other metabolites were resolved on the UPLC and quantified using scheduled MRMs. The kit facilitates absolute quantitation of 21 amino acids, hexose, carnitine, 39 acylcarnitines, 15 sphingomyelins, 90 phosphatidylcholines and 19 biogenic amines. Data analysis was performed using the MetIQ software (Biocrates), and the statistical analyses included the nonparametric Kruskal-Wallis test with follow-up Mann-Whitney U -tests for pairwise comparisons using the STAT pack module v3 (Biocrates). Significance was adjusted for multiple comparisons using Bonferroni's method ( P < 0.025). The abundance was calculated from the area under the curve by normalizing to the respective isotope-labeled internal standard, and the concentration is expressed in nmol/L. Human EDTA plasma samples spiked with standard metabolites were used as quality control samples to assess the reproducibility of the assay. The mean coefficient of variation (CV) for the 180 metabolites was 0.08, and 95% of the metabolites had a CV of <0.15. Sample size considerations. The signal intensity of the metabolites within similar groups was normally distributed with a standard deviation of 1.5. If the true difference between the Converter pre and NC group means is twofold, we would have over 90% power to detect differential metabolites at an overall significance level of 5% with Bonferroni's adjustment using 30 subjects per group. Lipidomics statistical analyses. The m/z features of the metabolites were normalized with a log transformation that stabilized the variance, followed by a quantile normalization to make the empirical distribution of intensities the same across samples 37 .
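For concreteness, here is a minimal sketch of this two-step normalization, applied to a synthetic features × samples matrix; it illustrates the standard procedure rather than reproducing the authors' pipeline:

```python
# Minimal sketch of the normalization described above: a log transform to
# stabilize variance, then quantile normalization so that every sample
# shares the same empirical intensity distribution. Data are synthetic.
import numpy as np

def quantile_normalize(mat: np.ndarray) -> np.ndarray:
    """mat: features x samples. Returns the quantile-normalized matrix."""
    ranks = np.argsort(np.argsort(mat, axis=0), axis=0)   # rank within each sample
    mean_of_sorted = np.sort(mat, axis=0).mean(axis=1)    # reference distribution
    return mean_of_sorted[ranks]                          # map ranks to reference

intensities = np.random.default_rng(2).lognormal(mean=5, sigma=1, size=(4600, 124))
normalized = quantile_normalize(np.log(intensities))
```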
The metabolites were selected among all those known to be identifiable using an ROC-regularized learning technique 38 , 39 based on the LASSO penalty 8 , 9 , as implemented in the R package 'glmnet' 40 , which uses cyclical coordinate descent in a path-wise fashion. We first obtained the regularization path over a grid of values for the tuning parameter λ through tenfold cross-validation. The optimal value of λ obtained by the cross-validation procedure was then used to fit the model, and all features with nonzero coefficients were retained for subsequent analysis. This technique is known to reduce overfitting and to achieve prediction accuracy similar to that of the sparse support vector machine. The classification performance of the selected metabolites was assessed using the area under the ROC curve (AUC). The ROC curve plots the true positive rate (the probability of correctly classifying positive samples) against the false positive rate (the probability of incorrectly classifying negative samples), so the AUC of an ROC plot is a measure of predictive accuracy. To maintain the rigor of independent validation, the simple logistic model with the ten-metabolite panel was used, although a more refined model could yield a greater AUC. The validation phase was performed in a blinded fashion such that the sample group was not known by the statistical team. Accession codes. Lipidomics data were deposited in the European Bioinformatics Institute MetaboLights database under accession code MTBLS72 .
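To make the discovery-then-blinded-validation workflow concrete, the sketch below (synthetic arrays; scikit-learn standing in for the R tooling described above) fits the simple logistic model on discovery samples and scores held-out validation samples by AUC:

```python
# Minimal, illustrative sketch of the evaluation flow: fit a simple logistic
# model on the ten-metabolite panel in the discovery set, then score the
# blinded validation samples by ROC AUC. All arrays are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
X_disc, y_disc = rng.normal(size=(71, 10)), rng.integers(0, 2, 71)  # 53 NC + 18 C-pre
X_val,  y_val  = rng.normal(size=(30, 10)), rng.integers(0, 2, 30)  # 20 NC + 10 C-pre

clf = LogisticRegression(max_iter=1000).fit(X_disc, y_disc)
auc = roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1])
print(f"validation AUC on the ten-metabolite panel: {auc:.2f}")
```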
Researchers have discovered and validated a blood test that can predict with greater than 90 percent accuracy if a healthy person will develop mild cognitive impairment or Alzheimer's disease within three years. Described in Nature Medicine published online today, the study heralds the potential for developing treatment strategies for Alzheimer's at an earlier stage, when therapy would be more effective at slowing or preventing onset of symptoms. It is the first known published report of blood-based biomarkers for preclinical Alzheimer's. The test identifies 10 lipids, or fats, in the blood that predict disease onset. It could be ready for use in clinical studies in as few as two years and, researchers say, other diagnostic uses are possible. "Our novel blood test offers the potential to identify people at risk for progressive cognitive decline and can change how patients, their families and treating physicians plan for and manage the disorder," says the study's corresponding author Howard J. Federoff, MD, PhD, professor of neurology and executive vice president for health sciences at Georgetown University Medical Center. There is no cure or effective treatment for Alzheimer's. Worldwide, about 35.6 million individuals have the disease and, according to the World Health Organization, the number will double every 20 years to 115.4 million people with Alzheimer's by 2050. Howard J. Federoff, M.D., Ph.D., of Georgetown University Medical Center, explains a new blood test that can predict onset of MCI or Alzheimer's. Credit: Georgetown University Medical Center Federoff explains there have been many efforts to develop drugs to slow or reverse the progression of Alzheimer's disease, but all of them have failed. He says one reason may be the drugs were evaluated too late in the disease process. "The preclinical state of the disease offers a window of opportunity for timely disease-modifying intervention," Federoff says. "Biomarkers such as ours that define this asymptomatic period are critical for successful development and application of these therapeutics." The study included 525 healthy participants aged 70 and older who gave blood samples upon enrolling and at various points in the study. Over the course of the five-year study, 74 participants met the criteria for either mild Alzheimer's disease (AD) or a condition known as amnestic mild cognitive impairment (aMCI), in which memory loss is prominent. Of these, 46 were diagnosed upon enrollment and 28 developed aMCI or mild AD during the study (the latter group called converters). In the study's third year, the researchers selected 53 participants who developed aMCI/AD (including 18 converters) and 53 cognitively normal matched controls for the lipid biomarker discovery phase of the study. The lipids were not targeted before the start of the study, but rather, were an outcome of the study. A panel of 10 lipids was discovered, which researchers say appears to reveal the breakdown of neural cell membranes in participants who develop symptoms of cognitive impairment or AD. The panel was subsequently validated using the remaining 21 aMCI/AD participants (including 10 converters), and 20 controls. Blinded data were analyzed to determine if the subjects could be characterized into the correct diagnostic categories based solely on the 10 lipids identified in the discovery phase. 
"The lipid panel was able to distinguish with 90 percent accuracy these two distinct groups: cognitively normal participants who would progress to MCI or AD within two to three years, and those who would remain normal in the near future," Federoff says. The researchers examined if the presence of the APOE4 gene, a known risk factor for developing AD, would contribute to accurate classification of the groups, but found it was not a significant predictive factor in this study. "We consider our results a major step toward the commercialization of a preclinical disease biomarker test that could be useful for large-scale screening to identify at-risk individuals," Federoff says. "We're designing a clinical trial where we'll use this panel to identify people at high risk for Alzheimer's to test a therapeutic agent that might delay or prevent the emergence of the disease."
dx.doi.org/10.1038/nm.3466
Computer
Method to fabricate eco-friendly adsorbents for heavy metal ion removal by 3D printing
Abraham Samuel Finny et al, 3D printable polyethyleneimine based hydrogel adsorbents for heavy metal ions removal, Environmental Science: Advances (2022). DOI: 10.1039/D2VA00064D
https://dx.doi.org/10.1039/D2VA00064D
https://techxplore.com/news/2022-10-method-fabricate-eco-friendly-adsorbents-heavy.html
Abstract Heavy metal contamination is one of the leading causes of water pollution, with known adverse effects on human health and the environment. This work demonstrates a novel custom-made 3D printable eco-friendly hydrogel and a fabrication process that produces stable, biocompatible adsorbents with the ability to capture and remove heavy metals from aqueous environments quickly and economically. The 3D printable ink contains alginate, gelatin, and polyethyleneimine (PEI), which binds heavy metals through primary and secondary amine side chains favoring heavy metal adsorption. The ink's rheological properties are optimized to create mechanically stable constructs, in the form of 3D-printed tablets, fabricated entirely by printing. The optimized tablets have high porosity and accessible surface area with multiple binding sites for heavy metal ion adsorption, while the printing process enables rapid and affordable production with the potential for scale-up. The results demonstrate the contribution of hydrogel composition and rheology in determining the printability, stability, and heavy metal binding characteristics of the hydrogel, and indicate the critical role of the PEI in increasing the stability of the printed construct, in addition to its metal binding properties. The highest removal capacity was obtained for copper, followed by cadmium, cobalt, and nickel ions. In the optimized formulation, each hydrogel tablet removed 60% of the copper from a 100 ppm solution in 5 h and up to 98% in 18 h. For more concentrated solutions (1000 ppm), ∼25% of the copper was removed in 18 h. The printed tablets are stable, robust, and can be produced in a single simple step from inexpensive biomaterials. The ink's tunability, excellent printability, and stability offer a universally applicable procedure for creating hydrogel-based structures for environmental remediation. These unique capabilities open new avenues for manufacturing tailor-made constructs with integrated functionality for water treatment and environmental applications. Environmental significance Hydrogel-based adsorbents offer excellent opportunities for the development of eco-friendly technologies for heavy metal ion removal. In this study, an additive manufacturing technique is reported that provides an easy and effective way to rapidly and reproducibly fabricate structured 3D printed hydrogel-based adsorbents for environmental remediation. The results indicate the importance of achieving multifunctionality through reinforcing the hydrogel with PEI and establish the essential role of hydrogel composition and rheology in determining the printability, stability, functionality and metal binding capacity. An improved understanding of the factors regulating the stability of these hydrogels will allow further development of 3D printable formulations and additive manufacturing techniques for a variety of water treatment and environmental applications. The 3D printing technique described here offers a cost-effective, scalable and facile approach to create tunable adsorbents for environmental remediation that can be used broadly by the environmental community to produce custom-made 3D printed structures for environmental removal and sensing applications. This work can contribute to the development of bio-based methods for environmental remediation to achieve the global WHO goals for clean and sustainable water. 1.
Introduction Globally, heavy metal pollution with metals such as copper, nickel, mercury, cadmium, lead, and chromium is a significant environmental and health hazard, recognized by the World Health Organization (WHO) as a critical problem with significant consequences worldwide. 1,2 Heavy metals cannot be biodegraded; they are toxic and carcinogenic, and the potential for human exposure is high. 3 Electroplating, mining, tanneries, painting, and semiconductors are a few of the industries that are significant sources of heavy metal pollution. Others include livestock manure, fertilizers, herbicides, atmospheric deposition, and irrigation with polluted wastewater. 4 As a result of heavy metal pollution, plants experience oxidative stress, cellular damage, and disruption of respiratory and photosynthetic activity, 5 and the intake of crops contaminated by root transfer from soil to plant tissues can pose substantial health risks for humans. 4,6 Excess metal concentrations in soil alter food quality, leading to various disorders. 7 Increased occurrences of cancer have been attributed to high levels of heavy metals such as copper, cadmium, nickel, and cobalt, and industries that release an excess of these metal ions are known to pollute the environment. 8 Since heavy metals are not usually degraded by natural processes, they can persist in the environment for a long time. Soil, water, and air are directly impacted by heavy metal contamination. Water runoff from factories, agricultural farms, and water treatment facilities in cities, villages, and towns can transport heavy metals, which eventually accumulate in water bodies and river beds and are extremely hazardous to the local ecosystem. 9 Particulate matter containing heavy metals, discharged from both anthropogenic and natural sources, causes corrosion, haze, eutrophication, and even acid rain that can further pollute water bodies and soil. 10 Improper waste disposal and landfills, mining, and drilling can pollute soil, resulting in high heavy metal levels that are then absorbed by living organisms and affect water quality. 11 Copper ions (Cu 2+ ), for example, are used heavily in agriculture as an antifungal agent and can enter the water from the electroplating and mining industries. 12 The discharge of these wastes into streams, lakes, and groundwater reservoirs is responsible for health problems in humans and plants. The average Cu 2+ concentration in soil ranges from 5 to 70 ppm but can reach 100 to 1500 ppm in the soil around vineyards where Cu 2+ treatments are used to reduce the growth of mildew. 13 In sediments found in bays and estuaries, the Cu 2+ concentration is less than 50 ppm, but polluted sediments may contain several thousand ppm. Around 4500 ppm of Cu 2+ was reported in the soil around a Cu/Ni smelter. 14 The presence of high concentrations of heavy metals in polluted environments requires efficient and economical ways to remove them to ensure pollutant-free water. Multiple techniques can be used to remove heavy metals from water. 15,16 These include chemical precipitation, electrochemical reduction, membrane separation, and adsorption. 16 Though chemical precipitation is low-cost and straightforward, 15 the method generates significant waste, leading to secondary pollution. Electrochemical methods are rapid and provide good reduction yields, but the initial capital investment is high, and the technique requires an expensive electrical supply, restricting broad applicability. 15
Due to their low cost and easy operation, adsorption on different materials, ranging from activated carbon to mesoporous and nano-based sorbents, is the most broadly used approach for removing contaminants from wastewater. 17 Despite the many adsorbents available for the removal of heavy metals, only a fraction of them are eco-friendly. Recently, hydrogel-based adsorbents have gained interest as they are inexpensive, made from abundant materials, and effective for heavy metal removal. Compared to other adsorbents, hydrogels can absorb heavy metals within their three-dimensional, highly porous network, thereby providing more sites per unit volume for adsorption, leading to high adsorption efficiency. 18,19 Herein, we introduce 3D printing as an additive manufacturing technique for fabricating stable custom-made biopolymer-based adsorbents incorporating alginate, gelatin, and PEI to form structured hydrogels, in the form of 3D-printed tablets, to remove heavy metal ions, e.g ., copper, cadmium, cobalt, and nickel, from aqueous environments. 3D-printing technology has attracted much interest because of its ability to customize and tailor macrostructures of different materials for a variety of applications. 20 3D printing allows digital computer-aided designs to be quickly turned into 3D objects by printing customized inks directly guided by computer models. 21,22 3D printed hydrogels have been widely investigated in the biomedical field for organ printing and tissue engineering. Despite its potential to create environmentally friendly bio-based sorbents in a customizable way, 3D printing has scarcely been used for environmental remediation applications. 23 Bioprinting through extrusion provides a straightforward, flexible, and inexpensive manufacturing process, in which the “ink” is extruded layer by layer through fine nozzles until a stable and orderly structure is achieved. Therefore, the assembly of biopolymers into hydrogel adsorbents with ordered macrostructures using 3D printing technology is a promising approach for preparing hydrogel adsorbents. Hydrogels made of natural biopolymers such as alginate, chitosan, and gelatin are among the most amenable classes of 3D printable bioink materials. 24 These biopolymers are easily accessible and biocompatible and, owing to their functionalities, have a high sorption capacity for heavy metal binding, making them excellent candidates for environmental remediation. Herein, we report a ternary hydrogel adsorbent system uniquely suited for 3D printing, with excellent shear-thinning properties and thermodynamic stability and the capability to remove metal ions from environmental water samples ( Fig. 1 ). The hydrogel is made of alginate and gelatin, which provide an ideal 3D printable ink composition amenable to printing. Sodium alginate, a hydrophilic polysaccharide, is used for its gel-forming characteristics, while gelatin provides strong crosslinking properties and good thermal stability. 25,26 While alginate is known for its ability to take up metal ions through chelation, electrostatic, and ion exchange interactions, 27 its binding ability is limited, and alginate hydrogels lack stability in aqueous environments. Here we show that polyethyleneimine (PEI) forms a homogenous PEI-based cross-linkable network with alginate, with high chelation ability and stability in aqueous environments. The as-prepared metal-chelating ink can be directly 3D printed into stable constructs in a single-step process.
PEI is a branched cationic polymer rich in primary, secondary, and tertiary amino groups and able to form complexes with metal ions such as Co 2+ , Cu 2+ and Cr 3+ , 28 making this composition an ideal candidate for heavy metal capture and removal. It is worth noting that PEI is water soluble and by itself cannot be used as a heavy metal adsorbent, limiting its applicability for environmental remediation. The 3D printed PEI-based structures reported here are physically stable, with high porosity and a highly accessible surface area providing multiple binding sites for heavy metal removal. This adsorbent offers a practical and cost-effective method to remove metals from aqueous solutions with excellent sorption performance. The optimized printing composition and manufacturing process can be used to establish design principles for fabricating hydrogel-based adsorbents prepared by advanced manufacturing techniques. This work can contribute to the development of bio-based methods for environmental remediation to achieve the global WHO goals for clean and sustainable water. Fig. 1 Representation of the one-step 3D-printing fabrication (A) and removal (B) process of the hydrogel tablets, showing the interaction between PEI and Cu 2+ ions as an example. The hydrogel turns blue in the presence of Cu 2+ due to the chelation process, leading to the formation of cuprammonium complexes within the printed hydrogel. 2. Experimental section 2.1 Materials and methods All chemicals were obtained from commercial sources and used as received. Sodium alginate NF MW 222.00 (Spectrum Chemical), gelatin from porcine skin, gel strength 300, Type A (Sigma Aldrich), and branched polyethyleneimine (PEI) (Aldrich) were used to prepare the 3D printable adsorbent ink. Deionized (DI) water with a resistivity of 18.2 MΩ cm was obtained with a Milli-Q system (Millipore). Copper( ii ) nitrate trihydrate (Acros Organics), nickel( ii ) nitrate hexahydrate (Aldrich), cobalt nitrate (J.T. Baker), lead( ii ) nitrate (Mallinckrodt), and cadmium nitrate (Spectrum Chemical) were used to prepare the corresponding metal ion solutions. An environmental water sample was collected from the banks of the Raquette River, Potsdam, NY. Rheology tests were performed with a Modular Compact MCR 302 rheometer (Anton Paar). An adequate amount of sample was transferred onto the measuring platform of the rheometer, and the testing was performed using a measuring cone (CP50-1, D: 50 mm; angle: 1°). Nanoindentation experiments were conducted using a nanomechanical-testing instrument (TI950 TriboIndenter, Hysitron Inc.) in displacement-control mode with a Berkovich tip attachment at room temperature (RT). Tensile testing was conducted using a Mark-10 model no. BG20 force gauge with a maximum capacity of 100 N and a Mark-10 ESM 301 motorized test stand with a maximum load capacity of 1.5 kN. The heavy metal ion concentrations before and after adsorption were measured using a PerkinElmer AAnalyst 600 atomic absorption spectrometer. The Brunauer–Emmett–Teller (BET) gas sorption measurements were performed at 77 K under nitrogen using a Quantachrome Autosorb IQ analyzer, with prior overnight degassing of the samples at 100 °C. Before analysis, the freshly prepared hydrogel tablets were soaked in ethanol for 6 h (the ethanol was replaced every 2 h), followed by supercritical CO 2 activation using a Tousimis Samdri PVT-3D critical point dryer.
To visualize the structure and porosity of the tablets before and after exposure to metal ions, scanning electron microscopy (SEM) was used to study the morphology of the hydrogels. For SEM analysis, the hydrogel tablets were immersed in liquid nitrogen and, once completely frozen, were lyophilized for 48 hours. These samples were then attached to an SEM holder and analyzed for morphology and porous structure on a JEOL JSM 7900-LV SEM. 2.2 Formulation of 3D printable adsorbent ink The 3D printable adsorbent ink was formulated using an 8% sodium alginate solution prepared and mixed overnight using a Stir-Pak high-speed, low-torque overhead mixer motor (23–2300 rpm; Cole Parmer) along with a Fisher Scientific Isotemp stirring hotplate with a temperature-controller thermocouple. An 8% sodium alginate concentration was used in this study because lower concentrations gave solutions of very low viscosity that were not suitable for printing: at 21 °C, 2% alginate had a viscosity of 6400 mPa s, 4% had 44 800 mPa s, and 8% had 5.40 × 10 5 mPa s. A 10% gelatin solution with a viscosity of 1.21 × 10 6 mPa s was prepared using a vortex mixer. This gelatin solution was transferred to a beaker containing the 8% sodium alginate solution and further mixed for 1 hour to form an alginate–gelatin mixture in a 9 : 1 ratio. The alginate–gelatin combination formed the base ink for 3D printing. The optimized heavy metal removal ink was created by adding 5 ml of PEI solution (50 mg ml −1 ) to 45 ml of the base alginate–gelatin ink. The three-component polymer mixture was mixed for 2 hours to obtain a homogenous 3D printable adsorbent ink. For printing, the ink was transferred into a syringe cartridge fitting the Allevi 2 bioprinter and centrifuged at 2000 rpm for 7 minutes to remove air bubbles. To arrive at an optimal ink formulation, various concentrations of alginate, gelatin, and PEI were tested alone and in different ratios, and their viscosities were characterized with respect to temperature. The rheological properties and the temperature-dependent viscosity changes determine the printability of the ink and the stability of the 3D printed constructs, as discussed in the results section. 2.3 3D printing models and printing process 3D models were created using Autodesk software, primarily AutoCAD and Inventor. The computer-generated 3D models were processed by ‘slicing’ using Repetier-Host and uploaded to the Allevi 2 3D bioprinter. The ink was transferred to 10 ml cartridges (BD Luer-Lok tip), and different dispensing tips (Small Premium Dispensing Tip Kit – JG120DN-NT, Large Dispensing Tip Kit – JG120NK from Jensens) were attached to the cartridge and tested for printability. The cartridge was centrifuged at 2000 rpm with an endcap to remove any air bubbles before printing. Printing parameters were optimized to create the configurations that were finally used to fabricate the 3D printed hydrogel tablets ( Fig. 2 ). The printing procedure was further optimized by adjusting the printing pressure, tip diameter, and the 3D printer application settings until stable printed constructs were obtained. The weight of the tablets was 0.68 ± 0.072 g in the dry state. Fig. 2 3D models created using Autodesk (A) and printability of the alginate–gelatin–PEI ink, showing printing of different shapes: Cu, square, and rectangle (B). Shown here as an example for Cu 2+ ; other structures can be created using similar processing.
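As a back-of-the-envelope check of the recipe above (our own calculation, not from the paper, and assuming ideal volume additivity when the solutions are combined), the final component concentrations in the mixed ink work out as follows:

```python
# Hypothetical sanity check of the ink recipe: 45 ml of a 9:1 mixture of
# 8% w/v alginate and 10% w/v gelatin, plus 5 ml of 50 mg/ml PEI,
# assuming ideal volume additivity on mixing.
base_ml, pei_ml = 45.0, 5.0
total_ml = base_ml + pei_ml

alginate_mg = base_ml * (9 / 10) * 80.0    # 8% w/v = 80 mg/ml
gelatin_mg  = base_ml * (1 / 10) * 100.0   # 10% w/v = 100 mg/ml
pei_mg      = pei_ml * 50.0                # 50 mg/ml stock

for name, mg in [("alginate", alginate_mg), ("gelatin", gelatin_mg), ("PEI", pei_mg)]:
    print(f"{name}: {mg / total_ml / 10:.2f}% w/v in the final ink")
# -> roughly 6.48% alginate, 0.90% gelatin, 0.50% PEI
```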
2.4 Heavy metal ions removal Optimization experiments with the 3D printed hydrogel tablets to determine removal efficiency were first carried out with Cu 2+ . For testing the Cu 2+ removal performance, the printed hydrogel tablets were immersed in Petri dishes containing 40 ml of Cu 2+ solutions of variable concentrations (100, 250, 500, 750 and 1000 ppm). For each Cu 2+ concentration, the tests were performed at least five times. Sample solutions were analyzed for residual Cu 2+ content at the following time intervals: 1, 2, 5, and 18 hours. The 3D printed tablets were then tested with four other heavy metals, cadmium (Cd 2+ ), cobalt (Co 2+ ), nickel (Ni 2+ ), and lead (Pb 2+ ), individually, using a procedure identical to that used for Cu 2+ (one tablet per vial) and exposure to each metal ion for 18 h. Petri dishes were filled with 40 ml of 100 ppm Cd 2+ , Co 2+ , Cu 2+ , Ni 2+ , and Pb 2+ solution, respectively. To determine the selectivity of the method for heavy metals, a study was performed in which a 40 ml mixture containing 100 ppm each of Cu 2+ , Cd 2+ , Co 2+ , and Ni 2+ was exposed to a hydrogel tablet, and the residual concentration of each of these ions was determined using atomic absorption spectroscopy after 18 hours of exposure. 2.5 Application to an environmental water sample The practical utility of the tablets was tested with environmental water samples, and their efficiency was established for Cu 2+ removal. The water was collected from Garner Park, Potsdam, NY, and used as-is. Petri dishes with a single hydrogel tablet were used during this study. The sample (40 ml) was spiked with Cu 2+ to create a 100 ppm solution. After 18 hours of exposure, the samples were analyzed by AAS to determine the residual Cu 2+ content. 3. Results and discussion 3.1 Formulation of 3D printable hydrogel adsorbent ink and printability The formulation of the ink is crucial for developing compositions that are 3D printable and suitable for creating robust, mechanically stable constructs that maintain their functionality for heavy metal removal in aqueous environments. Hydrogels consisting of 3D crosslinked polymer networks are known for their printability, but obtaining robust and reproducible printed constructs requires an optimized composition, viscosity, miscibility, and structure, and a rheological behavior of the formulation that is amenable to printing. 22,29 Multicomponent polymer mixtures have traditionally been used for 3D printing and are known to provide good gelation properties and viscosity, as opposed to single polymeric systems. However, obtaining miscible single-phase mixtures that are thermodynamically stable can be challenging, and their printing and gelation behavior is difficult to predict. The selection of printing materials is thus critical for ensuring compatibility and preventing phase separation and cracking, which are necessary to achieve printability.
To formulate the adsorbent ink, we had the following considerations: (1) the ink should exhibit good pseudoplastic (shear-thinning) behavior to enable extrusion through the printer nozzle and layer-by-layer printing; (2) the base materials of the ink should have tunable viscosity; (3) the composition of the ink, the crosslinking conditions and the printing time should enable solidification/gelation within a reasonable time to yield well-defined, stable constructs; (4) one of the ink components should have specific binding or chelation ability for metal binding; and (5) the printed construct should be mechanically stable in aqueous environments to prevent leaching of captured ions. Because of their biocompatibility and good mechanical properties, alginate and gelatin have been standard choices for the fabrication of hydrogels, particularly for applications in biomedicine and 3D bioprinting. 21,22,25,30–32 In this work, we utilized the interaction between the macromolecular chains of gelatin and the anionic polysaccharide sodium alginate, which leads to the formation of polyelectrolyte complexes through hydrogen bonding and electrostatic interactions between the negative carboxylate groups of alginate and the positive nitrogen groups in gelatin. 33 The hydrogen bonding between the OH groups in alginate and the NH groups in gelatin provides a stable hydrogel network, forming the base for the primary ink ( Fig. 3 ). To further impart functionality for heavy metal removal, PEI, a polymer with abundant primary and secondary amine side chains favoring heavy metal ion adsorption, 28,34 was added to the base ink. In addition to providing metal removal, the branched PEI has the ability to interact with alginate even at low degrees of ionization and to homogenously reinforce alginate hydrogels, further improving their stability and preventing alteration of the pore structure. 35 Despite the commonality of these polymers, there have not been any studies on the printability of these polymer composites, the stabilization of the PEI within alginate hydrogels, or their characteristics for heavy metal removal. To understand the printability and gelation properties, we studied the gelation behavior of each component individually and then in mixtures with respect to temperature. Fig. 3 Mechanism of physical crosslinking of alginate, gelatin, and polyethyleneimine in the formulation of the hydrogel adsorbent. Fig. 4 shows the viscosities of various hydrogel compositions with respect to temperature (°C) when shear rates of 1 s −1 and 2 s −1 , respectively, are applied. At 20 °C and a shear rate of 1 s −1 , the viscosity values were 717.80 Pa s for alginate, 327.88 Pa s for gelatin, 854.36 Pa s for the alginate–gelatin composite, and the highest value, 1148.70 Pa s, for the three-component adsorbent ink. The viscosity of alginate alone does not vary significantly when the temperature is changed. By comparison, the viscosity of the gelatin begins to show a drastic downward trend at 35 °C for both shear rates. When gelatin and alginate are mixed, the viscosity decreases much more slowly with increasing temperature than for alginate or gelatin alone.
For example, at 50 °C, the viscosity of alginate is 394.74 Pa s, that of gelatin is 0.01 Pa s, and that of the alginate–gelatin composite is 307.69 Pa s. When PEI is added, the viscosity of the adsorbent ink decreases more slowly than that of the alginate–gelatin composite, retaining a value of 365.00 Pa s and demonstrating enhanced stability and a reinforcement effect when PEI is used. The same trend was observed when a higher shear rate of 2 s −1 was applied, indicating stability in the composites' rheological response at a higher shear rate. Therefore, the best printable configuration was achieved when combining alginate, gelatin, and PEI. It is interesting to note that in the absence of PEI, when a binary alginate–gelatin composite was used, printing the mixture did not produce a stable construct; the printed hydrogel collapsed due to the lack of crosslinking, further supporting the critical role of PEI in the gelation and printing process. The final composition displayed excellent printability and temperature stability between 20 °C and 50 °C. The BET surface area of the Alg–Gel–PEI was 9.907 m 2 g −1 , comparable with that of other types of hydrogels, such as chitosan- 36 or alginate/PEI-based 37 hydrogels. Fig. 4 The viscosity of alginate, gelatin, alginate–gelatin (Alg–Gel), and alginate–gelatin–PEI (Alg–Gel–PEI) hydrogel composites as a function of temperature (°C) at shear rates of 1 s −1 (black) and 2 s −1 (red). The rheological tests shown in Tables S1 and S2 – ESI † support our hypothesis that the polymers used in this formulation are miscible, that the three can co-exist in a single phase, and that the mixture is thermodynamically stable. 3.2 Mechanical testing of 3D printed constructs The mechanical stability of the 3D printed constructs created using the basic 3D model shown in Fig. 2 was tested by printing the hydrogel tablets, drying them, and subjecting them to mechanical testing by nanoindentation. These tests were performed at twenty different points on the surface of the tablet, and the results were used to calculate the reduced elastic modulus and hardness at each of these points. Fig. 5 shows the corresponding load versus depth (displacement) curve and the reduced elastic modulus and hardness at these points. The reduced elastic modulus represents the elastic deformation that the sample and the indenter tip undergo as the indenter tip indents the sample. The reduced modulus was calculated using the following equation: 1/E* = (1 − v²)/E 1 + (1 − v 1 ²)/E 2 , where E* is the reduced modulus, E 1 the modulus of the specimen, E 2 the modulus of the indenter, v the Poisson's ratio of the specimen, and v 1 the Poisson's ratio of the indenter. Fig. 5 Nanoindentation tests: (A) load vs. depth profile, (B) hardness, and (C) reduced elastic modulus of the dried films prepared from the alginate–gelatin–PEI composite. The load-depth indentation curves of the dried adsorbent tablets revealed the magnitude of the indentation depth, corresponding to the volume of the composite deformed by the applied indentation force. From the graph, it can be clearly inferred that these hydrogel adsorbent composites can withstand high applied forces and therefore can be used as high-performance materials. Based on the very low deviation between the reduced elastic modulus values for the twenty indentations and the hardness values across the points, it can be inferred that the formulation is homogenous and that the distribution of the PEI across the hydrogel is uniform. Further bending, twisting, and tensile tests were conducted, as shown in Fig.
S1 and S2, † confirming the excellent mechanical properties of the fabricated hydrogel tablets. The 3D printed band remained flexible and did not deform even after sequential bending at a 90° angle on each side 50 times (100 times in total) and twisting on each side 50 times (100 times in total) (Fig. S3 † ). The stability was maintained even when a uniaxial force of 100 N was applied along the length of the hydrogel tablet. The next set of characterization experiments was performed to determine the porosity of the printed hydrogel and examine alterations in the pore structure as a result of metal ion complexation. These tests were run with the PEI-reinforced gelatin–alginate hydrogel before and after exposure to Cu2+ ions, selected here as an example. As shown in the SEM images obtained with frozen hydrogels ( Fig. 6A, C and E ), the surface of the hydrogel is highly porous, with micropores distributed evenly across the entire printed surface. The availability of a large number of pores provides a vast surface area for the interactions between Cu2+ and PEI. Incubating the gels with a Cu2+ solution led to a significant change in the hydrogel structure due to crosslinking between the Cu2+ ions and the PEI. Once crosslinking occurs, the pores close and the hydrogel presents a smooth, uniform surface, as shown in Fig. 6B, D and F . Fig. 6 SEM images of the 3D printed hydrogel tablets before (A, C, E) and after (B, D, F) exposure to 100 ppm Cu2+ at 250× (A, B), 1000× (C, D) and 7500× (E, F) magnifications. The characterization tests demonstrate that our printing procedure produces highly porous and mechanically stable hydrogels, which could be used for environmental remediation applications. The significant microstructural alterations in the hydrogel pore structure seen by SEM after metal complexation further support our proposed binding mechanism and demonstrate successful interaction between the metal ions and the hydrogel. Moreover, the tablets were highly stable and maintained their structure and morphology in environmental solutions and at pH conditions ranging from 4 to 10. These materials and optimized printed constructs were further evaluated in batch adsorption tests to assess their performance for the removal of metal ions using laboratory standards and environmental water samples. 3.3 Metal ion removal studies Tablet adsorption capacity, kinetics, and optimization of the incubation time were first determined by batch experiments with Cu2+. Evidence of metal binding was first revealed by the visible microstructural changes in the tablet, noticeable by the naked eye. After exposure to the Cu2+ solution, each tablet showed a narrowing of the edges and an immediate change in color from colorless to blue (Fig. S4 † ). A slight shrinking of the tablet was also observed ( Fig. 7 ). Continued exposure to the Cu2+ solution caused the entire tablet to change color to blue, indicating a time-dependent adsorption process. Despite the morphological changes, the hydrogel remained functionally intact and mechanically stable, and did not dissolve in the aqueous environment during the exposure experiments. This behavior is attributed to the increased stability provided by reinforcing the hydrogel with PEI and to the complexation of PEI with the metal ions, which further increases robustness. These results are in line with the SEM results showing significant microstructural changes in the porous structure following exposure.
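The batch results that follow are quantified as percent removal and as uptake per gram of sorbent, and the isotherm parameters reported later (Table 1) are obtained by fitting these quantities. Below is a minimal sketch of that bookkeeping and of the two fits used in this work; all concentrations, volumes, masses and initial guesses are hypothetical placeholders, not measurements from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Percent removal and uptake q_e from initial (C0) and residual (Ce)
# concentrations, e.g., as measured by AAS. All values are hypothetical
# placeholders, not data from this work.
C0 = np.array([100.0, 250.0, 500.0, 750.0, 1000.0])   # initial Cu2+, mg/L
Ce = np.array([5.0, 55.0, 180.0, 370.0, 560.0])       # residual Cu2+, mg/L
V, m = 0.05, 0.68                                     # volume (L), tablet mass (g)

removal_pct = 100.0 * (C0 - Ce) / C0
qe = (C0 - Ce) * V / m                                # mg Cu2+ per g sorbent

# Site-specific (Langmuir-type) fit: qe = qmax * Ce / (Kd + Ce)
site = lambda c, qmax, Kd: qmax * c / (Kd + c)
(qmax, Kd), _ = curve_fit(site, Ce, qe, p0=[50.0, 500.0])

# Linearized Freundlich fit: log10(qe) = log10(Kf) + (1/n) * log10(Ce)
inv_n, logKf = np.polyfit(np.log10(Ce), np.log10(qe), 1)

print("removal % :", removal_pct.round(1))
print(f"site fit  : qmax = {qmax:.1f} mg/g, Kd = {Kd:.0f} mg/L")
print(f"Freundlich: 1/n = {inv_n:.2f}, Kf = {10**logKf:.2f}")
```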
The high stability in the aqueous environment, the low cost, and the biocompatibility of the hydrogel make these formulations attractive candidates for environmental applications. Fig. 7 (A) Illustration of the Cu2+ capture/removal with a 3D printed ‘CU’ construct showing the as-printed tablet in the dry state (a), immersion into a Cu2+ solution of 500 ppb (b), and capture evidenced by a change in the color of the tablet to a dark blue over the time of exposure (c). Note the ability of the hydrogel tablet to capture Cu2+, essentially concentrating it from the solution. (B) Color changes of square-printed hydrogel tablets before (0 hours) and after exposure to 100, 500, and 1000 ppm for 0, 2, and 18 h, respectively. Fig. 8 shows the results of batch adsorption experiments with the 3D printed tablets exposed to Cu2+ solution for 1, 2, 5, and 18 hours. To evaluate the removal capacity of the alginate–gelatin–PEI tablet over time, one or two tablets were incubated with high Cu2+ concentrations of 100, 500, and 1000 ppm. During the experiments, the hydrogels started turning blue immediately after exposure to the Cu2+ solution, indicating the formation of cuprammonium complexes homogeneously throughout the body of the tablet. Continued exposure caused the entire tablet to turn blue, with higher Cu2+ concentrations inducing a deeper color. Cu2+ removal begins immediately upon immersion of the tablet in the solution and proceeds gradually over time. Based on the AAS results for the residual Cu2+, the amount of Cu2+ captured and removed per single tablet is significant. For example, when only one tablet was exposed to a 100 ppm Cu2+ solution, 18.43% was removed after one hour, ∼50% was removed after 5 hours, and all of the Cu2+ present was removed, i.e., 100% removal, after 18 hours. When more tablets are used, the removal time decreases and Cu2+ is removed more efficiently, owing to the higher sorption capacity provided by the additional capture sites in the two tablets. The same trend was observed when higher Cu2+ concentrations were tested. Fig. 8 Percent removal for Cu2+ vs. time with one (∼0.68 g) hydrogel tablet (black) and two (∼1.47 g) hydrogel tablets (red) over 18 hours of incubation in Cu2+ solutions of the following concentrations: 100 (A), 250 (B), 500 (C), 750 (D), and 1000 (E) ppm. The kinetics of the adsorption process, the total adsorption capacity and the binding isotherms were next evaluated using the Langmuir (site-specific) and Freundlich models. The site-specific binding fit was determined using the equation qe = qe,max·Ce/(Kd + Ce), where qe,max is the maximum equilibrium binding/adsorption of Cu2+ per gram of the 3D printed adsorbent. This value is extrapolated to higher concentrations of Cu2+ to obtain the final value. Kd is the equilibrium dissociation constant, measured as the concentration (mg L−1) of Cu2+ necessary to achieve half-maximum binding/adsorption at equilibrium. The Freundlich fit was performed by plotting log Ce (the equilibrium Cu2+ concentration in mg L−1) vs. log qe (mg of Cu2+ removed per g of adsorbent) using the equation log qe = log Kf + (1/n) log Ce. Because the printed hydrogel takes up a significant amount of water and swells in aqueous environments due to the hydrophilic groups in its backbone, the isotherms were calculated for hydrogels in both the dry and the hydrated state ( Fig. 9 ). The corresponding kinetic parameters are listed in Table 1 . The plot of log Ce vs.
log qe shows a straight line, which suggests a Freundlich isotherm indicative of multilayer adsorption on a heterogeneous surface. Given the combined use of alginate–gelatin and PEI in our system, it is expected that metal ion binding to the printed hydrogel occurs through multipoint interactions, including chelation by the PEI as well as electrostatic binding to the three-dimensional polymeric network. To further quantify binding and estimate a maximum saturation point for Cu2+ removal, we also applied a site-specific binding fit, which allowed us to determine the maximum ion binding for dry and hydrated tablets ( Table 1 ). Fig. 9 Isotherm curves for the adsorbent in a hydrated vs. dehydrated state. Site-specific binding (A, C) and Freundlich fit (B, D). Table 1 Site-specific binding and Freundlich isotherm parameters for Cu2+ removal

Adsorbent state | qe,max (mg g−1), best fit | Kd (mg L−1), best fit | qe,max 95% CI (mg g−1) | Kd 95% CI (mg L−1) | R2 | Freundlich 1/n | Freundlich Kf | Freundlich R2
Hydrated | 43.52 | 800.4 | 28.9 to 109 | 329.4 to 3144 | 0.96 | 1.59 | 0.61 | 0.993
Dehydrated | 633.20 | 1037.0 | 378.8 to 3562 | 370.1 to 9469 | 0.96 | 1.48 | 1.61 | 0.993

The maximum adsorption capacity qe,max of 633.2 mg g−1, measured per gram of dehydrated sorbent, is significantly higher than the adsorption capacities reported previously for other hydrogel-type sorbents based on similar biomaterials such as alginate, chitosan, and cellulose. This very high sorption capacity supports our hypothesis of achieving increased binding efficiency due to favorable adsorption within the porous, functional structure of the 3D printed hydrogel network. A comparison of adsorption capacities, qe,max, for the removal of Cu2+ ions using previously reported hydrogel-type sorbents is shown in Table 2 , indicating superior adsorption by the 3D printed hydrogels. The high binding ability is attributed to the macroscopic pores and the abundant surface functionality available for chelation, favorable for removing metal ions in aqueous environments. Table 2 Comparison of the removal capacity of the 3D printed alginate–gelatin–PEI with those reported for other hydrogel-type adsorbents for Cu2+ removal

Hydrogel composition | Manufacturing process steps and scalability | qe,max (mg g−1) Cu2+ adsorption | Ref.
Magnetic calcium alginate hydrogel beads | Gelation of alginate/Fe2O3 mixed suspension | 159 | 38
Chitosan cellulose hydrogel beads | Droplet addition of chitosan into NaOH through a vibration nozzle system | 53.2 | 39
MXene/alginate composite | A mixture of crosslinked MXene/alginate by freeze-drying | 87.6 | 40
Chitosan/PEI/graphene oxide | Cellulose membranes dip-coated in chitosan–PEI–graphene and glutaraldehyde in a multistep process | NA/90% removal of 20 ppm | 41
Polyacrylamide/graphene oxide/sodium alginate | Multistep free-radical polymerization, neutralization, and crosslinking | 280.3 | 42
PEI/k-carrageenan composite (CG) | PEI/CG mixture dried under vacuum, freeze-dried into a mold | 116 | 43
Alginate/gelatin–PEI | PEI-based hydrogel fabricated by one-step printing, fully automatic, and scalable | 633.2/100% removal of 100 ppm | This study

3.4 Removal of other heavy metals Further investigations were performed to evaluate the adsorption capacity of the 3D printed tablets and demonstrate broad applicability as a universal sorbent for metal ion removal. Experiments were carried out with the adsorbent exposed to 100 ppm solutions of Cd2+, Co2+, Cu2+, Ni2+, and Pb2+. Fig.
10 provides cumulative results showing the percent removal of each metal ion individually and in mixtures after 18 h of incubation. The data indicate that, in addition to Cu2+, the tablets remove ions such as Ni2+, Cd2+, Co2+ and Pb2+, with removal efficiencies of 90.38%, 59.87%, 46.27%, 38.66%, and 6.45% for Cu2+, Ni2+, Cd2+, Co2+ and Pb2+, respectively. Comparative tests indicate the highest removal for Cu2+ and the lowest for Pb2+. Of the metals investigated, Cu2+ and Co2+ cause the hydrogel to change color. In the presence of others, such as Ni2+, the hydrogel remains pale white, despite displaying a relatively high adsorption capacity (∼60%). Fig. 10 Percent removal of heavy metals after 18 hours using five (A) and one (B) tablets exposed to a 100 ppm concentration of a particular heavy metal, individually. Each tablet weighed approximately 0.7 g. (C) Comparative analysis of the heavy metal removal capability of one hydrogel tablet placed in an aqueous solution containing each individual heavy metal (solid bars) vs. a mixture of the four metals (striped bars). (D) Image of hydrogel tablets exposed individually to 100 ppm metal solutions after 18 hours using one tablet. To investigate whether the hydrogel can remove multiple metal ions simultaneously or whether they compete for the same binding sites, tablets were exposed to mixtures of the four highly adsorbing ions and compared to those exposed to each ion individually ( Fig. 10 ). The data confirm the 3D printed tablets' capability to remove heavy metals, with removal capacities similar to those obtained for individual ions even when other ions are present. They also indicate that the co-existing metal ions neither compete for the same binding sites nor displace the highly adsorbing Cu2+ from the PEI binding sites. Therefore, the 3D printed tablets have potential as a universal tool for the capture and removal of a variety of metal ions. The printed tablets are stored in a dry state at room temperature until use, with no sign of degradation over several months, and no special conditions are required for storage. The adsorption capacity remained unchanged after 240 days under the same testing conditions as described, which demonstrates excellent stability. 3.5 Application to an environmental water sample The stability of the 3D printed tablets in actual environmental water was first investigated by incubating the tablets in raw water collected from the Raquette River, Potsdam, NY. When incubated in this water, the hydrogel tablets were robust and did not show any changes over time. The tablets were further tested for the removal of Cu2+ from spiked river water. Because the river water had no detectable levels of Cu2+, it was spiked with 100 ppm Cu2+ for the removal test. As with distilled water, the color of the tablets changed to blue, and the tablets were able to remove 87.30% of the total Cu2+, close to the 90.37% removal obtained for a standard solution prepared in distilled water. This demonstrates that the removal performance of the tablets in a real water sample is comparable to their performance in laboratory samples and shows the feasibility of using these adsorbents with real environmental samples ( Fig. 11 ). Fig. 11 Removal of Cu2+ from an untreated river water sample that was spiked to create a 100 ppm Cu2+ solution, compared to the removal from a standard solution prepared in distilled water containing 100 ppm Cu2+. Experiments were performed with one tablet after 18 h of incubation.
4. Conclusion The rapid advancement of additive manufacturing and 3D printing technologies has opened the door to the fabrication of complex functional products using fully automated manufacturing processes. We have developed a new printable adsorbent ink and a 3D printing method for fabricating hydrogel-based tablets for use in heavy metal remediation. These 3D printed hydrogels remain insoluble in water after metal ion removal, which allows easy recovery of the tablets. Several advantages are achieved using this approach. First, the printing method is scalable, and the materials used to formulate the ink are biodegradable and inexpensive. Second, the ink has intrinsic metal-binding ability through PEI; the tablets exploit the strong crosslinking between PEI and metal ions to capture the ions from the environment. Third, the tablet format provides a simple, practical means of deploying the adsorbent and recovering it after use. The tri-polymer 3D printable ink is composed of sodium alginate, gelatin, and PEI, which together provide an excellent composite material system for capturing heavy metal ions that is easily accessible and environmentally sustainable. Fourth, we have shown that, in addition to Cu2+, the tablets can remove other heavy metal ions such as cadmium, nickel, lead, and cobalt. The tablets and the adsorbent inks can be used for environmental remediation to help treat polluted wastewater, particularly in areas with high heavy metal concentrations in agricultural or industrial settings. The results show that reinforcing the hydrogel with PEI imparts multifunctionality and increases stability, and demonstrate that the hydrogel composition and its rheology are essential factors in determining the printability, stability, and functionality of the hydrogel. An improved understanding of the factors regulating the stability of these hydrogels will allow further development of 3D printable formulations and manufacturing processes for a variety of applications in the environment and other fields. These findings are likely to be important when utilizing these materials and methods as green adsorbents for heavy metals and, potentially, for the removal of other contaminants. This novel and cost-effective approach could help fabricate inexpensive systems for environmental decontamination. Conflicts of interest All authors have approved the final version of the manuscript. The authors declare no competing financial interest. Acknowledgements This work was funded by the National Science Foundation under Grants No. 1561491 and 2141017. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. The authors thank Dr Daniel Andreescu (Chemistry and Biomolecular Science, Clarkson University) for help with the AAS. The authors thank Dr Xiaocun Lu (Chemistry and Biomolecular Science, Clarkson University) for providing access to the rheometer (Anton Paar Modular Compact Rheometer MCR 302) and for help with the rheology tests. The authors also thank Dr Philip Yuya (Department of Mechanical and Aeronautical Engineering, Clarkson University) and Janith Wanniarachchi (Department of Mechanical and Aeronautical Engineering, Clarkson University) for providing access to the nanomechanical testing instrument (TI-950 Triboindenter, Hysitron Inc.) and for help with the nanoindentation tests.
The authors also thank Dr Devon Shipp (Chemistry and Biomolecular Science, Clarkson University) and the Shipp Research Group for providing access to the tensile testing equipment. The authors also thank Mohamed Hassan and Shefa Alomari for helping with the BET analysis.
One of the leading causes of water pollution is heavy metal contamination, which has profound adverse effects on human health and the environment. In response, Clarkson University researchers have developed a cost-effective 3D printing technology to create sustainable bio-based adsorbents that can effectively remove toxic heavy metal ions from contaminated environments. The 3D printing technique offers a cost-effective, scalable and simple approach to creating tunable adsorbents that can be used broadly by the community for environmental remediation and sensing applications. The work, performed in the laboratory of Professor Silvana Andreescu, the Egon Matijevic Chair in Chemistry, was recently featured on the front cover of the journal Environmental Science Advances. Nadia Cheng, a biomolecular science undergraduate, and two chemistry graduate students, Abraham S. Finny and Oluwatossin Popoola, were involved in the project. Nadia started her work on this project as a senior in high school and then as a Clarkson School student. "Our work demonstrates unique capabilities of green and sustainable materials to be additively manufactured and designed so that they have the ability to capture and remove toxic contaminants, providing innovative solutions for next-generation detection and remediation technologies. This work contributes to the development of materials and methods for environmental monitoring and clean up to achieve the global WHO goals for clean and sustainable water," said Professor Andreescu. Abraham S. Finny, PhD, a senior scientist at Waters Corporation and a former member of Prof. Andreescu's lab, says, "Exposure to such innovative, application-focused, and cutting-edge scientific research at Clarkson makes Clarkson graduates excellent problem solvers who go on to become impactful leaders tackling global challenges; another reason why employers find Clarkson graduates highly attractive."
10.1039/D2VA00064D
Physics
Novel material design for undistorted light waves
"Constant-intensity waves and their modulation instability in non-Hermitian potentials." Nature Communications: dx.doi.org/10.1038/ncomms8257 Journal information: Nature Communications
http://dx.doi.org/10.1038/ncomms8257
https://phys.org/news/2015-08-material-undistorted.html
Abstract In all of the diverse areas of science where waves play an important role, one of the most fundamental solutions of the corresponding wave equation is a stationary wave with constant intensity. The most familiar example is that of a plane wave propagating in free space. In the presence of any Hermitian potential, a wave’s constant intensity is, however, immediately destroyed due to scattering. Here we show that this fundamental restriction is conveniently lifted when working with non-Hermitian potentials. In particular, we present a whole class of waves that have constant intensity in the presence of linear as well as of nonlinear inhomogeneous media with gain and loss. These solutions allow us to study the fundamental phenomenon of modulation instability in an inhomogeneous environment. Our results pose a new challenge for the experiments on non-Hermitian scattering that have recently been put forward. Introduction Our intuition tells us that stationary waves, which have a constant intensity throughout an extended region of space, can only exist when no obstacles hamper the wave’s free propagation. Such an obstacle could be an electrostatic potential for an electronic matter wave, the non-uniform distribution of a dielectric medium for an electromagnetic wave or a wall that reflects an acoustic pressure wave. All of these cases lead to scattering, diffraction and wave interference, resulting in the highly complex variation of a wave’s spatial profile that continues to fascinate us in all its different manifestations. Suppressing or merely controlling these effects, which are at the heart of wave physics, is a challenging task, as the quest for a cloaking device 1 or the research in adaptive optics 2 and in wavefront shaping through complex media 3 make us very much aware. Strategies in this direction are thus in high demand and would fall on fertile ground in many of the different disciplines of science and technology in which wave propagation is a key element. A new avenue to explore various wave phenomena has recently been opened up with the realization that waves give rise to very unconventional features when being subject to a suitably chosen spatial distribution of both gain and loss. Such non-Hermitian potential regions 4 , 5 , which serve as sources and sinks for waves, respectively, can give rise to novel wave effects that are impossible to realize with conventional, Hermitian potentials. Examples of this kind, which were meanwhile also realized in experiment 6 , 7 , 8 , 9 , 10 , are the unidirectional invisibility of a gain–loss potential 11 , devices that can simultaneously act as a laser and as a perfect absorber 12 , 13 , 14 and resonant structures with unusual features like non-reciprocal light transmission 10 or loss-induced lasing 15 , 16 , 17 . In particular, systems with a so-called parity-time (PT) symmetry 18 , where gain and loss are carefully balanced, have recently attracted enormous interest in the context of non-Hermitian photonics 19 , 20 , 21 , 22 , 23 , 24 . Inspired by these recent advances, we show here that for a general class of potentials with gain and loss, it is possible to construct constant-intensity wave solutions. Quite surprisingly, these are solutions to both the paraxial equation of diffraction and the nonlinear Schrödinger equation (NLSE). In the linear regime, such constant-intensity waves resemble Bessel beams of free space 25 .
They carry infinite energy, but retain many of their exciting properties when being truncated by a finite-size input aperture. In the nonlinear regime, this class of waves turns out to be of fundamental importance, as they provide the first instance to investigate the best known symmetry breaking instability, that is, the so-called modulational instability (MI) 26 , 27 , 28 , 29 , 30 , 31 , in inhomogeneous potentials. Using these solutions for studying the phenomenon of MI, we find that in the self-focusing case, unstable periodic modes appear, causing the wave to disintegrate and to generate a train of complex solitons. In the defocusing regime, the uniform intensity solution is modulationally unstable for some wavenumbers. Results One-dimensional constant-intensity waves Our starting point is the well-known NLSE. This scalar wave equation encompasses many aspects of optical wave propagation as well as the physics of matter waves. Specifically, we will consider the NLSE with a general, non-Hermitian potential V(x) and a Kerr nonlinearity, iψ_z + ψ_xx + V(x)ψ + g|ψ|²ψ = 0 (equation 1). The scalar, complex-valued function ψ(x, z) describes the electric field envelope along a scaled propagation distance z or the wave function of a matter wave as it evolves in time. The nonlinearity can either be self-focusing or defocusing, depending on the sign of g. For this general setting, we now investigate a whole family of recently introduced potentials V(x) (ref. 32), which are determined by the following relation, V(x) = W²(x) − i dW(x)/dx (equation 2), where W(x) is a given real function. In the special case where W(x) is even, the actual optical potential V(x) turns out to be PT-symmetric, since V(x) = V*(−x). We emphasize, however, that our analysis is also valid for confined, periodic or disordered potentials W(x), which do not necessarily lead to a PT-symmetric form of V(x) (but for which gain and loss are always balanced, since ∫V_I(x)dx = −∫W′(x)dx = 0 in the case of localized or periodic potentials). For the entire non-Hermitian family of potentials that are determined by equation 2 (see Methods), we can prove that the following analytical and stationary constant-intensity wave is a solution to the NLSE in equation 1, ψ(x, z) = A exp[i∫₀ˣ W(x′)dx′ + i gA²z] (equation 3), notably with a constant and real amplitude A. We emphasize here the remarkable fact that this family of solutions exists in the linear regime (g = 0) as well as for arbitrary strength of nonlinearity (g = ±1). Under linear conditions (g = 0), the constant-intensity wave given by equation 3 is one of the radiation eigenmodes (not confined) of the potential with propagation constant equal to zero (non-zero propagation constants are obtained by adding a constant term to the potential in equation 2). Another interesting point to observe is that the above solutions exist only for non-Hermitian potentials, since for W(x)→0 we also have V(x)→0. Therefore, these families of counterintuitive solutions are a direct consequence of the non-Hermitian nature of the involved potential V(x) and as such exist only for these complex structures with gain and loss. The fact that such constant-intensity waves are a direct generalization of the fundamental concept of free-space plane waves to complex environments can be easily understood by setting W(x) = c₁ = const., with c₁ real. In this case, the potential V(x) = c₁² corresponds to a bulk dielectric medium, for which the constant-intensity waves reduce to the plane waves of homogeneous space, ψ = A exp[i(c₁x + gA²z)].
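Equations 1–3 can be checked numerically in a few lines: if V is built from any smooth real W via equation 2, the constant-intensity profile of equation 3 leaves a vanishing residual in the stationary part of equation 1 (which reduces to ψ_xx + Vψ = 0). A minimal sketch follows; the W(x) used here is an arbitrary test profile, not one from the paper.

```python
import numpy as np

# Numerical check that psi = A*exp(i*Int W dx + i*g*A^2*z) solves the
# stationary part of eq. (1) when V = W^2 - i*dW/dx (eq. 2).
# W(x) is an arbitrary smooth test profile, chosen only for illustration.
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
A, g = 1.0, 1.0

W = 0.5 * np.tanh(x)                        # hypothetical real W(x)
V = W**2 - 1j * np.gradient(W, dx)          # eq. (2)

theta = np.cumsum(W) * dx                   # Int_0^x W dx (up to a constant phase)
psi = A * np.exp(1j * theta)                # z = 0 slice of eq. (3)

# Substituting eq. (3) into eq. (1) leaves psi_xx + V*psi = 0 to verify:
psi_xx = np.gradient(np.gradient(psi, dx), dx)
residual = psi_xx + V * psi
print("max |residual| =", np.abs(residual[50:-50]).max())  # ~0 up to O(dx^2)
```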
It can also be shown that the potential W(x) determines the power flow in the transverse plane, which physically forces the light to flow from the gain to the loss regions. In particular, the transverse normalized Poynting vector, defined as S = (i/2)(ψ ∂ψ*/∂x − ψ* ∂ψ/∂x), takes on the following very simple form: S = A²W(x). To illustrate the properties of such constant-intensity solutions, we consider one-dimensional potentials (not counting the direction of propagation z) in which W(x) is generated from Hermite polynomials of order n with amplitude B. The results for vanishing nonlinearity (g = 0) and n = 1, B = 0.5 are shown in Fig. 1 . Note that the corresponding localized optical potential V(x) is not PT-symmetric ( Fig. 1a ) and physically describes a waveguide coupler with optical gain in the middle and lossy arms in the evanescent region around it. If the initial beam is not designed to have the correct phase (as given by equation 3) but is instead ψ(x, 0) = A, then the light quickly diffracts towards the gain region, as can be seen in Fig. 1b . In Fig. 1c,d , we show the results for the constant-intensity solutions with the correct phase, where diffraction is found to be strongly suppressed. Similar to the diffraction-free beams 25 , we find that the wider the truncation aperture at the input facet, z = 0, the larger the propagation distance after which the beam starts to diffract (compare Fig. 1c with Fig. 1d ). Figure 1: Constant-intensity waves in a linear waveguide coupler. ( a ) Real part (green line) and imaginary part (black line) of the complex potential V(x) satisfying equation 2 (blue filled regions depict loss, whereas the red one depicts gain). ( b ) Evolution of a constant amplitude without the correct phase at the input at z = 0. ( c , d ) Spatial diffraction of the truncated constant-intensity solution satisfying the correct phase relation of equation 3. Two different input truncations are shown for comparison. The lines in the x−z planes of ( b , c , d ) around x = 0 depict the real refractive index of the potential as shown in a . Note the different vertical axis scale in b . Two-dimensional constant-intensity waves Similar constant-intensity solutions can also be derived in two spatial dimensions x, y. The family of these complex potentials V(x, y) and the corresponding constant-intensity solutions ψ(x, y, z) of the two-dimensional NLSE are V(x, y) = W_x² + W_y² − i(∂W_x/∂x + ∂W_y/∂y) (equation 5) and ψ(x, y, z) = A exp[i∫_C (W_x dx′ + W_y dy′) + i gA²z] (equation 6), with W_x, W_y being real functions of x, y and C being any smooth open curve connecting an arbitrary point (a, b) to any different point (x, y). As in the one-dimensional case, these solutions are valid in both the linear and the nonlinear domain. For the particular case of irrotational flow W_x = cos x sin y, W_y = cos y sin x, the resulting periodic potential V(x, y) is that of an optical lattice with alternating gain and loss waveguides. The imaginary part of such a lattice is shown in Fig. 2a . In Fig. 2b , we display the diffraction of a constant-intensity beam with the correct phase (as in equation 6) launched onto such a linear lattice (g = 0) through a circular aperture. As we can see, the beam maintains its constant intensity over a remarkably long distance. In Fig. 2c , we present the corresponding transverse Poynting vector, defined as S⊥ = (i/2)(ψ∇⊥ψ* − ψ*∇⊥ψ), illustrating that the wave flux follows stream-line patterns from the gain to the loss regions. Once the finite beam starts to diffract, this balanced flow is disturbed and the waves are concentrated in the gain regions.
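Propagation plots such as those in Figs. 1 and 2 are typically generated with a split-step Fourier scheme (the Methods of this paper mention a spectral integrating-factor approach for the NLSE). Below is a minimal one-dimensional sketch for equation 1 under such a scheme; the W(x), aperture, grid and step parameters are arbitrary demonstration values, not those used for the figures.

```python
import numpy as np

# Split-step Fourier propagation of eq. (1): i psi_z + psi_xx + V psi + g|psi|^2 psi = 0.
# Dispersion (psi_xx) is applied in k-space; potential and Kerr terms in x-space.
# W(x), amplitude and step sizes are arbitrary demonstration values.
N, L = 2048, 80.0
x = (np.arange(N) - N // 2) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

W = 0.5 * x * np.exp(-x**2 / 8)            # hypothetical localized W(x)
V = W**2 - 1j * np.gradient(W, L / N)      # eq. (2)
g, A, dz, steps = 0.0, 1.0, 0.002, 5000    # linear case (g = 0)

theta = np.cumsum(W) * (L / N)
psi = A * np.exp(1j * theta) * np.exp(-(x / 30)**10)  # truncated by a wide aperture

half = np.exp(-1j * k**2 * dz / 2)         # half-step of the dispersion operator
for _ in range(steps):
    psi = np.fft.ifft(half * np.fft.fft(psi))
    psi *= np.exp(1j * dz * (V + g * np.abs(psi)**2))  # gain/loss enter via Im(V)
    psi = np.fft.ifft(half * np.fft.fft(psi))

print("spread of |psi|^2 after propagation:", np.std(np.abs(psi)**2))
```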
Figure 2: Constant-intensity waves in a two-dimensional linear optical lattice. ( a ) Imaginary part of the complex potential V(x, y) discussed in the text. Red and blue regions correspond to gain and loss, respectively. ( b ) Iso-contour of the beam intensity launched onto the potential in a through a circular aperture of radius ∼40 λ₀, where λ₀ is the free-space wavelength. Also shown are three transverse intensity plots (from bottom to top) at z = 0, z = 5, z = 10. ( c ) Transverse power flow pattern (indicated by arrows) of the beam at z = 5. Modulation instability of constant-intensity waves Quite remarkably, the above diffraction-free and uniform-intensity waves are also solutions of the NLSE for both the self-focusing and the defocusing case. This allows us to study the modulation instability of such solutions under small perturbations. In particular, we are interested in understanding the linear stability of the solutions of equation 1 of the form ψ(x, z) = [A + ε(F_λ(x)e^{iλz} + G_λ*(x)e^{−iλ*z})]e^{iθ(x,z)}, where the phase function is θ(x, z) = gA²z + ∫W(x)dx. This expression describes the stationary constant-intensity wave under the perturbation of the eigenfunctions F_λ(x) and G_λ(x), with ε << 1. The imaginary part of λ measures the instability growth rate of the perturbation and determines whether a constant-intensity solution is stable (λ ∈ ℝ) or unstable (λ ∈ ℂ). To leading order in ε, we obtain a linear eigenvalue problem for the two-component perturbation eigenmodes φ_λ(x) ≡ [F_λ(x) G_λ(x)]^T, the eigenvalues of which are λ. This eigenvalue problem and the operator matrix are defined in the Methods section. So far the presented MI analysis is general and can be applied to any real W(x) (periodic or not). To be more specific, we now apply this analysis to study the MI of constant-intensity waves in PT-symmetric optical lattices 19 , 20 , assuming that W(x) is a periodic potential with period α. In particular, we consider the example of a PT-symmetric photonic lattice characterized by real parameters V₀ and V₁ (the resulting optical potential and the corresponding constant-intensity solution are given in the Methods section). For all the subsequent results we will always assume (without loss of generality) that V₀ = 4 and V₁ = 0.2. It is important to note here that for these parameters our PT-lattice V(x) is in the so-called ‘unbroken PT-symmetric phase’ with only real propagation constants (see Methods). In the broken phase, some of these eigenvalues are complex and the instabilities due to nonlinearity are physically expected. As W(x) is periodic, we can expand the perturbation eigenvectors φ_λ(x) in a Fourier series and construct numerically the band structure of the stability problem (different from the physical band structure of the optical lattice). Based on the above, the Floquet–Bloch theorem implies that the eigenfunctions φ_λ(x) can be written in the form φ_λ(x) = φ(x, k)e^{ikx}, where φ(x, k) = φ(x + α, k), with k being the Bloch momentum of the stability problem (see Methods). The corresponding results are illustrated in Fig. 3a,b for a self-focusing nonlinearity (g = 1) and for different values of the amplitude A. More specifically, we show the instability growth rate |Im{λ(k)}| as a function of the Bloch wavenumber k of the perturbation eigenvector in the first half Brillouin zone.
We see that the constant-intensity waves are linearly unstable for any value of the Bloch momentum of the imposed perturbation and that instability band gaps form due to the periodic nature of the imposed perturbations. The different bands are illustrated in Fig. 3a,b with different colours. Figure 3: Modulation instability diagrams for self-focusing and defocusing nonlinearity. Growth rate of the instability |Im{λ(k)}| as a function of the Bloch momentum (half of first Brillouin zone), for self-focusing nonlinearity and amplitudes ( a ) A = 0.5, ( b ) A = 1 and for defocusing nonlinearity ( c ) A = 1 and ( d ) A = 2. Different colours in a and b denote different instability bands. The situation is different for the defocusing case (g = −1), for which the results are presented in Fig. 3c,d . For some values of k, the constant-intensity solutions are linearly stable, and their instability dependence forms bands reminiscent of the bands appearing in conventional MI results for bulk or periodic potentials 26 , 29 , 30 , but quite different and profoundly more complex. To understand the physical consequences of such instabilities and how they lead to filament formation, we have performed independent numerical simulations of the dynamics of the constant-intensity solutions against specific perturbations, with the results shown in Fig. 4 . More specifically, we examine the intensity evolution of a constant-intensity solution when it is perturbed by a specific Floquet–Bloch stability mode. In other words, at the input of the waveguide structure at z = 0, we have ψ(x, 0) = [A + ε(F_λ(x) + G_λ*(x))]e^{iθ(x,0)}, with phase θ(x, 0) = ∫W(x)dx, and we want to know whether the linear stability analysis captures the exponential growth of the imposed perturbations correctly. For the considered PT-symmetric lattice with self-focusing nonlinearity, we examine the nonlinear dynamics of the constant-intensity solution and the result is presented in Fig. 4a . For a perturbation eigenmode with Bloch momentum k = 0 and A = 1, ε = 0.01, we can see from Fig. 3b that Im{λ(0)} ∼ 1. Therefore, we can estimate the growth for a propagation distance of z = 5 to be around |1 + 0.01·e^{1·5}|² ∼ 6.1, which agrees very well with the dynamical simulation of Fig. 4a . Similarly, for the defocusing nonlinearity ( Fig. 4b ), and for parameters k = 0.22 and A = 2, ε = 0.001, we estimate the growth for a propagation distance z = 35 to be around |2 + 0.001·e^{0.046·35}|² ∼ 4.02, which matches very well with the numerical propagation result of Fig. 4b . Figure 4: Perturbed constant-intensity waves propagating in nonlinear media. Numerical results for the intensity evolution of a constant-intensity wave for ( a ) a self-focusing nonlinearity (g = 1) with parameters k = 0, A = 1, ε = 0.01, and ( b ) for a defocusing nonlinearity (g = −1) with parameters k = 0.22, A = 2, ε = 0.001. The peak values are indicated on the vertical axes and match very well with the results of our perturbation analysis. Discussion Symmetry breaking instabilities belong to the most fundamental concepts of nonlinear science. They lead to many rich phenomena, such as pattern formation, self-focusing and filamentation, to name just a few. The best known symmetry breaking instability is the MI. In its simplest form, it accounts for the break-up of a uniform intensity state due to the exponential growth of random perturbations under the combined effect of dispersion/diffraction and nonlinearity. Most of the early work on MI has been related to classical hydrodynamics, plasma physics and nonlinear optics.
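For orientation, in the homogeneous limit (V = 0) this break-up has a classic closed-form growth rate: with the normalization of equation 1, a plane wave of amplitude A in a focusing medium is unstable to transverse wavenumbers k² < 2gA², with gain Im λ = |k|√(2gA² − k²). The sketch below evaluates this textbook bulk result; it is not the lattice band structure of Fig. 3, which is far richer.

```python
import numpy as np

# Textbook MI gain for the homogeneous NLSE  i psi_z + psi_xx + g|psi|^2 psi = 0:
# a plane wave A*exp(i g A^2 z) perturbed at wavenumber k grows with rate
#   Im(lambda) = |k| * sqrt(2 g A^2 - k^2)  for k^2 < 2 g A^2  (focusing g > 0);
# for defocusing g < 0 the plane wave is modulationally stable in the bulk.
def mi_gain(k, A=1.0, g=1.0):
    s = 2.0 * g * A**2 - k**2
    return np.abs(k) * np.sqrt(np.maximum(s, 0.0))

k = np.linspace(0.0, 2.0, 9)
print(np.round(mi_gain(k), 3))                 # gain band closes at k = sqrt(2)*A
print("peak gain g*A^2 at k = A*sqrt(g):", mi_gain(1.0))  # = 1.0 for A = g = 1
```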
Soon thereafter, it was realized that the idea of MI is in fact universal and could exist in other physical systems. For example, spatial optics is one particular area that provides a fertile ground where MI can be theoretically modelled (mainly within the framework of the NLSE) and experimentally realized. Indeed, temporal MI has been observed in optical fibres, as has its spatial counterpart in nonlinear Kerr, quadratic and biased photorefractive media, with both coherent and partially coherent beams, and in discrete waveguide arrays 26 , 27 , 28 , 29 , 30 , 31 . Up to now, most of the mathematical modelling of MI processes has been focused on wave propagation in homogeneous nonlinear media, where an exact constant-intensity solution of the underlying governing (NLSE-type) equations can be obtained. In this context, inhomogeneities are considered problematic, as they impose severe conceptual limitations that hinder one from constructing a constant-intensity solution, a necessary condition to carry out the MI analysis. Several directions have been proposed to bypass this limitation. They can be organized into three distinct categories: (i) the tight-binding approach, in which case the NLSE in the presence of an external periodic potential is replaced by its discrete counterpart that, in turn, admits an exact plane wave solution (a discrete Floquet–Bloch mode), (ii) MI of nonlinear Bloch modes and (iii) direct numerical simulations using a broad beam as an initial condition whose nonlinear evolution is monitored. However, none of these alternatives amounts to true MI. We overcome such difficulties by introducing the above family of constant-intensity waves, which exists in a general class of complex optical potentials. These types of waves have constant intensity over all space despite the presence of non-Hermitian waveguide structures. They also remain valid for any sign of the Kerr nonlinearity and thus allow us to perform a modulational stability analysis for non-homogeneous potentials. The most appropriate context to study the MI of such solutions is that of PT-symmetric optics 6 , 7 , 8 , 9 , 10 , 11 , 14 , 19 , 20 , 21 , 22 , 24 . We find that in the self-focusing regime, the waves are always unstable, whereas in the defocusing regime the instability appears for specific values of the Bloch momenta. In both regimes (self-focusing, defocusing), the constant-intensity solutions break up into filaments following a complex nonlinear evolution pattern. We expect that our predictions can be verified by combining recent advances in shaping complex wave fronts 3 with new techniques to fabricate non-Hermitian scattering structures with gain and loss 7 , 8 , 9 , 10 . As the precise combination of gain and loss in the same device is challenging, we suggest using passive structures with only loss in the first place. For such suitably designed passive systems 6 , solutions exist that feature a pure exponential decay in the presence of an inhomogeneous index distribution. This exponential tail should be observable in the transmission intensity as measured at the output facet of the system. Another possible direction is that of considering evanescently coupled waveguide systems. Using coupled-mode theory, one can analytically show that our constant-intensity waves also exist in such discrete systems with gain and loss distributed over the waveguide channels. In this case, the constant-intensity waves are not radiation modes but rather supermodes of the coupled system.
With these simplifications, an experimental demonstration of our proposal should certainly be within reach of current technology. Methods Constant-intensity solutions of the non-Hermitian NLSE We prove here analytically that stationary constant-intensity solutions of the NLSE exist for a wide class of non-Hermitian optical potentials (which are not necessarily PT-symmetric). We are looking for solutions of the NLSE of the form ψ(x, z) = f(x)exp(iμz), where f(x) is the complex field profile and μ the corresponding propagation constant, to be found. By substitution of this last relation into equation 1, we get the following nonlinear equation: −μf + f_xx + V(x)f + g|f|²f = 0. We assume a solution of the form f(x) = ρ(x)exp[iΘ(x)], with ρ(x), Θ(x) real functions of position x. Since V(x) = V_R(x) + iV_I(x), the last nonlinear equation can be separated into real and imaginary parts. As a result we get the following two coupled equations for the real and the imaginary part of the complex potential, respectively: ρ_xx − ρΘ_x² + V_R(x)ρ − μρ + gρ³ = 0 (equation 7) and 2Θ_xρ_x + Θ_xxρ + V_I(x)ρ = 0 (equation 8), where Θ_x ≡ dΘ/dx and ρ_x ≡ dρ/dx. By choosing V_R(x) = Θ_x², and by solving equation 8 to get V_I(x) = −Θ_xx − 2Θ_xρ_x/ρ, we can reduce the above system of coupled nonlinear ordinary differential equations to only one, namely ρ_xx − μρ + gρ³ = 0. If we now assume a constant-amplitude solution, namely ρ(x) = A = const., we have the following general solution for any real-valued phase function Θ(x): ψ(x, z) = A exp[iΘ(x) + igA²z], where μ = gA² and V_I(x) = −Θ_xx. By setting W(x) ≡ Θ_x(x), we can write the optical potential, for which the constant-intensity solution exists, as V(x) = W²(x) − i dW/dx, and the constant-intensity solution itself reads ψ(x, z) = A exp[i∫₀ˣ W(x′)dx′ + igA²z]. We can easily see that in the special case where W(x) is even, the actual optical potential V(x) is PT-symmetric. Modulation instability analysis in optical potentials To study the modulation instability of the uniform-intensity states for any given W, we consider small perturbations of the solutions of the NLSE of the form ψ(x, z) = [A + ε(F_λ(x)e^{iλz} + G_λ*(x)e^{−iλ*z})]e^{iθ(x,z)}, where θ(x, z) = gA²z + ∫W(x)dx and ε << 1. Here, F_λ(x) and G_λ(x) are the perturbation eigenfunctions and the imaginary part of λ measures the instability growth rate of the perturbation. By defining the perturbation two-component eigenmode φ_λ(x) ≡ [F_λ(x) G_λ(x)]^T, we obtain, to leading order in ε, a linear eigenvalue problem of the form L φ_λ(x) = λ φ_λ(x) (equation 9), where the operator matrix L is built from the linear operators obtained in the linearization (equations 10–13), involving the second derivative ∂²/∂x², the convective term 2iW(x)∂/∂x and the nonlinear coupling gA². So far the above discussion is general and applies to any (periodic or not) potential W(x) that is real. Properties of the PT-symmetric optical lattice We choose a specific example of a well-known non-Hermitian potential, that is, that of a PT-symmetric optical lattice 19 , 20 . More specifically, for a particular periodic choice of W(x) parameterized by the real constants V₀ and V₁, we get the corresponding optical potential and constant-intensity wave (equations 14 and 15). It is obvious that this potential is PT-symmetric, as it satisfies the symmetry relation V(x) = V*(−x). In order for the constant-intensity solution to be periodic in x with the same period as the lattice, the term V₀ must be quantized, namely V₀ = 0, ±2, ±4, .... This constant term that appears in the potential W(x) results in another constant term in the actual potential V(x) and can be removed (with respect to the NLSE) by a gauge transformation. Even though this is the case, this term is important because it also appears in the real part of V(x).
It determines whether the PT-lattice is in the broken or in the unbroken phase, as regards its eigenspectrum. For the considered parameters, the lattice is below the exceptional point and its eigenvalue spectrum is real. Plane wave expansion method Even though our methodology is general, we apply it to study the modulation instability of constant-intensity waves in PT-symmetric optical lattices. In particular, we consider the periodic W(x) (with period α) that leads to equations 14 and 15. As we are interested in the MIs of the constant-intensity wave solution of the NLSE under self-focusing and defocusing nonlinearities, we want the PT-lattice V(x) to be in the unbroken phase. In the broken phase some eigenvalues are complex and the instabilities are physically expected. That is the reason why we choose (without loss of generality) the parameters V₀ = 4 and V₁ = 0.2, which lead to an ‘unbroken’ spectrum with real eigenvalues. As W(x) is periodic, we can expand the perturbation eigenvectors φ_λ(x) in Fourier series and construct numerically the band structure of the stability problem. At this point, we have to distinguish between the physical band structure of the problem and the perturbation band structure of the stability problem of equation 9. Based on the above, the Floquet–Bloch theorem implies that the eigenfunctions φ_λ(x) can be written in the form φ_λ(x) = φ(x, k)e^{ikx}, where φ(x, k) = φ(x + α, k), with k being the Bloch momentum of the stability problem. Applying the plane wave expansion method, the wavefunctions φ(x, k) are expanded in a Fourier series with two-component coefficients [u_n υ_n]^T and the potential W(x) is expanded with Fourier components W_m (equations 16 and 17), where q = 2π/α is the dual lattice spacing. Substitution of equations 16 and 17 into the eigenvalue problem of equation 9 leads us to a nonlocal system of coupled linear eigenvalue equations (equation 18) for the perturbation coefficients u_n, υ_n and the band eigenvalue λ(k) that depends on the Bloch momentum k, with U_{n,m}(k) = 2[q(n − m) + k]W_m and Ω_n(k) = gA² − (qn + k)². The family of constant-intensity wave solutions of the NLSE is modulationally unstable if there exists a wavenumber k for which Im{λ(k)} ≠ 0, while it is stable if λ(k) is real. For our case of the periodic W(x) introduced above, equation 18 reduces to a system with coefficients a_n(k) = V₁(qn + k), μ_n(k) = Ω_n(k) − V₀(qn + k) and ν_n(k) = Ω_n(k) + V₀(qn + k) (equation 19). Direct eigenvalue method An alternative way (instead of the plane wave expansion method used above) of solving the infinite-dimensional eigenvalue problem of equation 9 is to directly apply the Floquet–Bloch theorem to the eigenfunctions φ_λ(x), employ the Born–von Karman boundary conditions (periodic boundary conditions at the end points of the finite lattice) and construct numerically the band structure of the instability growth for every value of the Bloch momentum. In particular, the eigenfunctions can be written as φ_λ(x) = [u(x)e^{ikx} υ(x)e^{ikx}]^T, where u(x) = u(x + α), υ(x) = υ(x + α). Substituting this form of the perturbation eigenfunctions into equation 9, we get the corresponding eigenvalue problem (equation 20), where the Bloch momentum takes values in the first Brillouin zone, k ∈ [−π/α, π/α], and the operators are those defined by equations 11, 12, 13. By applying the finite difference method, we restrict our analysis to one unit cell, x ∈ [−α/2, α/2], in order to calculate the growth rate of the random perturbations for every value of the Bloch momentum.
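The explicit operator blocks (equations 10–13 and 20) are not reproduced in this text. For illustration only: linearizing equation 1 around ψ = Ae^{iθ} by writing ψ = (A + w)e^{iθ} and keeping terms of order ε yields a 2×2 block eigenproblem with blocks L± = ∂²/∂x² ± 2iW(x)∂/∂x + gA², and the sketch below implements the direct (finite-difference, Bloch boundary condition) method for that block form. This form is our own reconstruction, its sign conventions may differ from the paper's equations, and the lattice W(x) and all parameters are hypothetical.

```python
import numpy as np

# Direct (finite-difference) Bloch stability eigensolver for the linearization
# of eq. (1) around psi = A*exp(i*theta). Assumed block form (reconstructed,
# not taken verbatim from the paper; signs may differ):
#   lam*F =  L_plus F  + g*A^2 * G,     L_pm = d2/dx2 +- 2i W(x) d/dx + g*A^2
#   lam*G = -(L_minus G + g*A^2 * F)
# Bloch ansatz F,G -> e^{ikx} u(x), v(x) shifts d/dx -> d/dx + ik on one cell.
alpha = np.pi                                # lattice period (illustrative)
N = 128
x = np.linspace(-alpha/2, alpha/2, N, endpoint=False)
dx = alpha / N
A, g = 1.0, 1.0
W = 2.0 + 0.2 * np.cos(2*np.pi*x/alpha)      # hypothetical periodic W(x)

e = np.ones(N)
D1 = (np.diag(e[:-1], 1) - np.diag(e[:-1], -1)) / (2*dx)
D2 = (np.diag(e[:-1], 1) + np.diag(e[:-1], -1) - 2*np.diag(e)) / dx**2
D1[0, -1], D1[-1, 0] = -1/(2*dx), 1/(2*dx)   # periodic (Born-von Karman) closure
D2[0, -1] = D2[-1, 0] = 1/dx**2

def max_growth(k):
    I = np.eye(N)
    Dk = D1 + 1j*k*I                          # Bloch-shifted first derivative
    Dk2 = D2 + 2j*k*D1 - k**2*I               # (d/dx + ik)^2
    Lp = Dk2 + 2j*np.diag(W) @ Dk + g*A**2*I
    Lm = Dk2 - 2j*np.diag(W) @ Dk + g*A**2*I
    M = np.block([[Lp, g*A**2*I],
                  [-g*A**2*I, -Lm]])
    return np.abs(np.linalg.eigvals(M).imag).max()

for k in np.linspace(0, np.pi/alpha, 5):
    print(f"k = {k:5.2f}   max |Im lambda| = {max_growth(k):.4f}")
```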
We have checked explicitly that both approaches, that is, the plane wave expansion method based on equation 19 and the direct eigenvalue analysis based on equation 20, give the same results. Analytical results in the shallow lattice limit In the limit of a shallow optical lattice (where the refractive index difference between the periodic modulation and the background refractive index is very small), one can gain substantial insight into the structure of the unstable band eigenvalues by deriving an approximate analytical expression for λ(k), valid near the Bragg points, based on perturbation theory. For self-focusing nonlinearity, these points are expressed in terms of the characteristic wavenumber k_c = 2A² and the band index n = 1, 2, 3, .... The resulting analytical formulas lead to an excellent match with the numerical approaches in the shallow lattice limit (V₀, V₁ << 1). Complex filament formation To better understand the complex filament formation of a constant-intensity solution in a PT-symmetric lattice for both signs of the nonlinearity, we performed nonlinear wave propagation simulations based on a spectral (fast Fourier) integrating-factor method for the NLSE. The initial conditions used to examine the filament formation were based on the perturbation eigenmode profiles. In particular, at z = 0 we use as the initial field profile the constant-intensity wave perturbed by Bloch eigenfunctions, for specific values of the Bloch momentum k and the constant-intensity wave amplitude A. Additional information How to cite this article : Makris, K. G. et al . Constant-intensity waves and their modulation instability in non-Hermitian potentials. Nat. Commun. 6:7257 doi: 10.1038/ncomms8257 (2015).
Materials that locally amplify or absorb light allow surprising new kinds of light waves – this has now been shown by calculations at TU Wien (Vienna). When a light wave penetrates a material, it is usually changed drastically. Scattering and diffraction lead to a superposition of waves, resulting in a complicated pattern of darker and brighter light spots inside the material. In specially tailored high-tech materials, which can locally amplify or absorb light, such effects can be completely suppressed. Calculations at TU Wien (Vienna University of Technology) have now shown that these materials allow new kinds of light waves, which have the same intensity everywhere inside the material, as if there were no wave interference at all. Due to their unusual properties, these new solutions of the wave equation could be useful for technological applications. Obstacles Change the Wave When a light wave travels through free space, its intensity can be the same everywhere. But as soon as it hits an obstacle, the wave is diffracted. At some points in space, the wave becomes brighter, in other places it becomes darker than it would have been without hitting the object. This is the reason we can see objects that do not emit light by themselves. In recent years, however, experiments have been carried out with new materials which have the ability to modify light in a special way: they can locally amplify light, similar to a laser, or absorb light, like sunglasses do. "When such processes are possible, we have to employ a mathematical description of the light wave which is quite different from the one we use for normal, transparent materials," says Professor Stefan Rotter (TU Wien). "In this case we speak of non-Hermitian media." New Solutions for the Wave Equation Konstantinos Makris and Stefan Rotter from TU Wien, together with Ziad Musslimani and Demetrios Christodoulides from Florida (USA), discovered that this alternative description allows new kinds of solutions for the wave equation. "The result is a light wave with the same brightness at each point in space, just like a wave in free space, even though it travels through a complex, highly structured material", says Konstantinos Makris. "In some sense, the material is completely invisible to the wave, even though the light passes through the material and interacts with it." The new concept is reminiscent of so-called "meta-materials", which have been created in recent years. These materials have a special structure, which allows them to diffract light in unusual ways. In certain cases the structure can bend the light around the object, so that the object becomes invisible, much like Harry Potter's invisibility cloak. "The principle of our non-Hermitian materials, however, is quite different", says Stefan Rotter. "The light wave is not bent around the object, but fully penetrates it. The way the material influences the wave is, however, fully cancelled by a carefully tuned interplay of amplification and absorption." In the end, the light wave is exactly as bright as it would have been without the object – at each and every point in space. Several technical problems still have to be solved before such materials can be routinely fabricated, but scientists are already working on that. The theoretical work now published, however, shows that besides meta-materials there is another, extremely promising way to manipulate waves in unconventional ways.
"With our work we have opened a door, behind which we expect to find a multitude of exciting new insights", says Konstantinos Makris.
dx.doi.org/10.1038/ncomms8257
Medicine
Study sheds light on the mysterious evolution of DNA rings in tumors
Rocío Chamorro González et al, Parallel sequencing of extrachromosomal circular DNAs and transcriptomes in single cancer cells, Nature Genetics (2023). DOI: 10.1038/s41588-023-01386-y Journal information: Nature Genetics
https://dx.doi.org/10.1038/s41588-023-01386-y
https://medicalxpress.com/news/2023-05-mysterious-evolution-dna-tumors.html
Abstract Extrachromosomal DNAs (ecDNAs) are common in cancer, but many questions about their origin, structural dynamics and impact on intratumor heterogeneity are still unresolved. Here we describe single-cell extrachromosomal circular DNA and transcriptome sequencing (scEC&T-seq), a method for parallel sequencing of circular DNAs and full-length mRNA from single cells. By applying scEC&T-seq to cancer cells, we describe intercellular differences in ecDNA content while investigating their structural heterogeneity and transcriptional impact. Oncogene-containing ecDNAs were clonally present in cancer cells and drove intercellular oncogene expression differences. In contrast, other small circular DNAs were exclusive to individual cells, indicating differences in their selection and propagation. Intercellular differences in ecDNA structure pointed to circular recombination as a mechanism of ecDNA evolution. These results demonstrate scEC&T-seq as an approach to systematically characterize both small and large circular DNA in cancer cells, which will facilitate the analysis of these DNA elements in cancer and beyond. Main Measuring multiple parameters in the same cells is key to accurately understand biological systems and their changes during diseases 1 . In the case of circular DNAs, it is critical to integrate DNA sequence information with transcriptional output measurements to assess their functional impact on cells. At least three types of circular DNAs can be distinguished in human cells 2 , 3 , 4 , 5 : (1) small circular DNAs (<100 kb) 6 , which have been described under different names including eccDNAs 6 , microDNAs 4 , apoptotic circular DNAs 6 , small polydispersed circular DNAs 7 and telomeric circular DNAs or C-circles 8 ; (2) T cell receptor excision circles (TRECs) 9 ; and (3) large (>100 kb), oncogenic, copy number-amplified circular extrachromosomal DNAs 10 , 11 (referred to as ecDNA and visible as double minute chromosomes during metaphase 12 ). Despite our increasing ability to characterize multiple features in single cells 13 , an in-depth characterization of circular DNA content, structure and sequence in single cells remains elusive with current approaches. In cancer, oncogene amplifications on ecDNA are of particular interest because they potently drive intercellular copy number heterogeneity through their unique ability to be replicated and unequally segregated during mitosis 14 , 15 , 16 , 17 , 18 , 19 . This heterogeneity enables tumors to adapt and evade therapies 2 , 20 , 21 , 22 . Indeed, patients with ecDNA-harboring cancers have adverse clinical outcomes 11 . Recent investigations indicate that enhancer-containing ecDNAs interact with each other in nuclear hubs 17 , 23 and can influence distant chromosomal locations in trans 23 , 24 . This suggests that even ecDNAs not harboring oncogenes may be functional 23 , 24 . Furthermore, we recently revealed that tumors harbor an unanticipated repertoire of smaller, copy number-neutral circular DNAs of yet unknown functional relevance 3 . In this study, we report single-cell extrachromosomal circular DNA and transcriptome sequencing (scEC&T-seq), a method that enables parallel sequencing of all circular DNA types, independent of their size, content and copy number, and full-length mRNA in single cells. We demonstrate its utility for profiling single cancer cells containing both structurally complex multifragmented ecDNAs and small circular DNAs. 
Results scEC&T-seq detects circular DNA and mRNA in single cells Current state-of-the-art circular DNA purification approaches involve three sequential steps, that is, isolation of DNA followed by removal of linear DNA through exonuclease digestion and enrichment of circular DNA by rolling circle amplification 3 , 6 , 25 . We reasoned that this approach may be scaled down to single cells and when combined with Smart-seq2 (ref. 26 ) may allow the parallel sequencing of circular DNA and mRNA. To benchmark our method in single cells, we used neuroblastoma cancer cell lines, which we had previously characterized in bulk populations 3 . We used FACS to separate cells into 96-well plates (Fig. 1a , Supplementary Fig. 1a,b and Supplementary Table 1 ). DNA was separated from polyadenylated RNA, which was captured on magnetic beads coupled to single-stranded sequences of deoxythymidine (Oligo dT) primers, similarly to previous approaches 27 . DNA was subjected to exonuclease digestion, as successfully performed in bulk cell populations in the past, to enrich for circular DNA 3 , 6 , 25 (Fig. 1b ). DNA subjected to PmeI endonuclease before exonuclease digestion served as a negative control 3 . In a subset of cases, DNA was left undigested as an additional control (Fig. 1b ). The DNA remaining after the different digestion regimens was amplified. The amplified DNA was subjected to Illumina paired-end sequencing and in some cases to long-read Nanopore sequencing (Fig. 1a ). The sequence composition of circular DNAs was analyzed and genomic origin was inferred in circularized regions using previously established computational algorithms for circular DNA analysis 3 . Fig. 1: scEC&T-seq enables enrichment and detection of circular DNA in single cells. a , Schematic of the scEC&T-seq method. b , Schematic representation of the experimental conditions and expected outcomes. c , Genome tracks comparing read densities on mtDNA (chrM) in three exemplary CHP-212 cells for each experimental condition tested. Top to bottom, No digestion (purple), 1-day exonuclease digestion (light green), 5-day exonuclease digestion (dark green) and endonuclease digestion with PmeI before 5-day exonuclease digestion (gray). d , Fraction of sequencing reads mapping to mtDNA in each experimental condition in CHP-212 (red) and TR14 (blue) cells. e , Fraction of sequencing reads mapping to circular DNA regions identified by scEC&T-seq in each experimental condition in CHP-212 and TR14 cells. f , Fraction of sequencing reads mapping to circular DNA regions with the endonuclease PmeI targeting the sequence identified by scEC&T-seq in each experimental condition in CHP-212 and TR14 cells. d – f , Sample size is identical across conditions: no digestion ( n = 16 TR14 cells, n = 28 CHP-212 cells); 1-day exonuclease digestion ( n = 37 TR14 cells, n = 31 CHP-212 cells); 5-day exonuclease digestion ( n = 25 TR14 cells, n = 150 CHP-212 cells); and endonuclease digestion with PmeI before 5-day exonuclease digestion ( n = 6 TR14 cells, n = 12 CHP-212 cells). All statistical analyses correspond to a two-sided Welch’s t -test. P values are shown. In all boxplots, the boxes represent the 25th and 75th percentiles with the center bar as the median value and the whiskers representing the furthest outlier ≤1.5× the interquartile range (IQR) from the box. 
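As a concrete illustration of the enrichment metric and statistics used here, the following minimal Python sketch computes the per-cell fraction of reads mapping to chrM (the quantity plotted in Fig. 1d) from samtools idxstats output and compares two digestion conditions with a two-sided Welch's t-test. This is an illustration only, not the authors' pipeline; the file layout is hypothetical.

from pathlib import Path
from scipy import stats

def chrm_read_fraction(idxstats_path):
    """Parse `samtools idxstats` output (columns: chrom, length, mapped,
    unmapped) and return the fraction of mapped reads assigned to chrM."""
    mapped = {}
    for line in Path(idxstats_path).read_text().splitlines():
        chrom, _length, n_mapped, _n_unmapped = line.split("\t")
        mapped[chrom] = int(n_mapped)
    total = sum(mapped.values())
    return mapped.get("chrM", 0) / total if total else 0.0

# Hypothetical layout: one idxstats file per cell and condition, e.g. from
# `samtools idxstats cell.bam > idxstats/5day/cell.txt`
frac_1day = [chrm_read_fraction(p) for p in Path("idxstats/1day").glob("*.txt")]
frac_5day = [chrm_read_fraction(p) for p in Path("idxstats/5day").glob("*.txt")]

# equal_var=False selects Welch's t-test; the returned P value is two-sided
t_stat, p_value = stats.ttest_ind(frac_5day, frac_1day, equal_var=False)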
Source data Full size image To evaluate the performance of our scEC&T-seq method, we first assessed mitochondrial DNA (mtDNA) detection and enrichment because mtDNA is present in all cells, is digested by PmeI and, due to its circularity and extrachromosomal nature, serves as a positive control. A significantly higher percentage of reads mapping to mtDNA was detected after longer exposure of the DNA of single cells to exonuclease ( P < 2.2 × 10 −16 , two-sided Welch’s t -test; Fig. 1c,d and Supplementary Fig. 1c,d ). This was also the case for all other circular DNA elements ( P < 2.2 × 10 −16 , two-sided Welch’s t -test; Fig. 1e ), indicating significant enrichment of circular DNA. Significant enrichment of ecDNA regions, that is, large (>100 kb) circular DNAs containing oncogenes, was observed after 1-day exonuclease digestion ( P = 2.10 × 10 −5 , two-sided Welch’s t -test; Supplementary Fig. 1e ). This enrichment was not as pronounced as that of smaller circular DNAs after prolonged 5-day exonuclease digestion, suggesting that ecDNA may be less stable in the presence of exonuclease compared to smaller circular DNAs, or that small circular DNAs are more efficiently amplified by φ29 polymerase (Supplementary Fig. 1e,f ). PmeI endonuclease incubation before 5-day exonuclease digestion significantly reduced reads mapping to mtDNA by 404.8 fold ( P < 2.2 × 10 −16 , two-sided Welch’s t -test; Fig. 1c,d and Supplementary Fig. 1c ). Similar depletion was observed for reads mapping to circular DNAs containing PmeI recognition sites, confirming specific enrichment of circular DNA through our scEC&T-seq protocol ( P < 2.2 × 10 −16 , two-sided Welch’s t -test; Fig. 1f and Supplementary Fig. 1g,h ). Significant concordance between Illumina- and Nanopore-based detection of circular DNAs suggested reproducible detection independent of sequencing technology (two-sided Pearson correlation, R = 0.95, P < 2.2 × 10 −16 ; Supplementary Fig. 2a–d ). Thus, scEC&T-seq enables the isolation and sequencing of circular DNAs from single cells. The separated mRNA from the same cells was processed using Smart-seq2 (ref. 26 , 27 ) (Fig. 1a and Supplementary Note 1 ). We detected on average 9,058 ± 1,163 (mean ± s.d.) full mRNA transcripts from different genes per cell (Supplementary Fig. 3a–c and Supplementary Table 2 ). Unsupervised clustering separated both cell line populations (Supplementary Fig. 3d,e ). To test whether scEC&T-seq provided high-quality mRNA sequencing data, we assessed cell cycle signature gene expression and classified single cells into three cell cycle phases (G1, S, G2/M; Supplementary Fig 3f ). The cell cycle distributions inferred from scEC&T-seq matched those measured using FACS-based cell cycle analysis, confirming its accuracy (Supplementary Fig. 3g ). Thus, scEC&T-seq not only enables the enrichment and detection of circular DNAs, but also allows parallel measurement of high-quality, full transcript mRNA in single cancer cells. scEC&T-seq detects recurrent ecDNAs in single cells Only circular DNAs conferring a fitness advantage are expected to be clonally present in a cancer cell population 22 . We recently found that tumors on average harbor more than 1,000 individual circular DNAs, most of which are small (<100 kb), lack oncogenes and do not contribute to oncogene amplification 3 . Their intercellular differences, however, remain unexplored and it is still unclear whether small circular DNAs can confer a fitness advantage and are clonally propagated in cancer cells 10 . 
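The recurrence analysis presented next (Fig. 2c,d) reduces to an interval classification followed by a per-category cell count. The sketch below shows this logic under simplified assumptions (half-open intervals, one hypothetical amplicon); it is not the authors' implementation, which operates on 100-kb bins as described in the Methods.

def overlaps(circle, region):
    # Half-open genomic intervals given as (chrom, start, end)
    return circle[0] == region[0] and circle[1] < region[2] and region[1] < circle[2]

def classify(circle, amplicons):
    if circle[0] == "chrM":
        return "chrM"
    if any(overlaps(circle, a) for a in amplicons):
        return "ecDNA"
    return "other"

# Hypothetical bulk-defined amplicon and per-cell circle calls
amplicons = [("chr2", 15_900_000, 16_500_000)]
cells = {
    "cell1": [("chr2", 16_000_000, 16_400_000), ("chrM", 0, 16_569)],
    "cell2": [("chr7", 5_000, 8_000), ("chrM", 0, 16_569)],
}

for category in ("ecDNA", "chrM", "other"):
    n_cells = sum(any(classify(c, amplicons) == category for c in circles)
                  for circles in cells.values())
    print(category, n_cells / len(cells))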
Consistent with our previous reports in bulk populations 3 , the average number of individual circular DNA regions identified using scEC&T-seq varied between 97 and 1,939 (median = 702) per single cell in neuroblastoma cell lines (Fig. 2a ). The circular DNA size distribution and genomic origin were similar between single cells and mirrored the distribution observed in bulk sequencing 3 (minimum = 30 bp, maximum = 1.2 Mb, median = 21,483 bp; Fig. 2a and Supplementary Fig. 4a,b ). All analyzed cells were alive at the time of sorting (Supplementary Fig. 1a,b ) and most (>95%) circular DNAs detected in single cells were larger than apoptotic circular DNAs, suggesting that, contrary to other reports 6 , most circular DNAs were not a result of apoptosis (Fig. 2a and Supplementary Fig. 4a ). Thus, each cancer cell contains a wide spectrum of individual circular DNAs from different genomic contexts. Fig. 2: Oncogene-containing ecDNAs are recurrently identified in neuroblastoma single cells. a , Heatmap displaying the number and length of individual circular DNA regions (<100 kb) identified by scEC&T-seq in CHP-212 and TR14 neuroblastoma single cells ( n = 150 CHP-212 cells, n = 25 TR14 cells; bin size = 500 bp) with density distribution for circular DNA sizes (top) and overall circular DNA counts (right). b , Heatmap of genome-wide circular DNA density in CHP-212 and TR14 neuroblastoma single cells (top: n = 150 CHP-212 cells, bin size = 3 Mb; bottom: n = 25 TR14 cells, bin size = 3 Mb), and genome tracks displaying genome-wide read density from WGS in bulk cell populations. The location of the MYCN gene in chromosome 2 is shown. c , d , Recurrence analysis in CHP-212 ( n = 150) ( c ) and TR14 ( n = 25) ( d ) cells displayed as the fraction of cells containing a detected circular DNA from each circular DNA type. ecDNA was defined as circular DNAs overlapping copy number-amplified regions identified in bulk sequencing (green); mtDNA or chrM (red). ‘Others’ are defined as all other small circular DNAs (blue). Data are presented as the mean ± s.e.m. Full size image As expected, most small circular DNAs did not harbor oncogenes 10 . The overall proportion of small circular DNAs detected recurrently in cells was low (Fig. 2b–d and Supplementary Fig. 4c ). This indicates that only a small subset of small circular DNAs is clonally propagated in cancer cells. In line with their known oncogenic role in cancer and the positive selective advantage they confer, amplified, oncogene-containing ecDNAs were recurrently detected in cells (Fig. 2b–d ), which was validated by FISH (Fig. 2b and Supplementary Fig. 5a–c ). Even though the functional relevance of small circular DNAs cannot be excluded, the observed high subclonality suggests that they do not contribute to cancer cell fitness to the same extent as clonal oncogene-amplifying ecDNA. Complex multifragmented ecDNAs are detectable in single cells We and others recently showed that ecDNAs are complex structures, sometimes containing rearranged fragments from different chromosomes 23 , 28 , 29 , 30 . Considering that scEC&T-seq was able to recurrently detect megabase-sized ecDNAs harboring the oncogenes MYCN , CDK4 or MDM2 (Fig. 2b ), we asked whether scEC&T-seq could provide insights into ecDNA structures. Indeed, scEC&T-seq captured multifragment ecDNAs in almost all single cells, recapitulating the previously described element structures found in bulk populations 23 , 28 (Fig. 3a,b ).
At least one variant-supporting read per ecDNA breakpoint was detectable in approximately 30% of single cells (Supplementary Table 3 ). Further quantification of ecDNA junction-spanning reads and computational structural variant (SV) detection both from short- and long-read sequencing confirmed the interconnectedness of segments (Supplementary Fig. 6a–p and Supplementary Tables 4 and 5 ). Such SVs can lead to fusion transcript expression on ecDNA 3 . Indeed, fusion transcripts could be identified in single cells using scEC&T-seq (Fig. 3c and Supplementary Fig. 7 ). Thus, scEC&T-seq is sufficiently sensitive to detect ecDNA-associated SVs and resulting fusion gene expression in single cells. Fig. 3: scEC&T-seq captures the complex structure of multifragmented ecDNAs in single neuroblastoma cells. a , b , Long- and short-read-based ecDNA reconstructions derived from WGS data in bulk cell populations and read coverage over the ecDNA fragments across single cells in CHP-212 ( n = 150) ( a ) and TR14 ( n = 25) cells ( b ) as detected by scEC&T-seq. Top to bottom, ecDNA amplicon reconstruction, copy number profile, gene annotations, read density over the ecDNA region in merged single cells and coverage over the ecDNA region in single cells (rows). c , Exemplary fusion transcript detected by scEC&T-seq resulting from the rearrangement of chromosomal segments in the CDK4 ecDNA in TR14. Top to bottom, scCircle-seq read coverage over the breakpoint region in merged TR14 single cells (log-scaled), transcript annotations, scRNA-seq read coverage over the fused transcripts in merged TR14 single cells, native transcript representations and fusion transcript representation. The interconnected genomic segments in CDK4 ecDNA that give rise to the fusion gene are indicated by a red dashed line. Full size image Intercellular differences in ecDNA content drive expression differences The unequal mitotic segregation of ecDNA implies that ecDNA copy number can vary greatly between single cells 17 , 22 . In most single cells, multifragment ecDNAs did not differ in structure and composition (Fig. 3a,b ), suggesting that ecDNA is structurally stable in cultured cell lines. As predicted by their binomial mitotic segregation and the conferred strong fitness advantage 2 , 17 , most single TR14 cells contained all three independent oncogene-harboring ecDNAs also detected in bulk populations (Fig. 3b and Fig. 4a ). However, a small number of cells only contained a subset of independent ecDNAs (Fig. 4a–c ). This suggests that ecDNA content variation serves as a source of population heterogeneity. Intriguingly, MDM2 -harboring ecDNAs were detected in all single cells, whereas CDK4 - and MYCN -harboring ecDNAs were absent in some cells (Fig. 4b,c ), suggesting that yet undefined biological principles of ecDNA segregation may exist. Next, we asked whether ecDNA copy number heterogeneity influenced the expression of genes encoded on ecDNA. We confirmed that the distribution of relative ecDNA copy number was consistent with copy number distributions measured using FISH (Supplementary Fig. 8a–h ). Phasing of SNPs suggested that ecDNAs are of mono-allelic origin in each single cancer cell (Supplementary Fig. 9a,b ), confirming previous observation in bulk cell populations 3 . Consistent with copy number-driven differences in gene expression, relative ecDNA copy number was positively correlated with the mRNA read counts of genes contained on ecDNAs in the same single cells (Fig. 4d–h ). 
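The copy number-expression comparisons in Fig. 4e-h amount to a per-cell correlation of circle-derived and transcript-derived read counts. A minimal sketch, with hypothetical count vectors standing in for the per-cell values:

from scipy import stats

ecdna_counts = [120, 450, 300, 80, 510, 260]   # per-cell scCircle-seq reads over the amplicon
mrna_counts = [200, 800, 560, 150, 900, 480]   # per-cell scRNA-seq reads for genes on the amplicon

# Two-sided Pearson correlation, as reported in the figure legends
r, p = stats.pearsonr(ecdna_counts, mrna_counts)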
Even though enhancer interactions in clustered ecDNA may also contribute to intercellular ecDNA expression variability 23 , we provide evidence that ecDNA copy number heterogeneity is a major determinant of intercellular differences in oncogene expression. Fig. 4: Intercellular differences in ecDNA content drive gene expression differences. a , Schematic representation of the three independent ecDNAs identified in TR14: MYCN ecDNA (yellow); CDK4 ecDNA (blue); and MDM2 ecDNA (red). b , UpSet plot displaying the co-occurrence of the three ecDNAs identified in TR14 ( MDM2 , CDK4 , MYCN ) in single cells ( n = 25 TR14 cells). c , Genome tracks with read densities (log-scaled) over reconstructed ecDNA regions in three exemplary TR14 cells showing different ecDNAs detected. d , Violin plots of mRNA expression levels in TR14 and CHP-212 single cells (two-sided Welch’s t -test; P = 0.0038 ( MYCN ), P < 2.2 × 10 −16 ( LPIN1 , TRIB2 , CDK4 , MDM2 , MYT1L )); n = 171 CHP-212 cells, n = 42 TR14 cells. e , f , Pairwise comparison between ecDNA and mRNA read counts from scEC&T-seq over the reconstructed MYCN ecDNA region in CHP-212 single cells (two-sided Pearson correlation, P < 2.2 × 10 −16 , R = 0.86, n = 150 cells) ( e ) and in TR14 single cells (two-sided Pearson correlation, P = 0.0056, R = 0.54, n = 25 cells) ( f ). g , h , Pairwise comparison between ecDNA and mRNA read counts from scEC&T-seq over the reconstructed CDK4 ( g ) and MDM2 ( h ) ecDNAs in TR14 single cells (two-sided Pearson correlation, P = 0.0046, R = 0.55 for CDK4 and P = 0.0019, R = 0.59 for MDM2, n = 25 TR14 cells). Source data Full size image scEC&T-seq detects single-nucleotide variants on ecDNA and mtDNA Single-nucleotide variants (SNVs) are important drivers of intercellular heterogeneity and tumor evolution 31 . Furthermore, SNVs can be tracked in cells, allowing their use for lineage tracing applications 32 . To test whether scEC&T-seq could be used to detect SNVs, we applied SNV detection algorithms on merged single-cell scEC&T-seq data and compared the detected SNVs to those identified in the whole-genome sequences of bulk populations. Most SNVs detected using scEC&T-seq were also detected in whole genomes (>69.5%). Because scEC&T-seq also detects mtDNA (Fig. 2c,d ), we hypothesized that heteroplasmic mitochondrial mutations may enable lineage tracing, as demonstrated in other single-cell assays in the past 32 (Fig. 1c,d and Supplementary Fig. 1c ). Indeed, unsupervised hierarchical clustering by homoplasmic mtDNA variants accurately genotyped cells (Supplementary Fig. 10a ). Heteroplasmic SNVs on mtDNA revealed high intercellular heterogeneity, and unsupervised hierarchical clustering grouped individual single cells into subclones, which may allow lineage tracing (Supplementary Fig. 10b and Supplementary Fig. 11a,b ). Thus, scEC&T-seq can detect heteroplasmic variants in mtDNA and ecDNA, allowing for a wide range of SNV-based applications and analyses, including lineage inference. Distinct pathways are active in cells with high small circular DNA content Whereas the origin and functional consequences of large oncogene-containing ecDNA elements have been studied in some detail in the past 33 , 34 , it is largely unclear how small circular DNAs are formed and how they influence the behavior of cells. Recent work suggests that some small circular elements are formed during apoptosis 6 . Other reports provide evidence for the involvement of aberrant DNA damage repair in their generation 35 .
In line with previous reports 36 , we identified the presence of microhomology at circular breakpoints of small circular DNAs, suggesting that microhomology-mediated repair may be involved in their generation (Supplementary Fig. 12 ). The bimodal size distribution identified in single cells (Fig. 2a ) suggested that at least two types of small circular DNAs exist in cells. Very small circular DNAs (<3 kb) were found in all analyzed single cells (Fig. 2a and Fig. 5a ). No difference was observed in the fraction of very small circular DNAs between cells at different cell cycle phases (Fig. 5b ), raising the question of whether such small circular DNAs can be replicated. To identify the pathways associated with the high contents of these very small circular DNAs, we compared RNA expression of cells with a high relative amount of such small circular DNAs to that of cells with low relative content (Fig. 5a ). Twenty pathways were significantly positively enriched in cell transcriptomes with high very small circular DNA content (Fig. 5c–e and Supplementary Table 6 ). In agreement with previous studies, DNA damage and repair pathways 35 , 37 , 38 , apoptosis 6 and telomere maintenance 39 were significantly enriched in cells with a high relative content of this smaller subtype of circular DNA (Fig. 5c–e ). This demonstrates that scEC&T-seq can help address long-standing questions about the origin and functional consequences of small circular DNAs. Fig. 5: High relative content of small circular DNAs is associated with DNA damage response pathway activation. a , Density plot of relative small circular DNA (<3 kb) content in CHP-212 single cells ( n = 129). For differential expression analyses, cells were divided into two categories: ‘low’ (orange area, bottom 40%) and ‘high’ (purple area, top 40%). b , Violin plot comparing the relative number of small circular DNAs (<3 kb) at different cell cycle phases in CHP-212 (red, n = 129) and TR14 (blue, n = 20) single cells. A two-sided Welch’s t -test was used among the indicated conditions. P values are shown. c , Cellular processes significantly enriched in CHP-212 cells with high relative very small circular DNA content. Adjusted P values and gene counts are shown. d , Gene set enrichment analysis (GSEA) plot of genes involved in DNA repair (adjusted P = 0.0415). e , GSEA plot of genes involved in the cellular response to the DNA damage stimulus (adjusted P = 0.0008). P values were adjusted using the Benjamini–Hochberg method. Full size image Small circular DNA breakpoints frequently overlap with CCCTC-binding factor sites Chromatin conformation and accessibility can influence DNA damage susceptibility 40 . We hypothesized that small circular DNAs may be a product of DNA damage at sites of differential chromatin accessibility or conformation. To test this hypothesis, we measured the relative enrichment of CCCTC-binding factor (CTCF) chromatin immunoprecipitation followed by sequencing (ChIP–seq) peaks and of assay for transposase-accessible chromatin using sequencing (ATAC–seq) peaks in regions of small circular DNAs compared to other sites in the genome. Small circular DNAs detected using scEC&T-seq in single CHP-212 cells and those detected using Circle-seq in the bulk cell populations were used for this analysis (Supplementary Fig. 13a–d ). Intriguingly, circular DNA breakpoints were significantly enriched at CTCF binding sites both in single cells and in bulk cell populations.
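The study performs this randomization test with regioneR in R (see Methods); the sketch below is a simplified Python analogue that conveys the idea: the observed fraction of circle edges falling in CTCF peaks is compared against the same statistic for randomly repositioned circles, yielding an empirical P value. Intervals are reduced to (start, end) pairs on a single hypothetical chromosome.

import random

def edge_overlap_fraction(circles, peaks):
    # Fraction of circle edges (start and end positions) inside any peak
    edges = [e for (start, end) in circles for e in (start, end)]
    return sum(any(p0 <= e < p1 for (p0, p1) in peaks) for e in edges) / len(edges)

def empirical_p(circles, peaks, genome_len, n_perm=1000, seed=0):
    rng = random.Random(seed)
    observed = edge_overlap_fraction(circles, peaks)
    hits = 0
    for _ in range(n_perm):
        # Reposition each circle uniformly at random, preserving its length
        shuffled = []
        for (start, end) in circles:
            s = rng.randrange(genome_len - (end - start))
            shuffled.append((s, s + (end - start)))
        if edge_overlap_fraction(shuffled, peaks) >= observed:
            hits += 1
    return observed, (hits + 1) / (n_perm + 1)

obs, p = empirical_p(circles=[(1_000, 1_600), (9_000, 9_400)],
                     peaks=[(950, 1_050), (5_000, 5_200)],
                     genome_len=1_000_000)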
This enrichment was even more striking considering that regions from which small circular DNAs originated were significantly depleted at sites of high ATAC–seq signals (Supplementary Fig. 13e ). This suggests that CTCF binding sites and non-accessible chromatin, which is abundant at CTCF bindings sites 41 , may be susceptible to breakage and circular DNA formation. To control for background ChIP–seq signals, we measured the enrichment of H3K4me1, H3K27ac and H3K27me3 ChIP–seq peaks at sites of small circular DNA formation. In all cases, small circular DNAs were found at significantly lower frequency at these sites than expected for randomly distributed regions (Supplementary Fig. 13f–h ), confirming the specificity of CTCF enrichment and indicating that sites marked by H3K4me1, H3K27ac and H3K27me3 may be protected from breakage and circularization. Considering the role of CTCF in regulating the three-dimensional structure of chromatin through mediation of chromatin loop formation 41 , our data raise the possibility that DNA breaks during CTCF-mediated loop extrusion may represent a mechanism of small circular DNA formation. scEC&T-seq profiles circular DNA in primary neuroblastomas We next applied scEC&T-seq to single nuclei from two neuroblastomas and live T cells isolated from the blood samples of two patients (Fig. 6a , Supplementary Figs. 14a,b and 15a–t and Supplementary Note 1 ). The number of individual circular DNA elements identified in cancer cells was significantly higher compared to that of normal T cells and cell line cells, suggesting that DNA circularization is more frequent in tumors than in untransformed cells or cells in culture (Fig. 6b ). Circular DNA size distributions and relative genomic content were comparable to those observed in cell lines, suggesting that scEC&T-seq reproducibly captures circular DNA regardless of the input material (Fig. 6b and Supplementary Figs. 4a and 16a ). In agreement with our observations in cell lines, the proportion of recurrently identified small circular DNAs was low (Supplementary Fig. 16b–d ). Large, oncogene-containing ecDNAs, on the other hand, were recurrently identified in tumor nuclei but not in T cells (Fig. 6c and Supplementary Fig. 16b–d ), in agreement with their oncogenic role. MYCN -containing ecDNAs were detectable in almost all cancer nuclei from both patients, which was confirmed with FISH (Supplementary Fig. 16e–g ). As observed in cell lines, intercellular differences in MYCN transcription positively correlated with relative ecDNA content (Supplementary Fig. 16h,i ). Thus, scEC&T-seq can be successfully applied to human tumors. Fig. 6: scEC&T-seq detects circular DNAs in primary neuroblastomas at the single-cell level. a , Schematic diagram describing tumor and blood sample processing. b , Number of individual circular DNA regions normalized by library size detected in primary tumor nuclei ( n = 93 nuclei patient no. 1, n = 86 nuclei patient no. 2), neuroblastoma cell line single cells ( n = 25 TR14 cells, n = 150 CHP-212 cells) and nonmalignant single T cells ( n = 38 patient no. 3, n = 41 patient no. 4). P values were calculated using a two-sided Welch’s t -test and are shown. The boxes in the boxplots represent the 25th and 75th percentiles with the center bar as the median value and the whiskers representing the furthest outlier ≤1.5× the IQR from the box. c , Heatmap of the genome-wide circular DNA density in neuroblastoma primary tumors and normal T cells ( n = 93 patient no. 1, green; n = 86 patient no. 
2, purple; n = 38 patient no. 3, yellow; n = 41 patient no. 4, orange; bin sizes = 3 Mb). The location of the MYCN gene in chr2 is shown. Full size image scEC&T-seq enables inference of ecDNA structural dynamics Recent studies of cancer genomes have described structurally complex ecDNAs 3 , 11 , 18 , 19 , 28 , 29 , 42 ; however, due to the analysis of bulk cell populations, they were limited in their ability to infer structural ecDNA heterogeneity. Both analyzed neuroblastomas contained large and structurally complex MYCN -containing ecDNAs, as confirmed using long-read Nanopore sequencing of the same single nuclei and by whole-genome sequencing (WGS) of bulk cell populations (Fig. 7a and Supplementary Fig. 17a ). Whereas the ecDNA structure in patient no. 1 was so complex that it was not fully computationally reconstructed (Supplementary Fig. 17b ), the MYCN -containing ecDNA in the other patient (patient no. 2) was structurally composed of five individual genomic fragments, all derived from chromosome 2, which were connected by four SVs (nos. 1–4) in a manner that was simple enough to be reliably reconstructed in single cells (Fig. 7a ). We hypothesized that the assessment of intercellular ecDNA structural heterogeneity in this patient could facilitate the inference of ecDNA structural dynamics. Indeed, ecDNA considerably structurally differed between a subset of single cells (Fig. 7a,b ). SV no. 1 was present in all single cells, suggesting it occurred before the other SVs and may represent the initial variant leading to circularization (Fig. 7b–d ). SVs nos. 2–4, on the other hand, were not detected in a subset of cells. Moreover, SV no. 2 and SV no. 3 indicated the presence of a 6-kb deletion and SV no. 4 supported the presence of a larger deletion (approximately 180 kb) on the ecDNA, both of which were present in most but not all single cells (94.2%; Fig. 7c,d ). Analysis of split reads at the breakpoints of SV nos. 2 and 3, that is, the edges of the 6-kb deletion, and coverage across this deletion in single cells, suggested the presence of three different subclonal cell populations we termed subclone nos. 1–3. Clone no. 1 contained an intact ecDNA lacking deletions. Clone no. 2 harbored a mixed population of ecDNAs with and without deletions (Fig. 7b–e ). In clone no. 3, the detected SVs and sequencing coverage indicated the presence of a pure population of ecDNAs harboring both deletions and all SVs (Fig. 7c–e ). The simplest sequence of mutational events that would result in the observed intercellular structural ecDNA heterogeneity starts with a simple excision of an ecDNA containing MYCN and neighboring chromosomal regions, that is, SV no.1 generating ecDNA variant no. 1 found in clone no. 1 (Fig. 7e,f ). This is followed by the fusion of two simple ecDNA no. 1 variants generating a more complex rearranged ecDNA variant no. 2 that includes the small 6-kb deletion and SV nos. 2 and 3 in addition to SV no. 1 (Fig. 7e,f ). Such circular recombination is in agreement with recent models based on WGS 43 . An additional large deletion on this ecDNA would create ecDNA variant 3 with all SV nos. 1–4 and both deletions (Fig. 7e,f ). The predominance of ecDNA variant 3 in these neuroblastoma cells suggests that it may confer a positive selective advantage. Our proof-of-principle demonstration that scEC&T-seq can help infer ecDNA structural dynamics illustrates that scEC&T-seq may facilitate future studies addressing important open questions about the origin and evolution of ecDNA. Fig. 
7: scEC&T-seq profiles intercellular structural ecDNA heterogeneity in neuroblastomas. a , Long read-based ecDNA reconstructions derived from WGS data in bulk populations and read coverage over the ecDNA fragments across single nuclei in patient no. 2 ( n = 86 nuclei) as detected by long-read or short-read scEC&T-seq. Top to bottom, ecDNA amplicon reconstruction (the SVs on ecDNAs are colored; SV nos. 1–4), gene annotation, read density over the ecDNA region in bulk long-read Nanopore WGS data, read density over the ecDNA region in merged single nuclei and coverage over the ecDNA region in single nuclei (rows) as detected by long-read or short-read scEC&T-seq. The 6-kb deletion is highlighted in red. The single asterisk indicates the unmappable region of the reference genome (hg19). b , Heatmap of the total number of reads (log-scaled) in a 500-bp window over the identified 6-kb deletion on ecDNA across single nuclei in patient no. 2 ( n = 86 nuclei). c , Exemplary genome tracks of the three identified clone variants in patient no. 2 based on the absence or presence of the 6-kb deletion on the ecDNA element. The log-scaled total read density is shown in blue and the circle edge-supporting read density is shown in gray. d , Detection of SV nos. 1–4 supporting the multifragmented ecDNA element in eight exemplary single cells representing the three identified clone variant groups (≥1 read supporting the SV, gray; 0 reads supporting the SV, white). e , Schematic representation of ecDNA variants 1–3 detected in d . f , Schematic interpretation of the evolution of the ecDNA structure in patient no. 2 based on the identified ecDNA variants in the scEC&T-seq data. The position of the MYCN oncogene and its local enhancer elements (e1–e5), indicated by the single asterisks, in each ecDNA variant is shown. Full size image Enhancers are coamplified with oncogenes on ecDNA in single cells Regulatory elements are commonly amplified on ecDNA, have an essential role in the transcriptional regulation of oncogenes on ecDNA and are assumed to be under strong positive selection 28 , 29 . Indeed, at least one of the recently described MYCN -specific enhancer elements 28 , 29 was recurrently detected on ecDNAs harboring MYCN in over 82.7% of neuroblastoma single cells (Fig. 7f and Supplementary Fig. 18a ). Interestingly, the deletion detected in patient no. 2, that is, ecDNA variant 3, is predicted to result in the loss of one of two MYCN gene copies, including regulatory elements e2 and e3 present on ecDNA variant 2 (Fig. 7f ). This raises the possibility that the change in enhancer:oncogene stoichiometry (6:1 in variant 3 versus 8:2 in variant 2), that is, the presence of one instead of two oncogene copies on an ecDNA, may be beneficial for oncogene expression because it may allow a more efficient use of enhancers on the ecDNA. Such mechanisms may explain the observed predominance of ecDNA variant no. 3 in the tumor cell population. Recent reports suggest that ecDNAs not harboring oncogenes but containing enhancer elements exist and can enhance transcriptional output on linear chromosomes or on other ecDNAs in trans as part of ecDNA hubs 17 , 23 . To identify such ecDNA elements, we analyzed H3K4me1, H3K27ac, H3K27me3 ChIP–seq and ATAC–seq data from neuroblastoma cells and searched for ecDNAs including these regions but not harboring oncogenes. No ecDNA only harboring enhancer elements was recurrently identified in single neuroblastoma cells. All recurrently detected ecDNAs contained at least one oncogene. 
However, a large set of nonrecurrent small circular DNAs were identified that only contained genomic regions with regulatory elements (Supplementary Fig. 18b ). The lack of recurrence of these circular DNA elements, however, suggests that they are not maintained in these cancer cells or do not confer positive selective advantages. Thus, scEC&T-seq allows the detection of noncoding circular DNAs and enables future investigations of their role in transcriptional regulation in cancer. Discussion We have shown that by parallel sequencing of circular DNA and mRNA from single cancer cells, scEC&T-seq not only readily distinguishes the transcriptional consequences of ecDNA-driven intercellular oncogene copy number heterogeneity, but also has the potential to uncover principles of ecDNA structural evolution. We believe that the integrated analysis of a cell’s circular DNA content and transcriptome through scEC&T-seq will enable a more complete understanding of the extent, function, heterogeneity and evolution of circular DNAs in cancer and beyond. scEC&T-seq complements recently published methods for single-cell DNA and single-cell RNA sequencing (scRNA-seq) 23 , 27 , which cannot readily distinguish linear intra- from extrachromosomal circular amplicons. Even though scEC&T-seq is compatible with automation, the elaborate circular DNA enrichment procedures only allow low throughput, which drives costs per cell and currently represents a limitation of this method. However, compared to droplet-based microfluidic single-cell technologies, plate-based scEC&T-seq generates a uniform number of reads per cell and produces independent sequencing libraries available for selection and resequencing, which is advantageous when high sequencing coverage is needed. Indeed, we showed that scEC&T can be combined with different sequencing technologies. The level of detail provided by scEC&T-seq far exceeds that of high-throughput methods. Pairing our method with other single-cell technologies, for example, Strand-seq 44 , and processing approaches, for example, single-cell tri-channel processing 45 , may increase the spectrum of somatic variation detected by scEC&T-seq. Performing scEC&T-seq in single cancer cells allowed us to profile their circular DNA content independently of copy number and circular DNA size. Small circular DNAs were identified in live single cells, suggesting that apoptosis is not the only mechanism of their generation. Whereas oncogene-containing ecDNAs were clonally present in single cells, small circular DNAs were exclusive to single cells. This not only indicates that small circular DNAs probably do not confer a selective advantage to cancer cells, but also suggests the existence of yet unknown prerequisites for selection, propagation and maintenance of these circular DNAs. The robust demonstration of integrating circular DNA and mRNA sequencing in single cancer cells indicates that the same approach can be applied to a diverse range of biological systems to further explore the diversity and invariance of circular DNA in single cells. Thus, we anticipate that our method will be a resource for future research in many fields beyond cancer biology and suggest that it has the potential to address many currently unresolved biological questions regarding circular DNA. Methods scEC&T sequencing A detailed, step-by-step protocol of scEC&T-seq is available on the Nature Protocol Exchange 46 and is described below. The duration of the protocol is approximately 8 days per 96-well plate. 
Cell culture Human tumor cell lines were obtained from ATCC (CHP-212) or were provided by J. J. Molenaar (TR14; Princess Máxima Center for Pediatric Oncology). The identity of all cell lines was verified by short tandem repeat genotyping (Genetica DNA Laboratories and IDEXX BioResearch); absence of Mycoplasma spp. contamination was determined with a Lonza MycoAlert Detection System. Cell lines were cultured in Roswell Park Memorial Institute 1640 medium (Thermo Fisher Scientific) supplemented with 1% penicillin-streptomycin and 10% FCS. To assess the number of viable cells, cells were trypsinized (Gibco), resuspended in medium and sedimented at 500 g for 5 min. Cells were then resuspended in medium, mixed in a 1:1 ratio with 0.02% trypan blue (Thermo Fisher Scientific) and counted with a TC20 cell counter (Bio-Rad Laboratories). Preparation of metaphase spreads Cells were grown to 80% confluency in a 15-cm dish and metaphase-arrested by adding KaryoMAX Colcemid (10 µl ml −1 , Gibco) for 1–2 h. Cells were washed with PBS, trypsinized (Gibco) and centrifuged at 200 g for 10 min. We added 10 ml of 0.075 M KCl preheated to 37 °C, 1 ml at a time, vortexing at maximum speed in between. Afterwards, cells were incubated for 20 min at 37 °C. Then, 5 ml of ice-cold 3:1 MeOH:acetic acid (kept at −20 °C) were added, 1 ml at a time, followed by resuspension of the cells by flicking the tube. The sample was centrifuged at 200 g for 5 min. Addition of the fixative followed by centrifugation was repeated four times. Two drops of cells within 200 µl of MeOH:acetic acid were dropped onto prewarmed slides from a height of 15 cm. Slides were incubated overnight. FISH Slides were fixed in MeOH:acetic acid for 10 min at −20 °C followed by a wash of the slide in PBS for 5 min at room temperature. Slides were incubated in pepsin solution (0.001 N HCl) with the addition of 10 µl pepsin (1 g 50 ml −1 ) at 37 °C for 10 min. Slides were washed in 0.5× saline-sodium citrate (SSC) buffer for 5 min and dehydrated by washing in 70%, 90% and 100% cold ethanol (stored at −20 °C) for 3 min. Dried slides were stained with 10 µl Vysis LSI N-MYC SpectrumGreen/CEP 2 SpectrumOrange Probes (Abbott), ZytoLight SPEC CDK4/CEN 12 Dual Color Probe (ZytoVision) or ZytoLight SPEC MDM2/CEN 12 Dual Color Probe (ZytoVision), covered with a coverslip and sealed with rubber cement. Denaturing occurred in a ThermoBrite system (Abbott) for 5 min at 72 °C followed by 37 °C overnight incubation. The slides were washed for 5 min at room temperature in 2× SSC/0.1% IGEPAL, followed by 3 min at 60 °C in 0.4× SSC/0.3% IGEPAL (Sigma-Aldrich) and an additional wash in 2× SSC/0.1% IGEPAL for 3 min at room temperature. Dried slides were stained with 12 µl Hoechst 33342 (10 µM, Thermo Fisher Scientific) for 10 min and washed with PBS for 5 min. After drying, a coverslip was mounted on the slide and sealed with nail polish. Images were taken using a Leica SP5 Confocal microscope (Leica Microsystems). Interphase FISH CHP-212 and TR14 cells for the interphase FISH were grown in 8-chamber slides (Nunc Lab-Tek, Thermo Fisher Scientific) to 80% confluence. Wells were fixed in MeOH:acetic acid for 20 min at −20 °C followed by a PBS wash for 5 min at room temperature. The wells were removed and the slides were digested in pepsin solution (0.001 N HCl) with the addition of 10 µl pepsin (1 g 50 ml −1 ) at 37 °C for 10 min.
After a wash in 0.5× SSC for 5 min, slides were dehydrated by washing in 70%, 90% and 100% cold ethanol stored at −20 °C (3 min in each solution). Dried slides were stained with either 5 µl of Vysis LSI N-MYC SpectrumGreen/CEP 2 SpectrumOrange Probes, ZytoLight SPEC CDK4/CEN 12 Dual Color Probe or ZytoLight SPEC MDM2/CEN 12 Dual Color Probe, covered with a coverslip and sealed with rubber cement. Denaturing occurred in a ThermoBrite system for 5 min at 72 °C followed by 37 °C overnight. Slides were washed for 5 min at room temperature in 2× SSC/0.1% IGEPAL, followed by 3 min at 60 °C in 0.4× SSC/0.3% IGEPAL and a further 3 min in 2× SSC/0.1% IGEPAL at room temperature. Dried slides were stained with 12 µl Hoechst 33342 (10 µM) for 10 min and washed with PBS for 5 min. After drying, a coverslip was mounted on the slide and sealed with nail polish. Images were taken with a Leica SP5 Confocal microscope. For ecDNA copy number estimation, we counted foci using FIJI v.2.1.0 with the function find maxima. Nuclear boundaries were marked as regions of interest. The threshold for signal detection within the regions of interest was determined manually and used for all images analyzed within one group. Patient samples and clinical data access This study includes tumor and blood samples of patients diagnosed with neuroblastoma between 1991 and 2022. Patients were registered and treated according to the trial protocols of the German Society of Pediatric Oncology and Hematology (GPOH). This study was conducted in accordance with the World Medical Association Declaration of Helsinki (2013 version) and good clinical practice; informed consent was obtained from all patients or their guardians. The collection and use of patient specimens was approved by the institutional review boards of Charité-Universitätsmedizin Berlin and the Medical Faculty at the University of Cologne. Specimens and clinical data were archived and made available by Charité-Universitätsmedizin Berlin or the National Neuroblastoma Biobank and Neuroblastoma Trial Registry (University Children’s Hospital Cologne) of the GPOH. The MYCN copy number was determined using FISH. Tumor samples had at least 60% tumor cell content as evaluated by a pathologist. Isolation of nuclei Tissue samples were homogenized using a precooled glass dounce tissue homogenizer (catalog no. 357538, Wheaton) in 1 ml of ice-cold EZ PREP buffer (Sigma-Aldrich). Ten strokes with a loose pestle followed by five additional strokes with a tight pestle were used for tissue homogenization. To reduce the heat caused by friction, the douncer was always kept on ice during homogenization. The homogenate was filtered through a Falcon tube (Becton Dickinson) with a 35-µm cell strainer cap. The number of intact nuclei was estimated by staining and counting with 0.02% trypan blue (Thermo Fisher Scientific) mixed in a 1:1 ratio. Isolation of peripheral blood mononuclear cells from blood samples Peripheral blood mononuclear cells (PBMCs) were isolated using density gradient centrifugation with Ficoll-Paque PLUS (Cytiva). Whole-blood samples were resuspended 1:1 in calcium-free PBS and slowly added to 12 ml of Ficoll-Paque PLUS. The sample was centrifuged at 200 g for 30 min without braking. The upper layer of PBMCs was isolated and washed into 40 ml of PBS. PBMCs were collected by centrifugation at 500 g for 5 min and resuspended in 10% dimethylsulfoxide in FCS. The PBMC suspensions were stored at −80 °C until use.
FACS For single-cell sorting, 1–10 million neuroblastoma cells or PBMCs were stained with propidium iodide (PI) (Thermo Fisher Scientific) in 1× PBS; viable cells were selected based on forward and side scattering properties and PI staining. PBMC suspensions were additionally stained with a 1:400 dilution of anti-human CD3 (Ax700, BioLegend). Nuclei suspensions were stained with DAPI (final concentration 2 μM, Thermo Fisher Scientific). Viable cells, CD3 + PBMCs or DAPI + nuclei were sorted using a FACSAria Fusion Flow Cytometer (BD Biosciences) into 2.5 μl of RLT Plus buffer (QIAGEN) in low-binding 96-well plates (4titude) sealed with foil (4titude) and stored at −80 °C until processing. Genomic DNA and mRNA separation from single cells Physical separation of genomic DNA (gDNA) and mRNA was performed as described previously in the G&T-seq protocol by Macaulay et al. 27 . All samples were processed using a Biomek FXP Laboratory Automation Workstation (Beckman Coulter). Briefly, polyadenylated mRNA was captured using a modified Oligo dT primer (Supplementary Table 7 ) conjugated to streptavidin-coupled magnetic beads (Dynabeads MyOne Streptavidin C1, catalog no. 65001, Invitrogen). The conjugated beads (10 μl) were added directly to the cell lysate and incubated for 20 min at room temperature with mixing at 800 rpm (MixMate, Eppendorf). Using a magnet (Alpaqua), the captured mRNA was separated from the supernatant containing the gDNA. The supernatant containing gDNA was transferred to a new 96-well plate (4titude); the mRNA-captured beads were washed three times at room temperature in 200 μl of 50 mM Tris-HCl (pH 8.3), 75 mM KCl, 3 mM MgCl 2 , 10 mM dithiothreitol (DTT), 0.05% Tween 20 and 0.2× RNase inhibitor (SUPERase•In, Thermo Fisher Scientific). For each washing step, the beads were mixed for 5 min at 2,000 rpm in a MixMate (Eppendorf). The supernatant was collected after each wash and pooled with the original supernatant using the same tips to minimize DNA loss. Complementary DNA generation The mRNA captured on the beads was eluted into 10 μl of a reverse-transcription master mix including 10 U μl −1 SuperScript II Reverse Transcriptase (Thermo Fisher Scientific), 1 U μl −1 RNase inhibitor, 1× Superscript II First-Strand Buffer (Thermo Fisher Scientific), 2.5 mM DTT (Thermo Fisher Scientific), 1 M betaine (Sigma-Aldrich), 6 mM MgCl 2 (Thermo Fisher Scientific), 1 μM template-switching oligo (Supplementary Table 7 ), deoxynucleoside triphosphate mix (1 mM each of dATP, dCTP, dGTP and dTTP) (Thermo Fisher Scientific) and nuclease-free water (Thermo Fisher Scientific) up to the final volume (10 μl). Reverse transcription was performed on a thermocycler for 60 min at 42 °C followed by 10 cycles of 2 min at 50 °C and 2 min at 42 °C, ending with one 10-min incubation at 60 °C. Amplification of complementary DNA (cDNA) by PCR was performed immediately after reverse transcription by adding 12 μl of PCR master mix including 1× KAPA HiFi HotStart ReadyMix with 0.1 μM ISPCR primer (10 mM; Supplementary Table 7 ) directly to the 10 μl of the reverse transcription reaction mixture. The reaction was performed on a thermocycler as follows: 98 °C for 3 min; 18 cycles of 98 °C for 15 s, 67 °C for 20 s and 72 °C for 6 min; and a final extension at 72 °C for 5 min. The amplified cDNA was purified using a 1:0.9 volumetric ratio of Ampure Beads (Beckman Coulter) and eluted into 20 μl of elution buffer (Buffer EB, QIAGEN).
Circular DNA isolation, amplification and purification The isolated DNA was purified using a 1:0.8 volumetric ratio of Ampure Beads. The sample was incubated with the beads for 20 min at room temperature with mixing at 800 rpm (MixMate). Circular DNA isolation was performed as described previously in bulk populations 3 , 25 . Briefly, the DNA was eluted from the beads directly into an exonuclease digestion master mix (20 units of Plasmid-Safe ATP-dependent DNase (Epicentre), 1 mM ATP (Epicentre), 1× Plasmid-Safe Reaction Buffer (Epicentre)) in a 96-well plate. In a subset of samples, 1 μl of the endonuclease MssI/PmeI (20 U μl −1 , New England Biolabs) was added. The digestion of linear DNA was performed for 1 or 5 days at 37 °C; 10 U of Plasmid-Safe DNase and 4 μl of ATP (25 mM) were added again every 24 h to continue the enzymatic digestion. After 1 or 5 days of enzymatic digestion, the exonuclease was heat-inactivated by incubating at 70 °C for 30 min. The exonuclease-resistant DNA was purified and amplified using the REPLIg Single-Cell Kit (QIAGEN) according to the manufacturer’s instructions. For this purification step, 32 μl of polyethylene glycol buffer (18% (w/v) (Sigma-Aldrich), 2.5 M NaCl, 10 mM Tris-HCl, pH 8.0, 1 mM EDTA, 0.05% Tween 20) were added, mixed and incubated for 20 min at room temperature. After incubation, the beads were washed twice with 80% ethanol and the exonuclease-resistant DNA was eluted directly into the multiple displacement amplification reaction mixture of the REPLIg Single-Cell Kit (QIAGEN). Amplified circular DNA was purified using a 1:0.8 volumetric ratio of Ampure Beads and eluted in 100 μl of elution buffer (Buffer EB, QIAGEN). Library preparation and sequencing A total of 20 ng amplified cDNA or circular DNA was used for library preparation using the NEBNext Ultra II FS (New England Biolabs) according to the manufacturer’s protocol. Samples were barcoded using unique dual-index primer pairs (New England Biolabs) and libraries were pooled and sequenced on a HiSeq 4000 instrument (Illumina) or a NovaSeq 6000 instrument with 2× 150-bp paired-end reads for circular DNA libraries and 2× 75-bp paired-end reads for cDNA libraries. Genomic and transcriptomic read alignments Sequenced reads from the gDNA libraries were trimmed using TrimGalore (v.0.6.4) 47 and mapped to the human genome build 19 (GRCh37/hg19). Alignment was performed with the Burrows–Wheeler Aligner (BWA)-MEM (v.0.7.17) 48 . Following the recommendation of the Human Cell Atlas project 49 , the aligner described in ref. 50 (v.2.2.1) was used to align the RNA-seq data obtained from Smart-seq2 (ref. 26 ) against a transcriptome reference created from hg19 and the GENCODE annotation v.19 (ref. 51 ). Afterwards, genes and isoforms were quantified using rsem (v.1.3.1) 52 with a single-cell prior. Nanopore scCircle-seq Before Nanopore sequencing, the amplified circular DNA from single cells was subjected to T7 endonuclease digestion to reduce DNA branching. Then, 1.5 µg of amplified circular DNA were incubated at 37 °C for 30 min with 1.5 µl T7 endonuclease I (10 U µl −1 , New England Biolabs) in 3 µl of NEBuffer 2 and nuclease-free water up to a final volume of 30 µl. The endonuclease-digested DNA was purified using a 1:0.7 volumetric ratio of Ampure Beads and eluted in 25 µl of nuclease-free water. Libraries were prepared using the ONT Rapid Barcoding Kit (catalog no.
SQK-RBK004, Oxford Nanopore Technologies) according to the manufacturer’s instructions, and sequenced on an R9.4.1 MinION flowcell (FLO-MIN106, Oxford Nanopore Technologies). A maximum of four samples were multiplexed per run. Nanopore scCircle-seq data processing The scCircle-seq Nanopore data were base-called and demultiplexed using Guppy (v.5.0.14; running guppy_basecaller with the dna_r9.4.1_450bps_hac model and guppy_barcoder with FLO-MIN106 and default parameters). The obtained reads were quality-filtered using NanoFilt 53 (v.2.8.0) (-l 100 --headcrop 50 --tailcrop 50) and aligned using ngmlr 54 (v.0.2.7) against the GRCh37/hg19 reference genome. To call SVs, we applied Sniffles 54 (v.1.0.12) (--min_homo_af 0.7 --min_het_af 0.1 --min_length 50 --min_support 4); to obtain the binned coverage, we used deepTools 55 (v.3.5.1) bamCoverage. All these steps are available as a Snakemake pipeline. Circle-seq in bulk populations Circle-seq in bulk populations was performed as described previously 3 . A detailed step-by-step protocol can be found on the Nature Protocol Exchange server. ChIP–seq We generated H3K27me3 ChIP–seq data for CHP-212 according to a previously described protocol 28 . Briefly, 5–10 million CHP-212 cells were fixed in 10% FCS-PBS with 1% paraformaldehyde for 10 min at room temperature. Chromatin was prepared as described previously 28 and sheared to a fragment size of 200–500 bp. H3K27me3–DNA complexes were immunoprecipitated for 15 h at 4 °C with an anti-H3K27me3 polyclonal antibody (catalog no. 07-449, Sigma-Aldrich). In total, 10–15 μg of chromatin and 2.5 μg of antibody were used for immunoprecipitation. Libraries for sequencing were prepared using Illumina Nextera adapters according to the recommendations provided. Libraries were sequenced in 50-bp single-read mode on an Illumina HiSeq 4000 sequencer. FASTQ files were quality-controlled with FastQC (v.0.11.8) and adapters were trimmed using BBMap (v.38.58). Reads were aligned to hg19 using BWA-MEM 48 (v.0.7.15) with default parameters. Duplicate reads were removed using Picard (v.2.20.4). Chromatin marks enrichment analyses We obtained public CHP-212 copy number variation, ChIP–seq (H3K27ac, H3K4me1, CTCF) and ATAC–seq data 28 , 56 . For further analysis, we used the processed bigwig tracks, filtered to exclude ENCODE Data Analysis Center (DAC) blacklisted regions and normalized to read counts per million (CPM) in 10-bp bins, and the peak calls provided by Helmsauer et al. 28 . To assess the correlation of epigenetic marks with circle regions, we only considered circle regions that did not overlap with copy number variation in CHP-212 or ENCODE DAC blacklisted regions. For H3K27ac, H3K4me1 and H3K27me3 ChIP–seq and ATAC–seq data, we computed the mean CPM signal across all circle regions, weighted by the respective circle sizes. To test for statistical association, we created 1,000 datasets with randomized circle positions within a genome masked for copy number variation in CHP-212 and ENCODE DAC blacklisted regions using regioneR 57 (v.1.24.0). We derived an empirical P value from the distribution of mean CPM signal across the randomized circle regions. For CTCF ChIP–seq data, we calculated the percentage of circle edges overlapping with a CTCF peak and assessed statistical significance using the same randomization strategy as described above. Circle-seq analysis Extrachromosomal circular DNA analysis was performed as described previously 3 .
Reads were 3′-trimmed for both quality and adapter sequences, with reads removed if the length was less than 20 nucleotides. BWA-MEM (v.0.7.15) with default parameters was used to align the reads to the human reference assembly GRCh37/hg19; PCR and optical duplicates were removed with Picard (v.2.16.0). Putative circles were classified with a two-step procedure. First, all split reads and read pairs containing an outward-facing read orientation were placed in a new BAM file. Second, regions enriched for signal over background with a false discovery rate < 0.001 were detected in the ‘all reads’ BAM file using variable-width windows from Homer v.4.11 findPeaks; the edges of these enriched regions were intersected with the circle-supporting reads. The threshold for circle detection was then determined empirically based on a positive control set of circular DNAs from bulk sequencing data. Only enriched regions intersected by at least two circle-supporting reads were classified as circular regions. Quality control filtering of scCircle-seq data To evaluate adequate enrichment of circular DNA, we used coverage over mtDNA as the internal control. Cells with fewer than ten reads per base pair sequence-read depth over mtDNA or less than 85% of genomic bases captured in mtDNA were omitted from further analyses. Cutoff values were chosen based on maximal read depth values detected in endonuclease controls (with PmeI; Supplementary Fig. 1c ). For all downstream analyses, we only considered sequencing data from cells digested with exonuclease for 5 days. Because mtDNA is not present in nuclei, we filtered single-nucleus Circle-seq data based on RNA quality control only. Recurrence analysis from scCircle-seq data Read counts from putative circles were quantified using bedtools multicov from single-cell BAM files in 100-kb bins across all canonical chromosomes from genome assembly GRCh37/hg19. Counts were normalized to sequencing depth in each cell and each bin was marked positive if it contained circle read enrichment with P < 0.05 compared with the background read distribution. Bins were then classified into three groups based on genomic coordinates: (1) ecDNA if the region overlapped the amplicon assembled from the bulk sequencing data; (2) chrM; and (3) all other sites. Recurrence was then analyzed by plotting the fraction of cells containing a detected circle in each of the three categories. Phasing of SNPs in scCircle-seq data Reference phasing was used to assign each SNP to one of the two alleles based on bulk WGS data. Then, single cells were genotyped to compare whether the same allele was gained in all of them. For this analysis, we used the known SNPs identified by the 1000 Genomes Project 58 and extracted coverage and nucleotide counts for each annotated position. In regions with allelic imbalance, like the high copy number gains at ecDNA loci, the B-allele frequency of a heterozygous SNP is significantly different from 0.5. Hence, we could assign each SNP in these regions to either the gained or non-gained allele. We then also genotyped all single cells at each known SNP location and visualized the resulting B-allele frequency values while keeping the allele assignment from the bulk WGS data. Relative copy number estimation (log 2 coverage) The average coverage over all annotated genes was calculated and genes were split into amplicon and non-amplicon genes based on whether their genomic location overlapped with the identified ecDNA regions per cell.
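A minimal sketch of the normalization detailed in the next sentences (a background derived from non-amplicon genes with the top and bottom 5% of values removed, then a log2 ratio); the per-gene coverages and the amplicon gene set are hypothetical, and this is an illustration rather than the authors' code.

import math

def background_mean(coverages, trim=0.05):
    # Mean of non-amplicon gene coverages after removing the top and bottom
    # `trim` fraction of values, guarding against incompletely identified
    # ecDNA regions
    v = sorted(coverages)
    k = int(len(v) * trim)
    core = v[k:len(v) - k] if k else v
    return sum(core) / len(core)

gene_coverage = {"MYCN": 480.0, "GAPDH": 30.0, "TP53": 28.0, "ACTB": 33.0, "B2M": 25.0}
amplicon_genes = {"MYCN"}

bg = background_mean([c for g, c in gene_coverage.items() if g not in amplicon_genes])
log2_copy_number = {g: math.log2(gene_coverage[g] / bg) for g in amplicon_genes}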
The coverage of all amplicon genes was normalized by the background coverage, that is, the winsorized mean coverage of all non-amplicon genes. A winsorized mean was chosen to account for the fact that the identification of ecDNA regions might have been incomplete; thus, the top and bottom 5% of values were removed from the background coverage. The resulting values were log 2 -transformed and used as a proxy for ecDNA copy number. Identification of SVs in scCircle-seq data The SV calling for scCircle-seq was done using lumpy-sv 55 (v.0.2.14) and SvABA (v.1.1.0). To our knowledge, no dedicated SV caller for single-cell DNA data is available; however, because ecDNA is present at high copy number, bulk methods perform adequately. Identification of SVs in WGS bulk data and merged scCircle-seq data SAMtools 59 (v.1.11) was used to merge all alignment files of the same cell line into one pseudobulk alignment. To achieve a coverage closer to standard bulk sequencing, the resulting BAM file was subsequently downsampled to 10% of its original size using SAMtools. The identification of SVs in WGS and merged scCircle-seq data for the TR14 and CHP-212 cell lines was accomplished using lumpy-sv 60 (v.0.3.1) and SvABA 61 (v.1.1.0), both with standard parameters. The preprocessing of the BAM files, which included filtering of short (<20 bp) and low-quality (MAPQ < 5) reads, as well as supporting read counts and VAF calculations, was performed using SAMtools 59 (v.1.10). All the analysis steps were completed using the GRCh37/hg19 reference genome. The identification and counting of reads supporting the SV breakpoints were performed considering split and abnormally mapped reads and filtering out duplicated reads and secondary alignments. Identification of SNVs in bulk WGS data and merged scCircle-seq data To ensure compatibility with standard mitochondrial variation reporting 62 , each single-cell sequencing sample was realigned to GRCh37/hg19 with a substituted revised Cambridge Reference Sequence mitochondrial reference (GenBank no. NC_012920) using BWA-MEM 63 (v.0.7.17). Duplicate reads were removed using Picard (v.2.23.8). GATK4/Mutect2 64 (v.4.1.9.0) with default parameters was used to call variants in whole-genome bulk and merged scCircle-seq sequencing data (pseudobulk). Only variants on canonical chromosomes (including chrM) and passing GATK4/FilterMutectCalls were retained and subsequently filtered for the regions previously reconstructed for the respective cell lines (Fig. 3a ) using bcftools filter with the -r flag. Identification of SNVs in mtDNA For mitochondrial SNV identification in single cells, we applied a custom pipeline consisting of GATK4/Mutect2 (ref. 64 ) (v.4.1.9.0) in mitochondria mode and Mutserve 65 (v.2.0.0-rc12), a variant caller optimized to detect heteroplasmic sites in mitochondrial sequencing data, with default parameters. First, variants were called by both callers for each single cell separately. Variants were then filtered in a two-step process: (1) variants were only retained if they had been called in at least two samples by the same caller; and (2) remaining variants were only kept if they were called by both callers. Variants labeled ‘blacklist’ by Mutserve were removed. To infer the allele frequency for each variant in the final set, each single cell was then subjected to genotyping using alleleCount (v.4.0.2). Only reads uniquely mapping to the mitochondrial reference and with a mapping quality ≥ 30 were kept.
Microhomology detection Microhomology analysis was performed using NCBI BLAST with the following parameters: blastn -task megablast -word_size 4 -evalue 1 -outfmt ‘6 qseqid length evalue’ -subject_besthit -reward 1 -penalty -2. These parameters enforce a minimum microhomology length of 4 bp and use the standard reward and penalty values for nucleotide match and mismatch. In addition, we only considered significant results with an Expect value < 1. To evaluate the presence of microhomology around the circular DNA junctions, we generated files that include 100 bp around the start and end of each circle (50 bp inside the circular DNA and 50 bp of linear DNA). To perform this analysis, we filtered out all circles with a length < 100 bp. Then, we compared the sequences for each start and end pair (one circle junction), evaluating and retrieving microhomologous sequences around the circular junction. This analysis was repeated for each individual circle in the CHP-212 and TR14 cell lines. Quality control filtering and clustering of scRNA-seq data Cells and nuclei were loaded into Seurat 67 (v.4.10); features that were detected in at least three cells were included. Subsequently, cells with 5,000 or more features in cell lines, and 2,000 or more features in T cells and nuclei, were selected for further analysis. Cells or nuclei with high expression of mitochondrial genes (>15% in single cells and >2.5% in nuclei) were also excluded. Data were normalized with a scale factor of 10,000 and scaled using default ScaleData settings. To account for gene length and total read count in each cell, the Smart-seq2 data were normalized using transcripts per million; a pseudocount of one was then added and a natural-log transformation was applied. The first four principal components were significant; the first five principal components were therefore used for FindNeighbors and RunUMAP to capture as much variation as possible, as recommended by the Seurat authors. The resolution for FindClusters was set to 0.5. Cell cycle analyses in scRNA-seq data Cell cycle phase was assigned to single cells based on the expression of G2/M and S phase markers using the Seurat CellCycleScoring function. Single-cell differential expression analysis Very small circular DNAs were defined as circles shorter than 3 kb. To calculate the relative number of this subtype per cell, the number of <3 kb circular DNAs was divided by the total number of circles in the cell. The cells were ranked by this relative number and grouped by taking the top and bottom 40% of the ranked list, defined as ‘high’ and ‘low’, respectively. The logarithmic fold change of gene expression between the two groups was calculated using the FindMarkers function in the Seurat R package 67 (v.4.10) without a logarithmic fold change threshold and with a minimum detection rate per gene of 0.05. The R package clusterProfiler 68 (v.4.0.5) was used to perform unsupervised GSEA of gene ontology terms using gseGO, including gene sets with at least three and at most 800 genes.
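The high/low grouping step can be sketched as follows (a minimal illustration with invented circle lengths; in the paper the resulting labels feed Seurat's FindMarkers in R):

```python
import pandas as pd

def high_low_groups(circle_lengths_per_cell, cutoff_bp=3_000, frac=0.40):
    """Label the top/bottom 40% of cells, ranked by their fraction of very
    small (<3 kb) circles, as 'high'/'low', as described above."""
    share = {cell: sum(l < cutoff_bp for l in lengths) / len(lengths)
             for cell, lengths in circle_lengths_per_cell.items() if lengths}
    ranked = pd.Series(share).sort_values(ascending=False)
    n = int(len(ranked) * frac)
    labels = pd.Series("unassigned", index=ranked.index)
    labels.iloc[:n] = "high"   # largest fractions of very small circles
    labels.iloc[-n:] = "low"   # smallest fractions
    return labels

# Placeholder input: circle lengths (bp) detected per cell
cells = {"cell1": [500, 1_200, 80_000], "cell2": [45_000, 62_000],
         "cell3": [900, 2_500, 3_500], "cell4": [150_000], "cell5": [700]}
print(high_low_groups(cells))
```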
Correlation of scCircle-seq and scRNA-seq coverage Coverage of ecDNA amplicon regions in the scCircle-seq and scRNA-seq BAM files was calculated with bamCoverage 55 using CPM normalization. The correlation between Circle-seq and RNA-seq coverage was analyzed by fitting a linear model. Identification of fusion genes The single-cell, paired-end RNA-seq FASTQ files were merged (96 cells for TR14 and 192 cells for CHP-212). The merged data were aligned with STAR 69 (v.2.7.9a) to the reference decoy GRCh37/hs37d5, using the GENCODE 19 gene annotation and allowing for chimeric alignment (--chimOutType WithinBAM SoftClip). To call and visualize fusion genes, Arriba 70 (v.2.1.0) was applied with the custom parameters -F 150 -U 700. The final confident call set included only fusions with (1) total coverage across the breakpoint ≥ 50× and (2) ≥30% of the mapped reads being split or discordant reads. Only fusion genes in the proximity (±10 Mb) of the amplicon boundaries were considered for the downstream analysis. ecDNA amplicon reconstruction We used the amplicon reconstructions provided by Helmsauer et al. 28 for CHP-212 and Hung et al. 23 for TR14. Briefly, these reconstructions were obtained by organizing a filtered set of Illumina WGS (CHP-212) and Nanopore WGS (TR14) SV calls as genome graphs using gGnome 71 (v.0.1) (genomic intervals as nodes and reference or SV junctions as edges). Circular paths through these graphs were then identified that included the amplified oncogenes and could account for the major copy number steps observed in the respective cell line. For the two patients added to the study, patient no. 1 and patient no. 2, shallow whole-genome Nanopore data were generated as described by Helmsauer et al. 28 . Basecalling, read filtering (NanoFilt −l 300), mapping and SV calling were performed as described above (‘Nanopore scCircle-seq data processing’). For ecDNA reconstruction, a set of confident SV calls was compiled (variant AF > 0.2 and supporting reads ≥ 50×). As for CHP-212 and TR14, a genome graph was built using gGnome 71 (v.0.1) and manually curated. To check the correctness of the amplicon structures for the patient samples, in silico-simulated Nanopore reads were sampled from the reconstructed amplicons using an adapted version of PBSIM2 (ref. 72 ) and preprocessed in the same way as the original patient samples. Lastly, the SV profiles of the original samples and the in silico simulations were compared. All reconstructed amplicons were visualized using gTrack (v.0.1.0), including the GRCh37/hg19 reference genome and GENCODE 19 tracks. ecDNA co-occurrence analysis in TR14 single cells We used the circle classification algorithm described above to define circular DNA-enriched regions in single cells. For each single cell, we determined whether the circular DNA-enriched regions overlapped the ecDNA amplicons (MYCN, CDK4, MDM2) assembled from TR14 bulk sequencing data, using the function findOverlaps from the R package GenomicRanges 73 (v.1.44.0). The presence or absence of overlap was determined independently for each of the three MYCN, CDK4 and MDM2 ecDNAs, excluding the amplicon regions shared by the MYCN and CDK4 ecDNAs.
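The per-cell overlap logic is simple interval arithmetic; a minimal Python sketch of the idea (the genomic coordinates below are placeholders, not the reconstructed TR14 amplicons):

```python
def overlaps(region, amplicon):
    """True if two (chrom, start, end) intervals overlap (half-open coords)."""
    return (region[0] == amplicon[0]
            and region[1] < amplicon[2] and amplicon[1] < region[2])

def ecdna_presence(cell_regions, amplicons, shared_segments):
    """Per-cell presence/absence call for each ecDNA, ignoring circle
    regions that fall in segments shared by two amplicons."""
    informative = [r for r in cell_regions
                   if not any(overlaps(r, s) for s in shared_segments)]
    return {name: any(overlaps(r, a) for r in informative for a in segs)
            for name, segs in amplicons.items()}

# Placeholder coordinates only
amplicons = {"MYCN": [("chr2", 15_900_000, 16_100_000)],
             "CDK4": [("chr12", 58_100_000, 58_250_000)],
             "MDM2": [("chr12", 69_150_000, 69_300_000)]}
shared_segments = [("chr12", 58_140_000, 58_160_000)]  # hypothetical shared piece
cell_regions = [("chr2", 15_950_000, 16_050_000), ("chr12", 69_200_000, 69_210_000)]
print(ecdna_presence(cell_regions, amplicons, shared_segments))
# {'MYCN': True, 'CDK4': False, 'MDM2': True}
```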
Statistics and reproducibility No statistical method was used to predetermine sample size. No data were excluded from the analyses. Experiments were not randomized, and the investigators were not blinded to allocation during the experiments and outcome assessment. The FISH experiments were performed once per cell line and primary tumor. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability The sequencing data generated in this study are available at the European Genome-phenome Archive under accession no. EGAS00001007026 . The ChIP–seq narrowPeak and bigwig files were downloaded from . All other data are available from the corresponding author upon reasonable request. Source data are provided with this paper. Code availability The data analysis code associated with this publication can be found at .
Tumors sometimes seem to take on a life of their own, growing at an unusually fast rate or suddenly developing resistance to a cancer drug. This behavior is often explained by cancer genes separating from the cell's own chromosomes and "striking out on their own" in ring shapes. So far, little has been known about how exactly these DNA rings arise and how they continue to develop as the tumor grows. An international team of researchers led by Charité–Universitätsmedizin Berlin and the Max Delbrück Center has now harnessed a new method to trace this path in neuroblastoma, a type of cancer. The results have been published in the journal Nature Genetics. Considered one of the biggest challenges in cancer research, DNA rings are tiny loops of genetic material floating around the nucleus of the cell by the hundreds, detached from the chromosomes. They were first discovered in 1965 and still pose many questions for researchers. Where do all these rings come from? What is their function? How do they affect the cells and the organism as a whole? One thing is clear: Nearly one-third of all tumors in pediatric and adult patients have DNA rings inside their cells—and those tumors are almost always highly aggressive. Ring-shaped DNA, termed extrachromosomal DNA (ecDNA), is also often implicated when a tumor develops resistance to a previously effective medication. Researchers around the world hope to identify new approaches to treating cancer by studying this specific form of genetic information. However, ecDNA does not always have a detrimental effect on cancer growth. Some of the rings also seem to be harmless. "To tell the difference between dangerous and harmless DNA rings and be able to trace their evolution within the tumor, we have to look at the tissue one cell at a time," explains the head of the study, Prof. Dr. Anton Henssen. He works at the Department of Pediatric Oncology and Hematology at Charité and does research at the Experimental and Clinical Research Center (ECRC), a joint institution of Charité and the Max Delbrück Center. Together with his team, he has now developed a technology that can read the genetic code of the existing DNA rings for each individual cell. At the same time, it tells which genes are active on the rings. "This lets us simply count how many cells in the tumor are home to a specific ring," Henssen says. "If there aren't many, then that ring is not highly relevant to the growth of the cancer. But if there are a lot of them, it evidently gives a tumor cell a selective advantage." Which DNA rings spur tumor growth? The researchers initially used the new method to take a snapshot of all DNA rings in cultured neuroblastoma cells. Neuroblastoma is a form of highly malignant cancer that is especially prevalent in very young children. The research showed that no two cancer cells are alike—where one might have 100 DNA rings floating around, the next might have 2,000. The rings also vary greatly in size, with the smallest of them consisting of only 30 genetic components and the largest comprising more than a million. "The big DNA rings are loaded with cancer genes originating in the chromosomes of the cell," explains Rocío Chamorro González, the study's first author, who also does research at the Department of Pediatric Oncology and Hematology at Charité and the ECRC. "The ring shape lets them circumvent the classic laws of genetics, so they take on a kind of autonomy. These cancer genes have struck out on their own, if you will. 
We are only just starting to understand the ramifications. In our study, we found the large DNA rings in many neuroblastoma cells, so they are evidently spurring cell growth. The small rings were only found in isolation, so they are probably not very relevant to the cancer cells." Evolution of an independent cancer gene To understand how an ecDNA originates in the first place and then evolves within a tumor, the second step for the research group was to analyze a pediatric neuroblastoma—cell by cell. Their findings suggest that MYCN, a known cancer gene, first detached from its chromosome of origin and formed a ring shape at the start of the tumor's growth in this case. Then two of the rings merged to form a larger one, which went on to lose a shorter segment and then a longer one. "The last ring seems to have been the first to confer a growth advantage, because it is the only one that appears in many of the neuroblastoma cells," Henssen says. "This shows that the cancer gene not only became independent through these processes, but also continued to 'improve.'" This kind of insight into the evolution of DNA rings within a tumor would have been impossible if not for the newly developed method. The team of researchers now plans to use the same method to reconstruct the stages of development in further cases of cancer. The researchers hope this will allow them to distinguish better between dangerous and harmless DNA rings. "Our hope is that in the future, we will be able to see in an individual case whether or not that tumor is especially aggressive, just from looking at the DNA rings," Henssen says. "And then we could adjust the treatment. That's why testing the predictive power of specific DNA rings is the next target for our research."
10.1038/s41588-023-01386-y
Computer
Novel process extracts rare earth elements from waste
Yaguang Zhu et al, Supercritical carbon dioxide/nitrogen/air extraction with multistage stripping enables selective recovery of rare earth elements from coal fly ashes, RSC Sustainability (2023). DOI: 10.1039/D2SU00033D
https://dx.doi.org/10.1039/D2SU00033D
https://techxplore.com/news/2023-03-rare-earth-elements.html
Abstract Rare earth elements (REEs) are widely used in electronic devices and renewable energy technology, but their supply is geopolitically limited and they are extracted by environmentally unsustainable mining practices. Coal fly ash (CFA), which is mostly discarded as waste, has recently gained attention as a potential low-grade REE source, motivating the development of greener and highly specific processes for recovering and enriching REEs. Here we present a proof-of-concept for a novel REE extraction process in which supercritical fluid enhances the ability of tributyl phosphate (TBP) to selectively extract REEs directly from solid CFA matrices. For the first time, we show that supercritical nitrogen and supercritical air can work like supercritical carbon dioxide for selective extraction. Moreover, using a prototype multistage stripping process with an aqueous solution, we collected REEs with concentrations up to 21.4 mg L −1 from the extractant. Our final products contain up to 6.47% REEs, whereas the coal fly ash source initially contained only 0.0234% REEs. Using supercritical fluid, our novel process can recover valuable and critical resources from materials previously considered to be waste. Sustainability spotlight Large amounts of coal fly ash (CFA) deposited in landfills and wet impoundments are considered a threat to the local environment due to possible toxic element leaching. Recently, CFA has been found to be a potential source of rare earth elements (REEs), but current extraction technologies are challenged by low selectivity, organic waste production, and high energy consumption. Here, we report the use of supercritical fluids (carbon dioxide, nitrogen, and air) as greener solvents assisting a phosphate extractant in directly and selectively extracting REEs, without energy- and material-intensive leaching. Our work shows promise to recover valuable resources from waste materials. Therefore, our work can help to realize the “Responsible Consumption and Production” goal of the Sustainable Development Goals (SDGs). Introduction Rare earth elements (REEs) are a group of 17 chemical elements in the periodic table, specifically the 15 lanthanides plus scandium and yttrium. The wide application of REEs in computer memory, rechargeable batteries, cell phones, and fluorescent lighting reflects their indispensable role in daily life. 1 Moreover, they are also critical to a variety of high-tech applications, such as clean energy generation and catalysis, and their production is closely linked to the speed of technology development and implementation. 2–4 However, due to their geopolitically constrained supply, environmentally unsustainable mining practices, and rapidly growing demand, 3 both the United States (US) and the European Union have classified REEs as “critical materials”. 5,6 To address this limited supply, alternative domestic sources are highly desirable. 7–10 Recently, coal fly ash (CFA) has emerged as a promising REE resource. 11,12 The average total REE concentration in CFAs has been characterized as 200–1220 ppm, and the potential annual value of the REEs that can be extracted from CFAs in the US is estimated to be $4.3 billion. 12 According to the American Coal Ash Association's 2019 production and use survey, approximately 79 million metric tonnes (t) of CFAs are generated annually in the US, with only 52% beneficially used and the rest discarded. 13
The remaining CFAs, deposited in landfills or wet impoundments, are considered a threat to the local environment due to possible leaching of toxic elements. 14,15 Notably, obtaining REEs from CFAs is less environmentally destructive and capital-intensive than extraction from traditional mineral ores, because it does not generate large quantities of waste rock that is typically radioactive. 11,16,17 In this regard, recovering REEs from CFAs turns waste into valuable resources with impactful environmental and societal benefits. To successfully obtain high purities of individual REEs from mineral ores, current industrial REE extraction operations include many processes, such as alkaline roasting, acid leaching, fractional separation, ion exchange, and solvent extraction. 18–20 Initial attempts to recover REEs from CFAs adopted these same methods. Although previous studies have applied different methods to extract REEs from CFAs, 21–23 these processes still present many challenges. First, they all require a high-temperature alkaline roasting process (>400 °C), followed by an acid leaching process (using strong acid) to obtain REE-containing leachate. Their high energy and chemical demands have proven burdensome in the commercial extraction of REEs from mineral ores, and these burdens will be more severe for low-grade REE resources such as CFAs. 24 Notably, a strong acid is indispensable in all of these REE extraction processes. Second, an extractant that selectively complexes with REE 3+ is also necessary for the extraction. For example, in solvent extraction, di-2-ethylhexylphosphoric acid (DEHPA) was dispersed in kerosene, and together they selectively extract REEs from aqueous solutions. 21 In addition, DEHPA-dispersed mineral oil inside a membrane was used for selectively transferring REEs from a CFA leachate to a highly acidic solution. 22 However, these processes all use toxic organic solvents to disperse the extractant; environmentally friendly replacements for the organic solvent are therefore highly desirable. Third, and most importantly, CFAs have extremely low concentrations of REEs (<0.2%) and more than 90% major impurities (Ca, Fe, Al, Mg), so the REE purity in the final products is only 0.5–0.7%. 22 Overcoming these drawbacks requires a novel REE extraction process that is environmentally benign and highly selective for REEs over impurities. Supercritical fluid (SCF) extraction has emerged as a promising option because SCFs have little environmental impact, are non-flammable, and facilitate the mass transfer of extractants. 25 Applying SCF can reduce the usage of organic solvent, and we also expected that it could improve the selective recovery of REEs from CFAs. To selectively extract REEs from a solid matrix, studies have explored using extractants to complex with REE 3+ ions under supercritical carbon dioxide (scCO 2 ). 26,27 Tributyl phosphate–nitric acid (TBP–HNO 3 ) has shown selective extraction of REEs. This extractant was prepared by contacting pure TBP with concentrated HNO 3 . A current hypothesis for the extraction mechanism in a scCO 2 system is that TBP selectively chelates with the neutral salt formed by REE 3+ and NO 3 − . 27,28 Although scCO 2 with TBP has successfully and selectively extracted REEs from high-concentration REE resources (such as pure REE oxides), 26,29–31 REE-rich sources (e.g.
bastnaesite, monazite, NiMH batteries, and NdFeB magnets), 28,32,33 and phosphogypsum (REE concentration up to 0.6%), 34 its performance has not been studied with CFAs, which have extremely low REE concentrations (<0.2%). In addition, studies have used scCO 2 extraction with a flow-through setup to remove toxic heavy metals from CFAs, 35,36 but they did not demonstrate the capability to selectively separate REE 3+ from other ions to recover valuable resources, nor did they report how much of the impurities was co-extracted. Thus, separation of REEs from impurities during or after SCF extraction with TBP–HNO 3 needs more systematic investigation. Furthermore, previous studies notably tested only CO 2 as the supercritical fluid. In these studies, scCO 2 (critical temperature ( T c ) = 31 °C, critical pressure ( P c ) = 73.8 bar) offers several advantages, such as safety, abundance, and low cost. An outstanding question is whether the supercritical state of more accessible gases, such as nitrogen ( T c = −147 °C and P c = 34.0 bar) or air ( T c = −141 °C and P c = 37.9 bar), can also be used in the extraction and whether they can achieve an efficiency similar to that of scCO 2 . Herein, we present a novel extraction process that uses SCF to directly and selectively extract REEs from a solid CFA matrix. This proof-of-concept study aims to investigate the feasibility of selective extraction of REEs from CFAs using SCF with little interference from impurities. We achieved excellent extraction efficiencies, between 66 and 79%, for all REEs, and found that scCO 2 can decrease the concentrations of impurities in the final product, especially Ca, Mg, and Al. In previous studies, much emphasis was placed on the flow and heat properties of supercritical nitrogen and supercritical air, 37,38 but they had not been tested as green solvents. Moving beyond CO 2 , our work is the first report to demonstrate that more common and accessible SCF sources, such as nitrogen and air, can also assist TBP to extract REEs with high efficiency and separate impurities. Moreover, we applied a multistage stripping process to collect the REEs and further separate them from impurities, increasing the REE purity of our collected solutions. Our extraction process replaces the toxic organic solvent used in most existing techniques with SCF to make the process “greener”. In addition, by combining SCF extraction with multistage stripping, our process showed a higher selectivity for REEs over impurities than achieved by conventional organic solvent extraction methods. This study offers new promising SCF choices (nitrogen and air) and provides useful insights into selectivity in SCF extraction, enabling future greener processes in REE recovery from unconventional resources. Experimental Materials The coal fly ash in our study came from a power plant in Missouri, burning coal from the Powder River Basin (PRB). Deionized water (DI water, resistivity ≥ 18.2 MΩ cm) was obtained from a Barnstead Ultrapure Water System (D11931, Thermo Scientific). ACS grade tributyl phosphate (TBP) and nitric acid were purchased from VWR. Supercritical fluid extraction and multistage stripping process Equal volumes of TBP and 70% nitric acid were mixed and allowed to react and settle. The upper layer, the extractant TBP–HNO 3 used in this work, was pipetted off (see Fig. 1 ). To determine the molar ratio of TBP and HNO 3 in the extractant, acid–base titration was used.
The molar ratio was TBP : HNO 3 = 1 : 1.67, close to the molar ratios of the TBP–HNO 3 complex reported elsewhere. 28,29 We loaded 2 g of CFA, along with 20 mL TBP–HNO 3 , into a reactor (250 mL, Parr Instrument Co., IL). The CO 2 , N 2 , and air used in our study were purchased from Airgas USA, LLC, MO. The gases were pressurized to 150 bar by a syringe pump (Teledyne ISCO, Inc., Lincoln, NE) and then injected into the reactor, whose temperature was controlled at 50 °C. After 2 h of extraction, the reactor was cooled to room temperature and then depressurized. The reacted TBP–HNO 3 was obtained by filtering out the solid residues. These residues were then rinsed with ethanol and DI water to remove any remaining solution and prepared for further characterization. Triplicate experiments were conducted for each condition. Fig. 1 Overview of processes for supercritical fluid extraction of REEs from solid CFAs. (a) Prepare extractant TBP–HNO 3 . (b) Extract REEs from CFAs using TBP–HNO 3 under SCF conditions and obtain the REEs-containing reacted TBP–HNO 3 . (c) Collect REEs and separate them from major impurities through a multistage stripping process. A multistage stripping process using 1% nitric acid was applied to selectively collect the REEs and separate them from the impurities. Specifically, 1% nitric acid was added to the reacted TBP–HNO 3 in a 1 : 10 v/v ratio that had been experimentally determined to be the best for concentrating REEs. After 10 s of vigorous mixing, the REEs and impurities dissociated from the TBP and dissolved in the acid. After being collected by gravity separation, the diluted nitric acid containing REEs and impurities was called the stripped solution. The remaining reacted TBP–HNO 3 was mixed with fresh 1% nitric acid to conduct a new stripping stage. In total, a six-stage stripping process was conducted to recover essentially all the REEs from the reacted TBP–HNO 3 . Characterization of solid samples The sizes, morphologies, and elemental distributions of CFAs were characterized by SEM-EDX (ThermoFisher Quattro S Environmental Scanning Electron Microscope). We identified the mineral phases in CFA by high-resolution X-ray diffraction (HRXRD, Bruker D8 Advance X-ray diffractometer with Cu Kα radiation ( λ = 1.5418 Å)). CFA and solid residues were digested by two methods, one to obtain the total elemental compositions and the other to obtain the acid-extractable REE compositions. In addition, the solid residue from the extraction was digested to obtain the total elemental composition and calculate the leaching efficiency, using eqn (1): $$\text{Leaching efficiency} = \frac{\mathrm{wt\%}_{u}\, m_{u} - \mathrm{wt\%}_{r}\, m_{r}}{\mathrm{wt\%}_{u}\, m_{u}} \times 100\% \quad (1)$$ where wt% u is the mass percentage of a metal ion in the unreacted CFA, m u is the mass of the unreacted CFA, wt% r is the mass percentage of the metal ion in the reacted CFA, and m r is the mass of the reacted CFA. Unreacted and reacted CFA solids were sequentially digested by HF–HNO 3 and HNO 3 –H 2 O 2 . The mass percentages of metal ions were then obtained by measuring their concentrations in the digested solutions. To quantify the total elemental composition, 23 coal fly ash samples (34 ± 1 mg) were digested in a microwave digestor for 8 h at 90–100 °C in a 1 : 1 mixture of 2 mL concentrated HF and 2 mL concentrated HNO 3 . Then, after complete drying, the acid-digested samples were re-digested for 8 h at 90–100 °C in a mixture of 1 mL concentrated HNO 3 , 1 mL 30–32% H 2 O 2 , and 5 mL DI water. After re-digestion, the samples were diluted with 1% HNO 3 for further analysis.
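Eqn (1) is a straightforward mass balance; the following minimal sketch evaluates it with invented masses chosen only for illustration (they are not the study's measured values):

```python
def leaching_efficiency(wt_u, m_u, wt_r, m_r):
    """Eqn (1): fraction of a metal removed from the CFA, from the mass
    percentages (wt%) and masses (g) of unreacted (u) and reacted (r) CFA."""
    extracted = wt_u * m_u - wt_r * m_r
    return 100.0 * extracted / (wt_u * m_u)

# Hypothetical numbers: 2 g of CFA at 0.0234 wt% total REEs leaving 1.6 g of
# residue at 0.0100 wt% REEs -> ~65.8% leached, within the reported range
print(f"{leaching_efficiency(0.0234, 2.0, 0.0100, 1.6):.1f} %")
```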
To quantify the acid-extractable REE content, 23 CFA samples (0.1–0.5 g) were digested in 10 mL concentrated HNO 3 at 85–90 °C for 4 h. The digested samples were diluted with 1% HNO 3 for further analysis. Triplicate digestion experiments were conducted. The concentrations of the REEs and impurities in the digested solutions were analyzed by inductively coupled plasma optical emission spectroscopy (ICP-OES, PerkinElmer Optima 7300 DV). Characterization of liquid samples The concentration of HNO 3 in the TBP–HNO 3 complex was determined by acid–base titration with 0.1 M NaOH until the pH equaled 7. Then, to quantify the REE and impurity concentrations in each stripped solution collected from the six-stage stripping process under scCO 2 , scN 2 , scAir, and the heating-only condition, we diluted them with 1% nitric acid and measured them using ICP-OES. The REE purity was calculated using eqn (2): $$\text{REE purity} = \frac{c_{\mathrm{REEs}}}{c_{\mathrm{total\ elements}}} \times 100\% \quad (2)$$ where c total elements is the sum of the concentrations of all measured elements in the stripping solution. To study the mechanism by which SCF enhances selective extraction of REEs, the reacted TBP–HNO 3 samples obtained from the extraction were digested to quantify the amounts of REEs and impurities that had complexed with TBP. Triplicate experiments were conducted. The digestion of liquid TBP–HNO 3 samples was performed according to Anil et al. (2004). 39 TBP–HNO 3 solutions were mixed with 1 mL DI water, 2 mL concentrated HNO 3 , 0.4 mL 30–32% H 2 O 2 , and 0.4 mL concentrated HF. Then, an eight-step digestion was performed, lasting 1 h in total at 100 °C. After the digestion, the samples were diluted with 1% HNO 3 and prepared for ICP-OES analysis. Results and discussion Chemical nature of coal fly ash samples CFA samples were obtained from a power plant in Missouri that burned coal from the Powder River Basin (PRB). Fig. 2a shows the dark brownish particles of the CFA used in this study. The chemical compositions of the CFA samples were characterized by ICP-OES after HF–HNO 3 and HNO 3 –H 2 O 2 sequential digestions ( Fig. 1b ), and by X-ray fluorescence spectroscopy (Tables S1 and S2 in ESI † ). As shown in Fig. 2b , the total REE content of the samples was 234 ± 2 ppm, a value within the reported range of total REE contents in US-based coal fly ashes. 12 Cerium (Ce) was present at 60 ppm, the highest concentration among all REEs. In addition, the sample had high concentrations of Y and Nd, important elements projected to be in severely short supply by 2035. 3 However, the ICP-OES results after the digestion also showed that our CFAs had a variety of high-concentration impurities, including Ca (138 710 ppm), Fe (54 943 ppm), Al (66 149 ppm), and Mg (23 306 ppm). The relatively abundant alkaline oxides (27.5% CaO and 6.7% MgO, in weight percentages) indicate that the CFAs in this study are Class C CFAs, which have been previously reported to exhibit higher REE extractability. 40 The differences of 2–3 orders of magnitude between the concentrations of REEs and impurities clearly emphasize the outstanding challenge in selectively extracting REEs from the CFA samples. Fig. 2 Characterization of the CFAs used in this study. (a) Photograph of the CFAs. (b) Elemental characterization of the CFAs. Upper: the total concentrations of major impurities (Ca, Fe, Al, and Mg) and REEs. The salmon-colored number in the upper plot indicates the concentration of total REEs. Error bars represent the standard deviations from triplicate digestion experiments. Lower: the concentrations of representative REEs.
(c–e) SEM images of representative morphologies of the CFAs. Scale bars: (a) 1 cm; (c) 5 μm; (d) 20 μm; and (e) 10 μm. In general, during coal combustion, heating above 1400 °C and rapid cooling in the post-combustion stage produce a diverse size distribution and morphology of fly ash, 41,42 such as solid spheres, layered particles, and aggregated particles, as shown in Fig. 2c–e. Based on energy-dispersive X-ray (EDX) analyses, the predominant elements in the fly ash samples were silicon, calcium, aluminum, iron, and magnesium (Fig. S1 † ), consistent with the ICP-OES results. The REE concentrations were below the detection limit of EDX. The CFA in our study had a complex mineral composition, with quartz, anhydrite, gehlenite, tricalcium aluminate, lime, and periclase identified (Fig. S2 † ). A broad bump at around 20–30° 2 θ suggested the presence of amorphous aluminosilicate glass. The absence of REE mineral phases indicated that REEs may adsorb onto or incorporate into other minerals. Thus, the degree to which these minerals can be dissolved by TBP–HNO 3 under SCF affected our REE extraction process. Notably, quartz and amorphous aluminosilicate glass barely dissolve even under acidic conditions, 40 and thus they remain as solid residues after our extraction process. Selective extraction of REEs with scCO 2 , scN 2 , and scAir Although scCO 2 -enabled extraction has been implemented for highly pure REE oxides and post-consumer products with high concentrations of REEs, 28,32,33 little is known about whether this mechanism still works when the impurities' concentrations are overwhelmingly high compared to the REEs' concentration, as in the case of CFAs. In our experiment, CFA solid samples and prepared TBP–HNO 3 were loaded into a reactor, and then the SCF was injected at 50 °C and 150 bar. We found that CO 2 , N 2 , air, or their mixtures are all applicable, as long as the gas is in the supercritical phase. The critical temperatures and critical pressures for CO 2 , N 2 , and air are respectively 31 °C and 73.8 bar, −147 °C and 34.0 bar, and −141 °C and 37.9 bar. Because most other scCO 2 extraction studies used reaction times between 1.5 and 3 h, we chose 2 h as our reaction time for appropriate comparison of achieved efficiencies. After reacting the CFAs with TBP–HNO 3 under SCF conditions as Fig. 1b depicts, we calculated the concentration factor using eqn (3): $$\text{Concentration factor} = \frac{\left(c_{\mathrm{REEs}}/c_{\mathrm{total\ metals}}\right)_{\mathrm{reacted\ TBP{-}HNO_3}}}{\left(c_{\mathrm{REEs}}/c_{\mathrm{total\ metals}}\right)_{\mathrm{CFA}}} \quad (3)$$ The concentrations of metal ions (including REEs) in the reacted TBP–HNO 3 and CFA were obtained by digestions and ICP-OES measurements. The concentration factor reflects the selectivity for REEs over other impurities during the SCF extraction processes. To evaluate the effect of SCF on the extraction selectivity for REEs, we conducted a control experiment in which the CFAs were reacted with TBP–HNO 3 at 50 °C in the absence of SCF (the “without SCF” condition). As shown in Fig. 3a , the concentration factor is 1.49 ± 0.06 for the “without SCF” condition. By comparison, introducing supercritical N 2 or supercritical air into the extraction system increases the concentration factor to 2.04 ± 0.11 and 1.91 ± 0.13, respectively, and introducing supercritical CO 2 increases it further, to 3.23 ± 0.30. This result demonstrates that supercritical fluids can effectively extract REEs from complex CFA and can enhance the selectivity for REEs over impurities ( Fig. 3a ) compared with the “without SCF” condition.
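Under the ratio-of-ratios reading of eqn (3) given above (an inference from the text's description, not the authors' stated formula), the arithmetic can be sketched as follows, with illustrative numbers only:

```python
def concentration_factor(c_ree_ext, c_total_ext, c_ree_cfa, c_total_cfa):
    """Eqn (3) as reconstructed above: REE share of all measured metals in the
    reacted TBP-HNO3, divided by the REE share of all measured metals in CFA."""
    return (c_ree_ext / c_total_ext) / (c_ree_cfa / c_total_cfa)

# Illustrative inputs only: ~20 mg/L REEs out of 8,000 mg/L total metals in
# the extract; 234 ppm REEs out of ~283,000 ppm total measured metals in CFA
print(round(concentration_factor(20.0, 8_000.0, 234.0, 283_000.0), 2))  # ~3.0
```

A factor of about 3 would be of the same order as the scCO 2 value reported above.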
Fig. 3 SCFs enable selective extraction of REEs over impurities. (a) Concentration factors show that the presence of SCFs can enhance the selectivity for REEs. (b) Leaching efficiencies for extracting REEs from CFAs under scCO 2 conditions. (c) Leaching efficiencies for extracting Ca, Fe, Al, Mg, and REEs from CFAs under different conditions. (d) TBP-complexed impurities under the heating-only, scCO 2 , scN 2 , and scAir conditions. Error bars for (a) and (d) represent the standard deviations of digested TBP–HNO 3 results from triplicate extraction experiments. Error bars for (b) and (c) represent the standard deviations of digested solid residue results from triplicate extraction experiments. Statistical analyses between different conditions were calculated in (a), (c), and (d): *** indicates a p value < 0.001, ** indicates p < 0.01, and * indicates p < 0.05. The absence of an asterisk means there is no statistically significant difference between two conditions. The extraction can be considered a two-step reaction. In the first step, metal ions, including REEs and impurities, leach from the CFA and react with HNO 3 to form metal nitrates (metal nitrate formation in Fig. 4 ). 33 To calculate the leaching efficiencies for all REEs (scandium, yttrium, and the 15 lanthanides) with scCO 2 , scN 2 , and scAir, we used eqn (1); the results are shown in Fig. 3b and S3. † The leaching efficiencies from the CFA sample fall in the range of 65–78%. It is noteworthy that although our CFA samples contained only 0.0234% REEs, with high concentrations of coexisting impurities, the leaching efficiencies (∼70%) in this study were comparable to the leaching efficiencies (40–99%) for high-purity materials containing 7–100% REEs. 26,29–31 The result clearly shows that we achieved good leaching efficiency of REEs from CFAs with TBP–HNO 3 , even though large quantities of impurities remain. In addition to REEs, we also calculated the leaching efficiencies of the major impurities, including Ca, Fe, Al, and Mg. As shown in Fig. 3c , we did not observe significant changes in the leaching efficiencies of any metal ions among the different conditions. Based on this result, the higher selectivity for REEs under SCF conditions was not due to selective leaching. The enhanced selectivity under supercritical fluids must therefore arise from enhanced selective complexation between REE ions and the extractant. Fig. 4 Schematic illustration of SCF-enhanced selectivity for REEs over major impurities (Ca, Mg, Al). First, REEs and impurities leach from CFAs to form metal nitrates. Then, the presence of SCF affects the reactivity of TBP. In the heating-only condition (left path), all metal nitrates preferentially form complexes with TBP. But under SCF conditions (right path), only REEs and Fe still preferentially form complexes with TBP. The number of symbols for metal–TBP complexes shown in the figure represents the extents of preferential complex formation, not their quantities. Therefore, we investigated the second step of the extraction reaction: metal nitrates react with TBP (complex formation in Fig. 4 ). To quantify how many REE and impurity metal (Ca, Fe, Al, and Mg) nitrates had complexed with TBP under different conditions, we digested the reacted TBP–HNO 3 and then measured the digestion products by ICP-OES. The concentrations of complexed REEs were similar among the different conditions: 19.86 ± 0.45 mg L −1 (scCO 2 ), 19.69 ± 0.62 mg L −1 (scN 2 ), 19.24 ± 0.71 mg L −1 (scAir), and 20.23 ± 0.55 mg L −1 (without SCF).
In contrast, the presence of SCF significantly affects the complexation between the major impurities (Ca(NO 3 ) 2 , Al(NO 3 ) 3 , and Mg(NO 3 ) 2 ) and TBP. As Fig. 3d shows, Ca, Al, and Mg nitrates complex less favorably with TBP under SCF conditions than under the condition without SCF, whereas the complexation between Fe and TBP is less affected by SCF. Wendlandt and Bryant (1956) reported that the complexation capability of metal nitrates with TBP follows the series Fe > REEs ≫ Ca > Mg > Al. 43 This sequence suggests that REE nitrates and Fe nitrate easily complex with TBP, while calcium nitrate, magnesium nitrate, and aluminum nitrate are less reactive with TBP. Here, we observed an interesting change in TBP behavior in SCF. One possible explanation is that the SCF, as a solvent, dispersed the 20 mL of TBP throughout the entire 200 mL reactor volume, lowering the effective concentration of TBP. However, considering the high temperature and pressure of the supercritical-phase extraction, real-time in situ measurements of extractant interactions with SCF are highly challenging. Thus, while we cannot provide an exact mechanism supported by direct evidence, we provide experimental data regarding the impact of the effective TBP concentration on its selectivity. To provide additional data, we conducted extraction experiments using 6 g of CFA and 20 mL of TBP–HNO 3 in the absence of any SCF, that is, a three times lower effective TBP concentration than the original condition without SCF (2 g of CFA and 20 mL of TBP–HNO 3 ). In other words, by increasing the amount of CFA, we decreased the effective TBP concentration. After the extraction with a 3 times higher concentration of CFA, the concentrations of Fe and REEs were increased by more than 3 times over the original condition. In contrast, the concentrations of Ca, Mg, and Al, which are considered to complex weakly with TBP, increased by less than 2.5 times compared to the original experiment. These results suggest that lowering the effective TBP concentration could make TBP more selective for REEs and Fe over Ca, Mg, and Al. We note that, due to the high solid-to-liquid ratio, it is experimentally challenging to mix 6 g of CFA with 20 mL of TBP and recover the reacted TBP–HNO 3 by vacuum filtration; with SCF, however, a tenfold dilution can be achieved. Also, the concentration factor under the high solid-to-liquid ratio is lower than the concentration factors under SCF conditions. For all the SCFs tested in our experiment, we speculate that the reactivity of the TBP might be lowered by this dilution, so that TBP would complex with the highly reactive REE(NO 3 ) 3 and Fe(NO 3 ) 3 , while other, less reactive metal nitrates would not ( Fig. 4 , right path). We expect that future dedicated computational and spectroscopic studies can provide direct evidence of the impacts of SCF on extractants. Fig. 5 Concentrations of total REEs (top) and major impurities (Ca, Fe, Al, and Mg, bottom) in stripped solutions from different stripping stages. Error bars represent the standard deviations of stripping results from triplicate experiments. A 1% HNO 3 solution was used to strip the REEs and impurities from the TBP–HNO 3 . The major impurities' concentrations decrease significantly with the number of stripping stages. The top right y -axis is the recovery efficiency of REEs in each stripping stage, calculated by dividing the REEs collected in 1 mL of stripped solution by the total amount of REEs in 2 g of CFA.
In the top plot, the calculated recovery efficiency for each stripping stage is shown below the corresponding REE concentration symbol. Multistage stripping process collects REEs with high concentrations and purities To collect the REEs extracted into TBP–HNO 3 , we designed a multistage stripping process using 1% nitric acid, as depicted in Fig. 1c . In each stage, we added 1% nitric acid to the reacted TBP–HNO 3 in a 1 : 10 v/v (acid : TBP–HNO 3 ) ratio. This volume ratio was experimentally determined to be the best for concentrating REEs (detailed information is in ESI S1 † ). After vigorous mixing, the REEs and impurities dissociate from the TBP and dissolve into the diluted nitric acid. The 1% nitric acid, containing REEs and impurities, is then collected by gravity separation and is called the “stripped solution”. The remaining reacted TBP–HNO 3 is mixed with fresh 1% nitric acid for a new stripping stage. This process is repeated for five more stages, during which REEs and impurities continue to dissociate from the TBP and are collected in 1% nitric acid. In total, a six-stage stripping process is applied to recover essentially all the REEs from the reacted TBP–HNO 3 . As Fig. 5 shows, REEs gradually dissociate from the TBP and are collected by the 1% nitric acid ( i.e. , the stripping solution) in the first through sixth stages. The total REE concentrations in our first through sixth stages ranged from 11 to 35 mg L −1 under SCF conditions for the three gases ( Fig. 5 ), values much higher than the reported concentrations of REEs extracted from CFAs (0.3–5.5 mg L −1 ) in previous studies. 22,23 Interestingly, in addition to collecting REEs, we noticed that our multistage stripping process achieves a substantial partial separation between REEs and impurities. The majority of the Mg and Al impurities was separated from the REEs during the SCF extraction, and in the multistage stripping process we collected Mg and Al only in the first and second stripping stages. Moreover, because Ca and Fe have much higher water affinity than the REEs, 44 94.5% of the Ca and 96.7% of the Fe were preferentially removed from the reacted TBP–HNO 3 in the first and second stripping stages. Therefore, we collected much lower concentrations of impurities in the remaining stripping stages, achieving a higher REE purity percentage ( Table 1 ). As shown in Fig. 5 , the first and second stripping stages are sacrificial stages, in which we lost 13.7% and 13.1%, respectively, of the REEs from the CFA. We believe these two sacrificial stages are important for two reasons. First, they allowed us to significantly decrease the impurity concentrations, so that in the subsequent stages we obtained more than 30% of the REEs from the CFA at much higher purity. Second, a major challenge in extracting REEs from CFA is their extremely low purity, which limits the benefits of further processing. Considering that 79 million metric tonnes (t) of CFAs are generated annually in the US, considerable amounts of high-purity REEs are available for further separation, even though 26.8% of them are sacrificed. There is always a tradeoff between product purity and recovery efficiency. In general, by sacrificing some REEs in the first and second stripping stages, we subsequently collected 31.2% of the REEs from the CFA.
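The REE bookkeeping across the six stages can be checked with simple arithmetic; the percentages below are those quoted in the text, and the remainder is derived by difference:

```python
# REE bookkeeping across the six-stage stripping (percent of REEs in 2 g CFA)
sacrificed = 13.7 + 13.1   # stages 1-2, lost together with the impurities
collected = 31.2           # stages 3-6, the higher-purity product
remainder = 100.0 - sacrificed - collected
print(f"sacrificed: {sacrificed:.1f}%  collected: {collected:.1f}%  "
      f"unrecovered: {remainder:.1f}%")
# unrecovered ~42%: REEs that never leached from the CFA plus REEs still
# complexed with TBP after stage six, quantified in the following paragraph
```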
As summarized in Table 1, the REEs collected in these later stages had much higher concentrations and purities than the products of previous works using acid leaching, solvent extraction, and selective membrane processes to extract REEs from CFA. 22,23 During the SCF extraction, ∼32% of the REEs were non-acid-extractable and thus did not leach from the CFA. After the multistage stripping process, <10% of the REEs remained complexed with TBP. We expect future work to further optimize the multistage stripping process to maximize recovery efficiency at high REE purity. Table 1 Concentrations of major impurities and total REEs, and REE purity, in the final products of the liquid emulsion membrane (LEM) process, the supported liquid membrane (SLM) process, the conventional organic solvent extraction process, and our novel SCF extraction process. Triplicate experiments were conducted, and standard deviations from triplicates were within 10%. The results for all stripping stages are available in Tables S4–S7 in the ESI.

                   LEM final   SLM final   Conventional      This work, scCO 2  This work, scCO 2  This work, scCO 2
                   liquid(a)   liquid(a)   extraction        fourth stripping   fifth stripping    sixth stripping
                                           final liquid(a)   solution           solution           solution
Na (μg L −1 )      333 000     27 900      4220              0                  0                  0
Mg (μg L −1 )      8320        152         320               0                  0                  0
Al (μg L −1 )      149 000     1770        919 000           0                  0                  0
Fe (μg L −1 )      522         551         2100              313 802            200 196            132 632
Ca (μg L −1 )      107 000     968         42 700            246 556            138 580            74 175
Si (μg L −1 )      28 900      5340        3450              0                  0                  0
REEs (μg L −1 )    4635        303         5587              21 374             16 088             11 441
REEs purity (%)    0.73        0.79        0.57              3.43               6.47               6.26

(a) Results from a previous study by Smith et al. 22 REE purity values were calculated from eqn (2).
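As a check of eqn (2) against the tabulated values, the conventional-extraction column of Table 1 reproduces the stated purity:

```python
# Eqn (2) applied to the conventional-extraction column of Table 1
# (all concentrations in ug/L, as tabulated)
conventional = {"Na": 4_220, "Mg": 320, "Al": 919_000, "Fe": 2_100,
                "Ca": 42_700, "Si": 3_450, "REEs": 5_587}
purity = 100.0 * conventional["REEs"] / sum(conventional.values())
print(f"REE purity: {purity:.2f} %")   # ~0.57 %, matching the tabulated value
```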
Given the increasing demand for and importance of REEs, alternative sources to ore-extracted products are being sought, such as CFAs. The biggest challenge is that REE concentrations in CFAs are much lower than the impurity concentrations. To selectively extract REEs, separation technologies such as conventional organic solvent extraction and novel liquid membrane processes have been explored. 21–23 In the organic solvent extraction process, DEHPA was dissolved to a concentration of 10% (v/v) in kerosene, and it selectively extracted REEs from CFA leachate. 21 Then, a 5 M HNO 3 strippant was used to recover the REEs. To enhance the kinetics of the REE stripping process, Smith et al. (2019) synthesized a strippant-in-kerosene liquid emulsion membrane system by mixing DEHPA, Span 80, 5 M HNO 3 , and kerosene. 22 Then, to increase the selectivity for heavy REEs over light REEs, a supported liquid membrane was prepared by using vacuum filtration to impregnate a 47 mm, 0.22 μm PVDF membrane with a 10% (v/v) solution of DEHPA in mineral oil. 22 This membrane was used as a separator between the CFA leachate and 5 M HNO 3 to make REEs selectively transfer through the membrane. However, the final products of these separation technologies contained less than 6000 μg L −1 REEs, and the purity was less than 1%, as shown in Table 1. In contrast, without using an organic solvent, our novel SCF extraction process can directly obtain REEs from solid-phase CFAs, and it yields aqueous REE solutions with concentrations of up to 21 374 μg L −1 and purities up to 6.47% ( Table 1 ). Currently, extracting REEs from mineral ores requires preprocessing steps (alkaline roasting and acid leaching) to turn the REEs from a solid matrix into an aqueous solution. It also involves separation processes (fractional crystallization, fractional precipitation, ion exchange, and solvent extraction) to obtain high-purity individual REEs. 18–20 Our study shows that our SCF extraction process could be an alternative to the acid leaching process, turning the CFA matrix into a solution containing ∼3–6% REEs. The process serves a function similar to that of acid leaching but provides a higher purity of REEs. Notably, the REE purity in our study is even comparable to the purity of some commercially available REE ores. 45 We expand the sources of REEs from mineral ores to previously neglected CFA, which was regarded as a waste and an environmental threat. Moreover, the REE-containing solutions obtained from our study can undergo further separation processes to produce even higher-purity REEs. For example, a previous study designed a fractional precipitation method using oxalic acid to selectively precipitate REEs as REE oxalates, achieving REE purities exceeding 60%. 44 This method could also be applied to selectively precipitate the REEs in our stripping solution. Thus, our suggested process could be combined with fractional precipitation, ion exchange, and solvent extraction to produce higher-purity REEs. Beyond showing that SCF can enhance the selective extraction of REEs from CFAs, we believe that our novel process can perform well in extracting REEs from other low-grade REE sources, including nickel-metal hydride batteries, neodymium magnets, and acid mine drainage. 28,33,46 In addition, considering that TBP complexes strongly with actinides, especially uranium and thorium, 43 our extraction process can potentially be applied to recover actinides from nuclear products. In addition to REEs, our process may extract and recover heavy metals from CFA. As shown in Fig. S6A, † the extraction efficiencies for Cr, Cu, Mn, and Zn are 11.9%, 9.0%, 30.9%, and 62.0%, respectively. Further, owing to their relatively weak complexation with TBP, most heavy metals are collected in the first and second stripping stages. Moreover, as shown in Fig. S6B, † the remaining heavy metal concentrations in stripping stages 4–6 are low (0–0.22 mg L −1 ), much lower than the REE concentrations (11.4–21.4 mg L −1 ). Thus, the collected heavy metals had little impact on the purity of the REEs collected in stripping stages 4–6. Conclusions Herein we show that supercritical fluids, i.e. , scCO 2 , scN 2 , and scAir, can enhance the selective extraction of REEs directly from a solid coal fly ash matrix. Our exploratory study is the first to demonstrate the direct application of an SCF that both replaces harmful organic solvents and efficiently recovers valuable resources from CFA, previously considered a waste material or even an environmental threat. Although major impurities in CFAs, such as Ca, Fe, Al, and Mg, have concentrations several orders of magnitude higher than the REEs, SCF-enhanced extraction allows us to extract REEs with greatly decreased impurity amounts in the final products. Beyond scCO 2 , our work also shows that scN 2 and scAir can be applied in the REE extraction process. In addition, based on chemical analysis, we found that the presence of SCFs can decrease the complexation between impurities and TBP, enhancing the selectivity for REEs. After SCF extraction, we applied a multistage stripping process, which collects the REEs while further decreasing the impurity concentrations.
Ultimately, our novel process obtained final products containing up to 6.47% REEs from coal fly ashes, which are traditionally considered waste and initially contained only 0.0234% REEs. Conflicts of interest A patent application has been submitted for the process reported here. Acknowledgements The project was supported by Washington University's Consortium for Clean Coal Utilization. The Nano Research Facility and the Institute of Materials Science and Engineering at Washington University in St. Louis provided their facilities for the experiments. We thank Prof. Daniel Giammar for sharing the CFA samples. We thank Prof. James Ballard for carefully reviewing the manuscript.
Rare earth elements (REE), a group of 17 metallic elements, are in nearly every piece of technology, including cell phones, televisions, computers and almost every part of a vehicle. The demand for these elements increases annually; however, the supply is geopolitically limited, and the elements are mined with environmentally unsustainable practices. Young-Shin Jun, professor of energy, environmental & chemical engineering in the McKelvey School of Engineering at Washington University in St. Louis, and her team have created a proof-of-concept solution: extracting REEs from coal fly ash, a fine, powdery waste product from the combustion of coal. "We wanted to use a greener process to extract REEs than traditionally more harmful processes," Jun said. "Since the coal has already been used, this process is ultimately a pathway toward reduction and remediation of waste products." Jun and her former doctoral student, Yaguang Zhu, now a postdoctoral scholar at Princeton University, developed this novel extraction process using supercritical fluid, commonly used to decaffeinate coffee, to recover these critically needed REEs from material that would have otherwise been discarded in a landfill. Supercritical fluid is a substance at a temperature and pressure above its critical point with properties between a liquid and a gas. With more than 79 million metric tons of coal fly ash generated in the U.S. annually, Jun's team reported that the potential value of the REEs that could be extracted from coal fly ash in the U.S. is estimated at more than $4 billion per year. Their work, which appears in RSC Sustainability, is the first to show that common and accessible supercritical fluids, including carbon dioxide, nitrogen and air, were able to extract REEs and separate impurities very efficiently. In addition, through experiments using coal fly ash, they found that supercritical carbon dioxide decreased the concentrations of impurities in the final REE product. Ultimately, their final products contained up to 6.47% REEs, compared with 0.0234% in the initial coal fly ash source. "The uniqueness of our work is not only using the supercritical CO2, but also showing that supercritical air and nitrogen, with much lower temperature and pressure than those required for CO2, can extract REE effectively," said Jun, who leads the Environmental NanoChemistry Laboratory. "We can use lower temperatures and pressures with nitrogen or air to extract the rare earth elements from coal fly ash, which means lower energy cost. Of course, the supercritical CO2 works best, but supercritical air or nitrogen could do a much better job compared with traditional high temperature boiling with acids and organic solvents for REE extraction." Jun's team's extraction process involved two steps: First, metal ions, including REEs and impurities, leach from the coal fly ash and react with nitric acid to form metal nitrates; and second, the metal nitrates react with tributyl phosphate (TBP). They found that with supercritical carbon dioxide, nitrogen or air, the REEs formed complexes that could be extracted from the coal fly ash. After extraction, their multistage stripping process collected REEs and decreased the concentration of impurities. The nitric acid and TBP used in the process can be fully recycled multiple times without sacrificing efficiency, which minimizes their disposal concerns.
Jun's method also eliminates the need to roast raw materials at extremely high temperatures (greater than 500 °C) and the need to extract the REEs with strong acids and a large quantity of toxic organic solvents, which also become a waste product in traditional extraction processes. "Supercritical fluid is considered as a greener solvent, is less invasive to the environment and allows us to extract REE directly from solid waste without leaching and roasting raw materials, so less energy is required for our new process, which also produces less waste," Jun said. "We are seeking a more environmentally benign process for critical element recycling and recovery from materials previously considered to be waste."
10.1039/D2SU00033D
Physics
Physicists build fractal shape out of electrons
Design and characterization of electrons in a fractal geometry, Sander N. Kempkes, Marlou R. Slot, Saoirsé E. Freeney, Stephan J.M. Zevenhuizen, Daniël Vanmaekelbergh, Ingmar Swart, Cristiane Morais Smith, Nature Physics, 12 November 2018, DOI: 10.1038/s41567-018-0328-0 Journal information: Nature Physics
http://dx.doi.org/10.1038/s41567-018-0328-0
https://phys.org/news/2018-11-physicists-fractal-electrons.html
Abstract The dimensionality of an electronic quantum system is decisive for its properties. In one dimension, electrons form a Luttinger liquid, and in two dimensions, they exhibit the quantum Hall effect. However, very little is known about the behaviour of electrons in non-integer, or fractional dimensions 1 . Here, we show how arrays of artificial atoms can be defined by controlled positioning of CO molecules on a Cu (111) surface 2 , 3 , 4 , and how these sites couple to form electronic Sierpiński fractals. We characterize the electron wavefunctions at different energies with scanning tunnelling microscopy and spectroscopy, and show that they inherit the fractional dimension. Wavefunctions delocalized over the Sierpiński structure decompose into self-similar parts at higher energy, and this scale invariance can also be retrieved in reciprocal space. Our results show that electronic quantum fractals can be artificially created by atomic manipulation in a scanning tunnelling microscope. The same methodology will allow future studies to address fundamental questions about the effects of spin–orbit interactions and magnetic fields on electrons in non-integer dimensions. Moreover, the rational concept of artificial atoms can readily be transferred to planar semiconductor electronics, allowing for the exploration of electrons in a well-defined fractal geometry, including interactions and external fields. Main Fractals have been investigated in a wide variety of research areas, ranging from polymers 5 , porous systems 6 , electrical storage 7 and stretchable electronics 8 down to molecular 5 , 9 , 10 , 11 and plasmonic 12 fractals. On the quantum level, fractal properties emerge in the behaviour of electrons under perpendicular magnetic fields; for example, in the Hofstadter butterfly 13 and in quantum Hall resistivity 14 , 15 . In addition, a multi-fractal behaviour has been observed for the wavefunctions at the transition from a localized to delocalized regime in disordered electronic systems 16 , 17 , 18 . However, these systems do not allow one to study the influence of non-integer dimensions on the electronic properties. Geometric electronic fractals, in which electrons are confined to a self-similar fractal geometry with a dimension between one and two, have been studied only from a theoretical perspective. For these fractals, a recurrent pattern in the density of states as well as extended and localized electronic states were predicted 19 , 20 , 21 , 22 . Recently, simulations of quantum transport in fractals revealed that the conductance fluctuations are related to the fractal dimension 23 , and that the conductance in a Sierpiński fractal shows scale-invariant properties 24 , 25 , 26 . Here, we report how to construct and characterize, in a controlled fashion, a fractal lattice with electrons: the electrons that reside on a Cu(111) surface are confined to a self-similar Sierpiński geometry through atomic manipulation of CO molecules on the Cu(111) surface. The manipulation of surface-state electrons by adsorbates has been pioneered by Crommie et al. 27 and has been used to create electronic lattices ‘on demand’, such as a molecular graphene 2 , an electronic Lieb lattice 3 , 28 , a checkerboard and stripe-shaped lattice 29 , and a quasiperiodic Penrose tiling 4 . We characterized the first three generations of an electronic Sierpiński triangle by scanning tunnelling microscopy and spectroscopy, acquiring the spatially and energy-resolved electronic local density of states (LDOS). 
These results were corroborated by muffin-tin calculations as well as tight-binding simulations based on artificial atomic s -orbitals coupled in the Sierpiński geometry. The Sierpiński triangle with Hausdorff dimension log(3)/log(2) = 1.58 is presented in Fig. 1a . We define atomic sites at the corners and in the centre of the light blue triangles, as shown in Fig. 1b for the first generation G (1) 10 , 30 . G (1) has three inequivalent atomic sites, indicated in red, green and blue, which differ by their connectivity. A triangle of generation G ( N ) consists of three triangles G ( N −1), sharing the red corner sites. The surface-state electrons of Cu(111) are confined to the atomic sites by adsorbed CO molecules, acting as repulsive scatterers. Figure 1c shows the experimental realization of the first three generations of the Sierpiński triangle and Fig. 1d shows the relation with the artificial atomic sites. The distance between neighbouring sites is 1.1 nm, such that the electronic structure of the fractal will emerge in an experimentally suitable energy range 2 . Fig. 1: Geometry of the Sierpiński triangle fractal. a , Schematic of Sierpiński triangles of the first three generations G (1)– G (3). G (1) is an equilateral triangle subdivided into four identical triangles, from which the centre triangle is removed. Three G (1) ( G (2)) triangles are combined to form a G (2) ( G (3)) triangle. b , Geometry of a G (1) Sierpiński triangle with red, green and blue atomic sites. t and t ′ indicate nearest-neighbour and next-nearest-neighbour hopping between the sites in the tight-binding model. c , Constant-current STM images of the realized G (1)– G (3) Sierpiński triangles. The atomic sites of one G (1) building block are indicated as a guide to the eye. Imaging parameters: I = 1 nA, V = 1 V for G (1) and G (2) and 0.30 V for G (3). Scale bar, 2 nm. d , The configuration of CO molecules (black) on Cu(111) to confine the surface-state electrons to the atomic sites of the Sierpiński triangle. e , Normalized differential conductance spectra acquired above the positions of red, blue and green open circles in c (and equivalent positions). f , LDOS at the same positions, simulated using a tight-binding model with t = 0.12 eV, t ′ = 0.01 eV and an overlap s = 0.2. a.u., arbitrary units. Full size image Figure 1e presents the experimental LDOS at the red, blue and green atomic sites in the G (3) Sierpiński triangle (indicated by the open circles in Fig. 1c ). The differential conductance (d I /d V ) spectra were normalized by the average spectrum taken on the bare Cu(111) surface, similar to ref. 2 . The onset of the surface-state band is located at V = −0.45 V. We focus on the bias window between −0.4 V and 0.3 V. Around V = −0.3 V the LDOS on the red, green and blue sites is nearly equal, whereas slightly above V = −0.2 V, the red sites exhibit a distinct minimum, while the green and blue sites show a considerably higher LDOS. At V = −0.1 V, the blue sites show a minimum, whereas the red and green sites exhibit a pronounced maximum in the LDOS. At V = +0.1 V, the blue sites show a larger peak in the differential conductance, whereas the green and red sites exhibit a smaller peak. The experimental LDOS is in good agreement with both the tight-binding (see Fig. 1f ) and muffin-tin simulations (see Supplementary Information ). This finding corroborates that our design leads to the desired confinement of the two-dimensional electron gas to the atomic sites of the Sierpiński geometry. 
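To make the simulation pipeline concrete, the sketch below (Python with numpy/scipy) solves the generalized eigenvalue problem H|ψ⟩ = ES|ψ⟩ for a toy cluster of coupled s-orbitals and evaluates a Lorentzian-broadened LDOS, using the parameter values quoted for Fig. 1f. The six-site ring geometry and the distance cutoffs are stand-ins of our own choosing, not the authors' Sierpiński site set.

```python
import numpy as np
from scipy.linalg import eigh

# Parameters quoted for the tight-binding simulation (energies in eV)
E_S, T, TP, S_NN, GAMMA = -0.1, 0.12, 0.01, 0.2, 0.8
A = 1.1  # nm, nearest-neighbour site spacing

def ldos(sites, energies):
    """Lorentzian-broadened LDOS per site from H|psi> = E S|psi>."""
    d = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1)
    nn = (d > 1e-9) & (d < 1.2 * A)        # nearest-neighbour pairs
    nnn = (d >= 1.2 * A) & (d < 1.9 * A)   # next-nearest-neighbour pairs
    H = E_S * np.eye(len(sites)) + T * nn + TP * nnn
    S = np.eye(len(sites)) + S_NN * nn     # overlap-integral matrix
    E, psi = eigh(H, S)                    # generalized eigenproblem
    # rho_i(w) = sum_n |psi_in|^2 (G/2) / ((w - E_n)^2 + (G/2)^2) / pi
    lor = (GAMMA / 2) / ((energies[:, None] - E[None, :]) ** 2 + (GAMMA / 2) ** 2)
    return (lor @ (np.abs(psi) ** 2).T) / np.pi   # shape (n_energies, n_sites)

# Toy geometry: a hexagonal ring of six sites, standing in for the fractal
theta = np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)
ring = A * np.column_stack([np.cos(theta), np.sin(theta)])
rho = ldos(ring, np.linspace(-0.4, 0.3, 200))  # energies spanning the bias window
print(rho.shape)  # (200, 6): one LDOS curve per site
```

Swapping `ring` for the coordinates of a Sierpiński generation, with suitably tuned cutoffs, would yield the kind of site-resolved LDOS curves shown in Fig. 1f.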
In addition, it allows us to characterize the wavefunctions of the chosen Sierpiński geometry in detail. Figure 2 shows experimental wavefunction maps obtained at different bias voltages and a comparison with simulations using a tight-binding and muffin-tin model. In a thought experiment, we will discuss how electrons can be transported across the set-up between a source and a drain at arbitrary positions. At a bias voltage of −0.325 V, the red (R), green (G) and blue (B) sites all have a high LDOS, and this also holds between the sites. Hence, from a chemical perspective, this wavefunction has strong bonding character, yielding an excellent conductivity from source to drain along (R–B–G–B–R)-pathways. At V = −0.2 V, the red sites that connect the G (1) triangles have a low amplitude: the wavefunction of the G (3) triangle partitions into nine parts, each corresponding to a G (1) triangle. The self-similar Sierpiński geometry thus leads to a subdivision of a fully bonding wavefunction delocalized over the G (3) Sierpiński triangle at −0.325 V in self-similar G (1) parts at −0.2 V, demonstrating self-similar properties of the LDOS itself. At the latter bias voltage, the conductivity along (R–B–G–B–R)-pathways suffers from the lower amplitude on the red sites (except the red corner sites). At V = −0.1 V, the LDOS shows a marked minimum on the blue sites and a peak at the green and red sites. From the tight-binding calculation, we find that the wavefunction has nodes on the blue sites, corresponding to a non-bonding molecular orbital from a chemical perspective. It is clear that the conductivity along the (R–B–G–B–R)-pathway mediated by nearest-neighbour hopping has vanished, and that electrons have to perform next-nearest-neighbour hopping between the red and green sites to propagate. These results connect with the theoretically calculated transmission of a Sierpiński carpet on a hexagonal lattice, which exhibits a gap in the conductivity although there is a high DOS in the system 23 . Finally, at V = +0.1 V, all blue sites in the G (3) Sierpiński structure have a high amplitude, whereas the red and green sites exhibit a low amplitude. Again, the conductivity between source and drain is suppressed. We note that the LDOS maps of the three generations G (1)− G (3) show the same features (see Supplementary Information ), which is a consequence of the self-similarity of the geometry. We study this scale-invariance of the wavefunction in more detail with the box-counting method. Fig. 2: Wavefunction mapping. a – d , Differential conductance maps acquired above a G (3) Sierpiński triangle at bias voltages −0.325 V, −0.200 V, −0.100 V and +0.100 V. Scale bar: 5 nm. e – h , LDOS maps at these energies calculated using the tight-binding model. i – l , LDOS maps simulated using the muffin-tin approximation. As a guide to the eye, a G (1) building block is indicated, in which a larger radius of the circles corresponds to a larger LDOS at an atomic site, whereas no circle indicates a node in the LDOS. Full size image To determine whether the electronic wavefunctions inside the Sierpiński structure inherit the scaling properties of the Sierpiński geometry, we determine the fractal dimension of the wavefunction maps at different energies following the procedure in ref. 31 . 
We calculate the box-counting dimension (also called the Minkowski–Bouligand dimension) for both the experimental and simulated muffin-tin LDOS maps using D = lim_{r→0} log N(r)/log(1/r), with N the number of squares on a square grid required to cover the contributing LDOS and r the side length of these squares. In this procedure the wavefunction maps are positioned on a 360 × 360 pixel square grid. The parts of the wavefunction maps that lie in regions containing an agglomeration of CO molecules are excluded by applying a mask. Further, we choose a threshold (55%, 65% and 75% of the maximum LDOS) above which the LDOS contributes, to define a binary image. Finally, the box-counting dimension of this image is calculated. The number of boxes N is counted for 21 box sizes ranging from 1 to 90 pixels. Subsequently, the fractal dimension is given by the slope of the log–log plot for N(r). Details can be found in the Supplementary Information. A typical log–log plot is presented in Fig. 3a, where the inset shows the binarized experimental LDOS map at V = −0.325 V. The red slope is calculated for the largest 11 boxes and the blue slope for the smallest 10 boxes. Figure 3b shows the box-counting dimension obtained from these red slopes for the experimental (dark red) and theoretical (light red) wavefunction maps acquired at various energies. For comparison, we also show the dimension obtained for the wavefunction maps of a square lattice (dark and light green, for the experiment and theory, respectively), realized in the same way and measured in the same energy window 3 . The difference between the experimental and simulated maps is ascribed to a more gradual contrast in the simulation, in which contributions from the tip density of states play no role. It can be clearly seen that the box-counting dimension of the Sierpiński triangle is close to the theoretical Hausdorff dimension 1.58 (red solid line), whereas the square lattice has a dimension close to 2 (green solid line). When calculating the box-counting dimension for the blue slopes for the Sierpiński triangle, we find the results shown in Fig. 3c. When using the 10 smallest boxes, the analysis yields higher values of the dimension for the fractal, but they are still well below 2 and well below the values obtained for the square lattice. This behaviour is well understood (see Supplementary Information ). From these results and an additional scaling analysis shown in the Supplementary Information , we conclude that the wavefunctions inherit the fractal dimension, and therefore the scaling properties, of the geometry to which they are confined, and that this dimension can be non-integer. Fig. 3: Fractal dimension of the Sierpiński wavefunction maps. a , The box-counting dimension of the experimental wavefunction map acquired at V = −0.325 V is obtained from the slope of log(N) versus log(1/r), where either the first 11 datapoints starting from the left (largest boxes, red slope) or the last 10 datapoints (smallest boxes, blue slope) are taken into account. Inset: binarized and masked map of the experimental LDOS obtained with a binarization threshold of 65%. b , Determination of the fractal dimensions of the LDOS of the G (3) Sierpiński triangle (red) using the 11 largest boxes (red slopes) and comparison with the 2D square lattice from ref. 3 (green) for the experimental (dark) and muffin-tin (light) wavefunction maps with a threshold of 65%. 
The error bars display the error due to the choice of the binarization threshold, indicating the fractal dimension at LDOS thresholds of 55% (top) and 75% (bottom). The solid lines show the geometric Sierpiński Hausdorff dimension ( D = 1.58) and that of the square lattice ( D = 2). c , The box-counting dimension of the wavefunction maps when the last 10 datapoints are taken into account for the Sierpiński triangle (blue). Full size image Finally, we show how the self-similarity of the wavefunction maps is reflected in momentum space. The Fourier-transformed wavefunction map at V = −325 mV (Fig. 4a ) exhibits distinct maxima at k = 1.9 nm −1 (turquoise), k = 1.0 nm −1 (red) and k = 0.5 nm −1 (yellow). These maxima correspond to the next-nearest-neighbour distances between the artificial atomic sites (see Fig. 1 ), the side of a G (1) triangle, and the side of a G (2) triangle in real space, respectively. We then transform parts of the Fourier map back into real space (Fig. 4b–d ). The data inside the turquoise circle recover the full G (3) Sierpiński triangle, as shown in Fig. 4b . Transforming the values inside the red circle, however, results in a Sierpiński triangle of generation 2, while the size is retained (see Fig. 4c ). Analogously, transforming the data inside the yellow circle yield a first-generation Sierpiński triangle (Fig. 4d ). This shows that the G (3) wavefunction contains Fourier terms of the previous generations. The self-similar features of the Sierpiński triangle are thus inherently encoded in momentum space. Fig. 4: Fourier analysis of wavefunction maps. a , Fourier transform of the experimental differential conductance map at −0.325 V. The k -values outside the circles are excluded from the Fourier-filtered images in b – d . Scale bar: k = 3 nm −1 . b – d , Wavefunction map at −0.325 V after Fourier filtering, including merely the k -values within the turquoise ( b ), red ( c ) and yellow ( d ) circles indicated in a . Scale bar: 5 nm. Full size image We have demonstrated a rational concept of building electronic wavefunctions with a fractional dimension from artificial atomic sites that couple in a controlled way. We discussed the wavefunctions that form by coupling the s -orbitals of artificial atoms in the single-electron regime. Although this study represents the simplest case, it already exhibits several aspects of fractal confinement. The emergent fractionalization of the wavefunction at the single-particle level has profound implications and opens a series of interesting questions for future investigation: Do electrons in D = 1.58 behave like Luttinger liquids? Do they exhibit the fractional quantum Hall effect in the presence of a strong perpendicular magnetic field, or is the behaviour hybrid between 1D and 2D? How does charge fractionalization manifest when the wavefunction is itself already fractional? Recent theoretical work already addresses parts of these questions and corroborates the potential of electrons in fractal lattices, showing that the Sierpiński carpet and gasket host topologically protected states in the presence of a perpendicular magnetic field 32 . Furthermore, the design of artificial-atom quantum dots coupled in a fractal geometry can also be implemented in semiconductor technology, thus making it possible to perform spectroscopy and transport experiments under controlled electron density. 
This would form a versatile platform to explore fractal electronics with several internal degrees of freedom, such as orbital type, Coulomb and spin–orbit interactions, as well as external electric and magnetic fields. Methods Scanning tunnelling microscope experiments The scanning tunnelling microscopy and spectroscopy experiments were performed in a Scienta Omicron LT-STM system at a temperature of 4.5 K and a base pressure around 10⁻¹⁰–10⁻⁹ mbar. A clean Cu(111) crystal, prepared by multiple cycles of Ar⁺ sputtering and annealing, was cooled down in the scanning tunnelling microscope head. Carbon monoxide was leaked into the chamber at p ≈ 3 × 10⁻⁸ mbar for 3 min and adsorbed at the cold Cu(111) surface. A Cu-coated tungsten tip was used for both the assembly and the characterization of the fractal. The CO manipulation was performed in feedback at I = 60 nA and V = 50 mV, comparable to previously reported values 33 , 34 , and was partly automated using an in-house-developed program. Scanning tunnelling microscopy was performed in constant-current mode. A standard lock-in amplifier was used to acquire differential conductance spectra ( f = 973 Hz, modulation amplitude 5 mV r.m.s.) and maps ( f = 273 Hz, modulation amplitude 10 mV r.m.s.) in constant-height mode. The contrast of the experimental wavefunction maps as displayed in Fig. 2 was adjusted using the software Gwyddion. For the box-counting analysis of the experimental wavefunction maps (Fig. 3 ), Fourier-smoothed images were used with no further adjustments of the contrast. The Fourier analysis (Fig. 4 ) was performed using the same software. Tight-binding calculations The atomic sites in the first three generations of the Sierpiński triangle 35 are modelled as s-orbitals, for which electron hopping between nearest-neighbour and next-nearest-neighbour sites is defined. The parameters used are e_s = −0.1 eV for the on-site energy, t = 0.12 eV for the nearest-neighbour hopping and t′/t = 0.08 for the next-nearest-neighbour hopping, similar to previously reported values 2 . Furthermore, we included an overlap integral s = 0.2 between nearest neighbours and solved the generalized eigenvalue equation H|ψ⟩ = ES|ψ⟩, where S is the overlap-integral matrix. The LDOS is calculated at each specific atomic site and a Lorentzian energy-level broadening of Γ = 0.8 eV is included to account for bulk scattering. For the simulation of the LDOS maps, the same energy-level broadening was used and the LDOS at each site was multiplied with a Gaussian wavefunction of width σ = 0.65a, where a = 1.1 nm is the distance between two neighbouring sites. Muffin-tin calculations The surface-state electrons of Cu(111) are considered to form a 2D electron gas confined between the CO molecules, which are modelled as filled circles with a repulsive potential of 0.9 eV and radius R = 0.55a/2. The Schrödinger equation is solved for this particular potential landscape, and a Lorentzian broadening of Γ = 0.8 eV is used to account for the bulk scattering. Box-counting method The Minkowski–Bouligand 36 or box-counting method is a useful tool to determine the fractal dimension of a certain image, but has to be handled and interpreted with care. In particular, as has been shown previously 31 , the size of the boxes needs to be chosen within certain length scales. 
More specifically, the largest box should not be more than 25% of the entire image side, and the smallest box is chosen to be the point at which the slope starts to deviate from the linear regime in the log(N) versus log(1/r) plot. Due to experimental limitations, it is not always possible to fully ‘block’ certain areas using the CO/Cu(111) platform. Redundant features that are not part of the fractal set, such as the Friedel oscillations surrounding the Sierpiński triangle and the LDOS between the closely packed CO molecules in the centres of the G (2) and G (3) Sierpiński triangles, were removed by applying a mask (see Supplementary Information ). The masks serve as a proxy for the areas that should be excluded from future experiments using other platforms (for example, by etching or gating those areas). Furthermore, the wavefunction maps are not binary, and therefore it is necessary to specify the LDOS threshold value above which the pixels are part of the fractal set. This binarization threshold is a certain percentage of the maximum LDOS of the masked wavefunction map at a specific energy. The error introduced by the choice of the threshold is accounted for by performing the calculation procedure for threshold percentages of 55%, 65% and 75%, indicated by the top, centre and bottom of the error bar, respectively (see Supplementary Information ). In addition to the box sizes chosen in the main text, the results for other box sizes are presented in the Supplementary Information . Data availability All data are available from the corresponding authors upon reasonable request. The experimental data can be accessed using open-source tools. Change history: 9 July 2021: a Correction to this paper has been published.
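The box-counting estimate described in the Methods is straightforward to reproduce. The sketch below (Python/numpy) binarizes an LDOS map at a fraction of its maximum, counts occupied boxes over a range of box sizes, and fits the slope of log N against log(1/r); the masking of CO-covered regions that the authors apply first is omitted here for brevity.

```python
import numpy as np

def box_count(mask, r):
    """Number of r x r boxes containing at least one True pixel."""
    h, w = mask.shape
    return sum(mask[i:i + r, j:j + r].any()
               for i in range(0, h, r) for j in range(0, w, r))

def box_dimension(ldos, threshold=0.65, box_sizes=(2, 3, 5, 9, 15, 30, 45, 90)):
    """Estimate D = lim_{r->0} log N(r) / log(1/r) from a log-log fit."""
    mask = ldos >= threshold * ldos.max()   # binarize at a fraction of max LDOS
    sizes = np.asarray(box_sizes, dtype=float)
    counts = np.array([box_count(mask, r) for r in box_sizes], dtype=float)
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

# Self-test on an ideal Sierpinski-like pattern ((i & j) == 0 on a 2^n grid):
i, j = np.indices((256, 256))
ideal = ((i & j) == 0).astype(float)
print(box_dimension(ideal, threshold=0.5, box_sizes=(2, 4, 8, 16, 32, 64)))
# ~1.585 = log(3)/log(2)
```

On real maps the estimate depends on the binarization threshold and on which box sizes enter the fit, which is exactly why the authors report error bars spanning the 55–75% thresholds and treat the largest and smallest boxes separately.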
In physics, it is well-known that electrons behave very differently in three dimensions, two dimensions or one dimension. These behaviours give rise to different possibilities for technological applications and electronic systems. But what happens if electrons live in 1.58 dimensions – and what does it actually mean? Theoretical and experimental physicists at Utrecht University investigated these questions in a new study that will be published in Nature Physics on 12 November. It may be difficult to imagine 1.58 dimensions, but the idea is more familiar than you might think at first glance. Non-integer dimensions such as 1.58 can be found in fractal structures, for example the lungs. A fractal is a self-similar structure that scales differently from normal objects: if you zoom in, you will see the same structure again. For example, a small piece of Romanesco broccoli typically looks similar to the whole head of broccoli. In electronics, fractals are used in antennas for their ability to receive and transmit signals over a large frequency range. A relatively new topic in fractals is the quantum behaviour that emerges when you zoom in all the way to the scale of electrons. Using a quantum simulator, Utrecht physicists Sander Kempkes and Marlou Slot were able to build such a fractal out of electrons. The researchers made a 'muffin tin' that confines the electrons to a fractal shape by placing carbon monoxide molecules in just the right pattern on a copper surface with a scanning tunneling microscope. The resulting triangular fractal shape in which the electrons were confined is called a Sierpiński triangle, which has a fractal dimension of 1.58. The researchers observed that the electrons in the triangle actually behave as if they live in 1.58 dimensions. The results of the study show how bonding and non-bonding Sierpiński triangles are separated in energy, yielding attractive opportunities for transmitting currents through these fractal structures. In the bonding case, the electrons are connected and can easily go from one place to another (high transmission), whereas in the non-bonding case they are not connected and need to "jump" to another place (low transmission). In addition, by calculating the dimension of the electronic wavefunctions, the researchers showed that the wavefunctions themselves inherit this fractional dimension. "From a theoretical point of view, this is a very interesting and groundbreaking result," says theoretical physicist Cristiane de Morais Smith, who supervised the study together with experimental physicists Ingmar Swart and Daniël Vanmaekelbergh. "It opens a whole new line of research, raising questions such as: what does it actually mean for electrons to be confined in non-integer dimensions? Do they behave more like in one dimension or in two dimensions? And what happens if a magnetic field is turned on perpendicularly to the sample? Fractals already have a very large number of applications, so these results may have a big impact on research at the quantum scale."
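The generation filtering shown in the paper's Fig. 4 amounts to a low-pass filter in reciprocal space. A minimal sketch (Python/numpy) is given below; the pixel size and cutoff values are placeholders, and the assumption that the quoted k values include the 2π factor is ours.

```python
import numpy as np

def fourier_lowpass(dIdV_map, pixel_nm, k_max):
    """Zero all spatial frequencies with |k| > k_max (nm^-1), transform back."""
    F = np.fft.fftshift(np.fft.fft2(dIdV_map))
    ky = np.fft.fftshift(np.fft.fftfreq(dIdV_map.shape[0], d=pixel_nm))
    kx = np.fft.fftshift(np.fft.fftfreq(dIdV_map.shape[1], d=pixel_nm))
    KX, KY = np.meshgrid(kx, ky)
    K = 2 * np.pi * np.hypot(KX, KY)   # assumes k includes the 2*pi factor
    F[K > k_max] = 0.0
    return np.fft.ifft2(np.fft.ifftshift(F)).real

# e.g. keeping only |k| <= 0.5 nm^-1 (the yellow circle of Fig. 4a) should
# return a G(1)-like pattern at the full G(3) size:
# g1_like = fourier_lowpass(dIdV_map, pixel_nm=0.1, k_max=0.5)
```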
10.1038/s41567-018-0328-0
Medicine
New osteoarthritis genes discovered
Genome-wide analyses using UK Biobank data provide insights into the genetic architecture of osteoarthritis, Nature Genetics (2018). nature.com/articles/doi:10.1038/s41588-018-0079-y Journal information: Nature Genetics
http://nature.com/articles/doi:10.1038/s41588-018-0079-y
https://medicalxpress.com/news/2018-03-osteoarthritis-genes.html
Abstract Osteoarthritis is a common complex disease imposing a large public-health burden. Here, we performed a genome-wide association study for osteoarthritis, using data across 16.5 million variants from the UK Biobank resource. After performing replication and meta-analysis in up to 30,727 cases and 297,191 controls, we identified nine new osteoarthritis loci, in all of which the most likely causal variant was noncoding. For three loci, we detected association with biologically relevant radiographic endophenotypes, and in five signals we identified genes that were differentially expressed in degraded compared with intact articular cartilage from patients with osteoarthritis. We established causal effects on osteoarthritis for higher body mass index but not for triglyceride levels or genetic predisposition to type 2 diabetes. Main Osteoarthritis is the most prevalent musculoskeletal disease and the most common form of arthritis 1 . The hallmarks of osteoarthritis are degeneration of articular cartilage, remodeling of the underlying bone and synovitis 2 . A leading cause of disability worldwide, osteoarthritis affects 40% of individuals over the age of 70 and is associated with an elevated risk of comorbidity and death 3 . The rising health economic burden of osteoarthritis is commensurate with rising longevity and obesity rates, and there is currently no curative therapy. The heritability of osteoarthritis is ~50%, and previous genetic studies have identified 21 loci in total, traversing hip, knee and hand osteoarthritis with limited overlap 3 . Here, we conducted a large osteoarthritis genome-wide association study (GWAS), using genotype data across 16.5 million variants from UK Biobank. We defined osteoarthritis on the basis of both self-reported status and linkage to Hospital Episode Statistics data, as well as the joint specificity of the disease (knee and/or hip) (Supplementary Fig. 1 ). Results Disease definition and power to detect genetic associations We compared and contrasted the hospital-diagnosed ( n = 10,083 cases) and self-reported ( n = 12,658 cases) osteoarthritis GWAS drawn from the same UK Biobank dataset (with selection of approximately four times more nonosteoarthritis controls than cases to preserve power for common alleles while avoiding case–control imbalance that might cause association tests to misbehave for low-frequency variants 4 ) (Supplementary Tables 1 – 3 , Supplementary Figs. 2 – 4 and Methods ). We found power advantages with the self-reported dataset, thus indicating that the higher sample size overcame the limitations associated with phenotype uncertainty. When evaluating the accuracy of disease definition, we found that self-reported osteoarthritis had a modest positive predictive value (PPV; 30%) and sensitivity (37%), but high negative predictive value (95%) and specificity, correctly identifying 93% of individuals who did not have osteoarthritis (Supplementary Table 4 ). In terms of power to detect genetic associations, the self-reported-osteoarthritis dataset had clear advantages commensurate with its larger sample size (Fig. 1 ). For example, for a representative complex-disease-associated variant with a minor allele frequency (MAF) of 30% and an allelic odds ratio (OR) of 1.10, the self-reported and hospital-diagnosed osteoarthritis analyses had 80% and 56% power, respectively, to detect an effect at genome-wide significance (i.e., P < 5.0 × 10 −8 ; Supplementary Table 5 ). Fig. 1: Power to detect association in the discovery stage. 
OR and 95% CI values are shown as a function of MAF. Diamonds, newly reported variants; circles, known variants. The curves indicate 80% power at the genome-wide-significance threshold of P < 5.0 × 10 −8 for the number of cases and controls of each trait at the discovery stage (likelihood ratio test). OA, osteoarthritis. Full size image We found nominally significant evidence of concordance between the direction of effect at previously reported osteoarthritis loci and the discovery analyses for hospital-diagnosed-osteoarthritis definitions (Supplementary Tables 6 and 7 , and Supplementary Note ), thus indicating that a narrower definition of disease may provide better effect-size estimates despite being limited by power to identify robust statistical evidence of association. Heritability estimates across osteoarthritis definitions We found that common-frequency variants explained 12% of osteoarthritis heritability when using self-reported status and explained 16% of osteoarthritis heritability when using hospital records (19% of hip-osteoarthritis and 15% of knee-osteoarthritis heritability) (Supplementary Table 8 ). The heritability estimates from self-reported and hospital records were not significantly different (Supplementary Table 9 ). The concordance between self-reported and hospital-diagnosed osteoarthritis was further substantiated by the high genetic-correlation estimate of the two disease definitions (87%, P = 3.14 × 10 −53 ) (Supplementary Table 10 ). We found strong genome-wide correlation between hip osteoarthritis and knee osteoarthritis (88%, P = 1.96 × 10 −6 ), even though the previously reported osteoarthritis loci are predominantly not shared between the two osteoarthritis joint sites. From this new observation of a substantial shared genetic etiology, we sought replication of association signals across joint sites. Identification of novel osteoarthritis loci We used 173 variants with P <1.0 × 10 −5 and MAF >0.01 for replication in an Icelandic cohort of up to 18,069 cases and 246,293 controls (Supplementary Fig. 1 , Supplementary Tables 11 – 15 and Methods ). Given the number of variants, the replication significance threshold was P <2.9 × 10 −4 . After meta-analysis in up to 30,727 cases and 297,191 controls, we identified six genome-wide-significant associations at novel loci and three further replicating signals just below the corrected genome-wide-significance threshold (Table 1 and Fig. 2 ). Table 1 Association summary statistics for the nine signals Full size table Fig. 2: Regional association plots for the nine novel osteoarthritis loci. The y axis represents the negative logarithm (base 10) of the variant P value (likelihood ratio test), and the x axis represents the position on the chromosome (chr), with the names and location of genes and nearest genes shown at the bottom. The variant with the lowest P value in the region after combined discovery and replication is marked by a purple diamond. The same variant is marked by a purple dot showing the discovery P value. The colors of the other variants indicate their r 2 with the lead variant. Full size image We identified association between rs2521349 and hip osteoarthritis (OR 1.13 (95% confidence interval (CI) 1.09–1.17), P = 9.95 × 10 −10 , effect-allele frequency (EAF) 0.37). rs2521349 resides in an intron of MAP2K6 on chromosome 17. 
MAP2K6 encodes an essential component of the p38 MAP kinase–mediated signal-transduction pathway, which is involved in various cellular processes in bone, muscle, fat-tissue homeostasis and differentiation 5 . The MAPK signaling pathway is closely associated with osteoblast differentiation 6 , chondrocyte apoptosis and necrosis 7 , and has been reported to be differentially expressed in osteoarthritis synovial-tissue samples 6 , 7 , 8 , 9 , 10 , 11 , 12 . In animal-model studies, p38 MAP kinase activity has been found to be important in maintaining cartilage health, and it has been proposed as a potential osteoarthritis diagnosis and treatment target 10 , 13 , 14 . rs11780978 on chromosome 8 is also associated with hip osteoarthritis with a similar effect size (OR 1.13 (95% CI 1.08–1.17), P = 1.98 × 10 −9 , EAF 0.39). This variant is located in the intronic region of PLEC (plectin gene). We found rs11780978 to be nominally associated with the radiographically derived endophenotype of minimal joint-space width (β –0.0291, s.e.m. 0.0129, P = 0.024) (Table 2 and Methods ). The direction of the effect was consistent with the established clinical association between joint-space narrowing and osteoarthritis. PLEC encodes plectin, a structural protein that interlinks components of the cytoskeleton. Functional studies in mice have shown an effect on skeletal-muscle tissue correlated with low body weight, small size and slow postnatal growth 15 . Table 2 Association of the nine osteoarthritis loci with radiographically derived osteoarthritis endophenotypes Full size table rs2820436, an intergenic variant located 24 kb upstream of the long-noncoding-RNA gene RP11-392O17.1 and 142 kb downstream of ZC3H11B (zinc-finger CCCH-type containing 11B pseudogene), is associated with osteoarthritis across any joint site (OR 0.93 (95% CI 0.91–0.95), P = 2.01 × 10 −9 , EAF 0.65). It also resides within a region with multiple metabolic- and anthropometric-trait-associated variants, with which it was found to correlate ( r 2 0.18–0.88). rs375575359 resides in an intron of ZNF345 (zinc-finger-protein 345 gene) on chromosome 19. It was prioritized on the basis of osteoarthritis at any joint site and was more strongly associated with knee osteoarthritis in the replication dataset (OR 1.21 (95% CI 1.14–1.30), P = 7.54 × 10 −9 , EAF 0.04). Similarly, rs11335718 on chromosome 4 was associated with osteoarthritis in the discovery stage and with knee osteoarthritis in the replication stage (OR 1.11 (95% CI 1.07–1.16), P = 4.26 × 10 −8 , EAF 0.10). We note that Bonferroni correction for the effective number of traits tested caused rs11335718 to no longer reach genome-wide significance, with a meta-analysis P = 4.26 × 10 −8 . rs11335718 is an intronic variant in ANXA3 , the annexin A3 gene. Through meta-analysis of the any-site-osteoarthritis phenotype across the discovery and replication datasets, we determined P = 2.6 × 10 −5 and P = 1.32 × 10 −7 for rs375575359 and rs11335718, respectively (Supplementary Table 11 ). A recent mouse-model study supports the involvement of expression of a similar-motif zinc-finger-protein (ZFP36L1) with osteoblastic differentiation 16 . rs3771501 (OR 0.94 (95% CI 0.92–0.96), P = 1.66 × 10 −8 , EAF 0.53) is associated with osteoarthritis at any site and resides in an intron of TGFA (transforming growth factor alpha gene). TGFA encodes an epidermal-growth-factor-receptor ligand and is an important integrator of cellular signaling and function. 
We detected association of rs3771501 with minimal joint-space width (β –0.0699, s.e.m. 0.0127, P = 3.45 × 10 −8 ) (Table 2 and Methods ); i.e., the osteoarthritis-risk-increasing allele was also associated with lower joint-cartilage thickness in humans. A perfectly correlated variant in this gene has previously been associated with cartilage thickness and with hip osteoarthritis; moreover, this variant has been found to be differentially expressed in osteoarthritis cartilage lesions compared with nonlesioned cartilage 17 . Functional studies have shown that TGFA regulates the conversion of cartilage to bone during the process of endochondral bone growth, and that it is a dysregulated cytokine present in degrading cartilage in osteoarthritis and a strong stimulator of cartilage degradation upregulated by articular chondrocytes in experimentally induced and human osteoarthritis 18 , 19 , 20 , 21 . The function of TGFA has also been associated with craniofacial development, palate closure and small body size 22 . rs864839 resides in the intronic region of JPH3 (junctophilin 3 gene) on chromosome 16 and was discovered in the any-joint-site osteoarthritis analysis. It was more strongly associated with hip osteoarthritis in the replication dataset (OR 1.08 (95% CI 1.05–1.11), P = 2.1 × 10 −8 , EAF 0.71). Through meta-analysis of the any-site-osteoarthritis phenotype across the discovery and replication datasets, we determined P = 7.02 × 10 −6 (Supplementary Table 11 ). JPH3 is involved in the formation of junctional membrane structure, and it regulates neuronal calcium flux and has been reported to be expressed in pancreatic beta cells and in the regulation of insulin secretion. rs116882138 was most strongly associated with hip and/or knee osteoarthritis in the discovery dataset and with knee osteoarthritis in the replication dataset (OR 1.34 (95% CI 1.21–1.49), P = 5.09 × 10 −8 , EAF 0.02). It is an intergenic variant located 11 kb downstream of MOB3B (kinase activator 3B gene) and 16 kb upstream of EQTN (equatorin sperm-acrosome-associated gene) on chromosome 9. We found rs116882138 to be nominally associated with acetabular dysplasia, as determined by the center-edge angle (β –1.1388, s.e.m. 0.5276, P = 0.031) (Table 2 and Methods ). Finally, rs6516886 was prioritized on the basis of the hip and/or knee osteoarthritis-discovery analysis and was more strongly associated in the hip-osteoarthritis replication dataset (OR 1.10 (95% CI 1.06–1.14), P = 5.84 × 10 −8 , EAF 0.75). rs6516886 is situated 1 kb upstream of RWDD2B (RWD-domain-containing 2B gene) on chromosome 21. LTN1 (listerin E3 ubiquitin protein ligase 1 gene), which is located 28 kb from the variant, has been reported to affect musculoskeletal development in a mouse model 23 . Functional analysis Using molecular phenotyping through quantitative proteomics and RNA sequencing, we tested whether coding genes within 1 Mb of the novel osteoarthritis-associated variants were differentially expressed at 1% false discovery rate (FDR) in chondrocytes extracted from intact compared with degraded cartilage from patients with osteoarthritis undergoing total-joint-replacement surgery (Table 3 and Methods ). 
Table 3 Genes in the osteoarthritis-associated signals with significantly different gene expression and/or protein abundance in intact versus degraded articular cartilage Full size table PCYOX1 , located 209 kb downstream of rs3771501, showed significant evidence of differential expression (1.21-fold higher postnormalization in degraded cartilage at the RNA level, q = 0.0047; and 1.17-fold lower abundance at the protein level, q = 0.0042). This discrepancy may indicate potential clinical relevance, because the gene product is a candidate biomarker for osteoarthritis progression. Prenylcysteine oxidase 1, the protein product of this gene, is a secreted protein that catalyzes the degradation of prenylated proteins 24 and has been identified in urinary exosomes 25 . Further investigation into the chondrocyte and peripheral secretome is warranted to assess the potential of this molecule as a biomarker for osteoarthritis progression. PCYOX1 has been reported to be overexpressed in human dental-pulp-derived osteoblasts compared with osteosarcoma cells 26 . FAM136A , located 188 kb upstream of the same variant (rs3771501), showed 1.13-fold-lower transcriptional levels in chondrocytes from degraded articular cartilage ( q = 0.0066). BACH1 and MAP3K7CL , located in the vicinity of rs6516886, showed evidence of differential transcription (1.26-fold higher, q = 0.0019, and 1.37-fold higher, q = 0.0021, respectively, in degraded tissue). BACH1 is a transcriptional repressor of heme oxygenase-1. Studies in Bach1-deficient mice have independently suggested inactivation of Bach1 as a novel target for the prevention and treatment of meniscal degeneration 27 and of osteoarthritis 28 . Finally, PLAA and ZNF382 , located proximal to rs116882138 and rs375575359, respectively, showed higher transcription levels in degraded compared with intact cartilage (1.15-fold, q = 0.0027, and 1.31-fold, q = 0.0031, respectively). BOP1 , located 451 kb downstream of rs11780978, showed 1.17-fold lower levels of transcription in degraded tissue ( q = 0.003). We examined evidence for expression quantitative trait loci (eQTLs) in the Genotype-Tissue Expression GTEx tissues and found that none of the eQTLs identified at 5% FDR overlapped with the genes identified as differentially expressed between osteoarthritis intact and degraded cartilage ( Supplementary Note and Supplementary Table 16 ). Fine mapping indicates noncoding variants at all loci For five of the new loci, the sum of probabilities of causality of all variants in the fine-mapped region was ≥0.95 ( >0.99 for two signals) and was >0.93 for two further loci (Supplementary Table 17 and Methods ). Most variants within each credible set had marginal posterior probabilities, whereas only a small number of variants had a posterior probability of association (PPA) > 0.1; these accounted for 25–92% of PPA across the different regions. The credible set of four signals was narrowed down to three variants, one signal to two variants, and one signal to one variant, with a probability of causality >0.1. For all nine regions, the variant identified as most likely to be causal was noncoding (Supplementary Table 18 , Supplementary Note and Supplementary Fig. 5 ). Gene-based analyses Gene-set analysis identified UQCC1 and GDF5 , located close to each other on chromosome 20, as key genes with consistent evidence of significant association with osteoarthritis across phenotype definitions (Supplementary Table 19 and Supplementary Note ). 
UQCC1 and GDF5 were significantly associated with four and three of the five osteoarthritis definitions, respectively. GDF5 encodes growth differentiation factor 5, a member of the TGFβ superfamily, and accruing evidence indicates that it plays a central role in skeletal health and development 29 , 30 , 31 , 32 . Pathway analyses identified significant associations between self-reported osteoarthritis and anatomical-structure morphogenesis ( P = 4.76 × 10 −5 ) or ion-channel transport ( P = 8.98 × 10 −5 ); hospital-diagnosed hip osteoarthritis and activation of MAPK activity ( P = 1.61 × 10 −5 ); hospital-diagnosed knee osteoarthritis and histidine metabolism ( P = 1.02 × 10 −5 ); and hospital-diagnosed hip and/or knee osteoarthritis and recruitment of mitotic centrosome proteins and complexes ( P = 8.88 × 10 −5 ) (Supplementary Table 20 and Supplementary Fig. 6 ). Genetic links between osteoarthritis and other traits Established clinical risk factors for osteoarthritis include old age, female sex, obesity, occupational exposure to high levels of joint loading activity, previous injury, smoking status and family history of osteoarthritis. We estimated the genome-wide genetic correlation between osteoarthritis and 219 other traits and diseases and identified 35 phenotypes with significant (5% FDR) genetic correlation with osteoarthritis across definitions, with large overlap between the identified phenotypes (Supplementary Fig. 7 , Fig. 3 , Supplementary Table 21 and Methods ). Fig. 3: Heat map of genetic correlations between osteoarthritis phenotypes in UK Biobank and 35 traits grouped in ten categories from GWAS consortia. Symbols and hues depict the two-tailed Benjamini–Hochberg FDR q values and strength of the genetic correlation (darker shade denotes stronger correlation), respectively. Red and blue indicate positive and negative correlations, respectively. RP, reproductive; SL, sleep; OA, osteoarthritis. Full size image The phenotypes with significant genetic correlations (rg) fell into the following broad categories: obesity, body mass index (BMI) and related anthropometric traits (rg >0); type 2 diabetes (rg >0); educational achievement (rg <0); neuroticism, depressive symptoms (rg >0) and sleep duration (rg <0); mother’s, father’s or parents’ age at death (rg <0); reproductive phenotypes, including age at first birth (rg <0) and number of children born (rg >0); smoking, including age of smoking initiation (rg <0) and having ever smoked (rg >0), and lung cancer (rg >0) (Fig. 3 , Supplementary Table 21 ). The four phenotypes with significant genetic correlation in all analyses were number of years of schooling, waist circumference, hip circumference and BMI. We found a nominally significant positive genetic correlation with rheumatoid arthritis, which did not pass multiple-testing correction for self-reported and hospital-diagnosed osteoarthritis (rg = 0.14–0.19, FDR 10–12%). Among musculoskeletal phenotypes, lumbar-spine bone mineral density showed a positive genetic correlation with hospital-diagnosed hip and/or knee osteoarthritis (rg = 0.2, FDR = 3%) but did not reach significance in other analyses. Disentangling causality We undertook Mendelian randomization (MR) analyses 33 to strengthen causal inference regarding modifiable exposures that might influence osteoarthritis risk (Supplementary Tables 22 – 25 and Methods ). Each kg/m 2 increment in body mass index was predicted to increase the risk of self-reported osteoarthritis by 1.11 (95% CI 1.07–1.15, P = 8.3 × 10 −7 ). 
This result was consistent across MR methods (OR 1.52–1.66) and disease definition (OR 1.66–2.01). Consistent results were also observed for other obesity-related measures, such as waist circumference (OR 1.03 per cm increment; 95% CI 1.02–1.05, P = 5 × 10 −4 ) and hip circumference (OR 1.03 per cm increment; 95% CI 1.01–1.06, P = 0.021). The OR values for type 2 diabetes liability and triglycerides were consistently small across estimators and osteoarthritis definitions; given that the analyses involving those traits were well powered (Supplementary Table 26 ), these results are compatible with either a weak or no causal effect. The results for years of schooling were not consistent across estimators, and there was evidence of directional horizontal pleiotropy, thus hampering any causal interpretation (Fig. 4 ). For lumbar-spine bone mineral density, there was evidence of a causal effect with OR per s.d. increment of 1.28 (95% CI 1.11–1.47, P = 0.002) for hip and/or knee osteoarthritis. This effect appeared to be site specific, with OR of 1.29 (95% CI 1.06–1.57, P = 0.014) for knee osteoarthritis, whereas the OR for hip osteoarthritis ranged from 0.71 to 1.57. There was also some evidence of a site-specific causal effect of height on knee osteoarthritis (OR 1.13 per s.d. increment; 95% CI 1.02–1.25, P = 0.023), which was consistent across estimators. One-sample MR analyses corroborated these findings, and obesity-related phenotypes presented strong statistical evidence after multiple-testing correction (Supplementary Table 27 ). These analyses did not detect reliable effects of smoking or reproductive traits on osteoarthritis (Supplementary Tables 28 and 29 ). Fig. 4: Two-sample MR estimates and 95% CI values of the effects of obesity-related measures, triglyceride levels, years of schooling and type 2 diabetes liability on different definitions of osteoarthritis. All values are shown in s.d. units except for type 2 diabetes liability, which is shown in ln(OR) units. HD, hospital diagnosed; IVW, inverse-variance weighting; MBE, mode-based estimate; MBE (1), tuning parameter φ = 1; MBE (0.5), tuning parameter φ = 0.5. Full size image Discussion To improve understanding of the genetic etiology of osteoarthritis, we conducted a study combining genotype data in up to 327,918 individuals. We identified six novel, robustly replicating loci associated with osteoarthritis, three of which fell just under the corrected genome-wide-significance threshold. These loci provide a substantial increase in the number of known osteoarthritis loci. Together, all established osteoarthritis loci accounted for 26.3% of trait variance (Supplementary Fig. 8 ). The key attributes of this study were the large sample size and the homogeneity of the UK Biobank dataset, coupled with independent replication, independent association with clinically relevant radiographic endophenotypes and functional genomics follow-up in primary osteoarthritis tissue. We further capitalized on the wealth of available genome-wide summary statistics across complex traits to identify genetic correlations between osteoarthritis and multiple molecular, physiological and behavioral phenotypes, and we performed formal MR analyses to assess causality and disentangle complex cross-trait epidemiological relationships. Most novel signals were at common frequency variants and conferred small to modest effects, in line with a highly polygenic model underpinning osteoarthritis risk. 
We identified one low-frequency variant associated with osteoarthritis (MAF 0.02) with a modest effect size (combined OR 1.34). Even though our study was well powered to detect such variants, we found no evidence of a role of low-frequency variation of large effect in osteoarthritis susceptibility (Supplementary Table 5 ). The power of this study was very limited for low-frequency variants with OR <1.50 and for rare variants. We estimate a requirement of up to 40,000 osteoarthritis cases and 160,000 controls to recapitulate the effects identified in this study at genome-wide significance, on the basis of the sample-size-weighted effect-allele frequencies and replication-cohort odds-ratio estimates (Table 1 and Supplementary Table 30 ). We integrated functional information with statistical evidence for association to fine map the locations of likely causal variants and genes. All the predicted most likely causal variants resided in noncoding sequence: six were intronic, and three were intergenic. We were able to refine the association signal to a single variant in one instance, and to variants residing within a single gene in three instances, although the mechanisms of action may be mediated through other genes in the vicinity. We empirically found self-reported osteoarthritis definition to be a powerful tool for genetic association studies, as evidenced, for example, by the genome-wide significance reached for the established GDF5 osteoarthritis locus in only the self-reported disease-status analyses. Published epidemiological studies investigating osteoarthritis via self-reporting 34 , 35 and validation of self-reported status against primary-care records has yielded similar conclusions 34 . We also found very high genetic correlation between self-reported and hospital-diagnosed osteoarthritis, as well as similar variant-based heritability estimates, thus corroborating the validity of self-reported osteoarthritis status for genetic studies. However, we also note that the hospital-diagnosed-osteoarthritis analyses had higher heritability and yielded stronger evidence of effect-direction concordance at established loci, thus indicating that larger sample sizes would afford the power required to convincingly detect the established loci. Hospital-diagnosed-osteoarthritis data may potentially capture a different patient demographic than self-reported data ( Supplementary Note ). From the results of this study, we deduce that there is no gold standard for osteoarthritis definition in genetics studies, and we identified advantages in using both methods of defining disease to broadly maximize discovery power. We identified strong genome-wide correlation between hip and knee osteoarthritis, thus indicating a substantial shared genetic etiology that has been hitherto overlooked. We therefore sought to replicate signals across these highly correlated phenotypes and to identify multiple instances of signals detected in the larger discovery analysis of osteoarthritis and independently replicated in joint-specific definitions of disease. Indeed, when examining the replication phenotypes, we found no instances of confirmed replication in which the replication phenotype was not captured within the accompanying discovery-phenotype definition. Further analysis in larger sample sets with precise phenotyping should help distinguish signal specificity. Two of the newly identified signals, indexed by rs11780978 and rs2820436, resided in regions with established metabolic- and anthropometric-trait associations. 
Osteoarthritis is epidemiologically associated with high BMI, and the association is stronger for knee osteoarthritis. In line with this finding, we observed higher genetic correlation between BMI and knee osteoarthritis (rg = 0.52, P = 2.2 × 10 −11 ) compared with hip osteoarthritis (rg = 0.28, P = 4 × 10 −4 ). BMI is also known to be genetically correlated with education phenotypes, depressive symptoms, and reproductive and other phenotypes; hence, some of the genetic correlations for osteoarthritis observed here may be mediated through BMI. However, for the education and personality/psychiatric phenotypes, the strength of the genetic correlations observed here for osteoarthritis was substantially higher than that observed for BMI (for example, hospital-diagnosed osteoarthritis and years of schooling had rg = –0.45, P = 5 × 10 −27 , whereas BMI and years of schooling had rg = −0.27, P = 9 × 10 −32 ; hospital-diagnosed osteoarthritis and depressive symptoms had rg = 0.49, P = 6 × 10 −7 , whereas BMI and depressive symptoms had rg = 0.10, P = 0.023). Epidemiologically, lower educational levels are known to be particularly associated with risk of knee osteoarthritis, even with adjustment for BMI 36 . MR provided further insight into the nature of the genetic correlations that we observed. In the case of BMI and other obesity-related measures, there was evidence of a causal effect of those phenotypes on osteoarthritis. This result corroborated findings from conventional observational studies 37 , which are prone to important limitations (such as reverse causation and residual confounding) regarding causal inference 38 . For all other exposure phenotypes, there was no convincing evidence of a causal effect on osteoarthritis risk, thus suggesting that the genetic correlations detected by linkage disequilibrium (LD)-score regression may be mostly due to horizontal pleiotropy, although for some phenotypes the MR analyses were underpowered (Supplementary Table 26 ). In the case of triglycerides and liability to type 2 diabetes, the MR analyses had sufficient power to rule out nonsmall causal effects, thus suggesting that these phenotypes have at most weak effects on osteoarthritis risk. Importantly, structural changes in joints usually precede the onset of osteoarthritis symptoms. Articular cartilage is an avascular, aneural tissue. It provides tensile strength, compressive resilience and a low-friction articulating surface. Chondrocytes are the only cell type in cartilage. The mode of function of noncoding DNA is linked to context-dependent regulation of gene expression, and identification of the causal variants and the genes that they affect requires experimental analysis of genome regulation in the proper cell type. Our functional analysis of genes in osteoarthritis-associated regions and pathways identified differentially expressed molecules in chondrocytes extracted from degraded compared with intact articular cartilage. Cartilage degeneration is a key hallmark of osteoarthritis pathogenesis, and regulation of these genes may be implicated in disease development and progression. Osteoarthritis is a leading cause of disability worldwide, and it imposes a substantial public-health and health economic burden. Here, we gleaned novel insights into the genetic etiology of osteoarthritis and implicated genes with translational potential 10 , 13 , 14 , 27 , 28 . The cohorts contributing to this study were composed of European-descent populations. 
In the future, large-scale whole-genome-sequencing studies of well-phenotyped individuals across diverse populations should capture the full spectrum of allele frequencies and variation types, and afford further insights into the causes of this debilitating disease. URLs Quanto, ; genotyping and quality control of UK Biobank, ; genotype imputation of UK Biobank, ; GRCh38 cDNA assembly release 87, . Methods Accuracy of self-reported data We evaluated the classification accuracy of self-reported disease status by estimating the sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) in the self-reported and hospital-diagnosed disease-definition datasets. We performed a sensitivity test to evaluate the true-positive rate by calculating the proportion of individuals diagnosed with osteoarthritis who were correctly identified as such in the self-reported analysis, and a specificity test to evaluate the true-negative rate by calculating the proportion of individuals not diagnosed with osteoarthritis who were correctly identified as such in the control set. The number of individuals overlapping between the self-reported (\(n_{\mathrm{SR}}\) = 12,658) and hospital-diagnosed (\(n_{\mathrm{HD}}\) = 10,083) datasets was \(n_{\mathrm{OVER}}\) = 3,748. The total number of individuals was \(n_{\mathrm{TOT}}\) = 138,997. \(\mathrm{Sensitivity}=\frac{n_{\mathrm{OVER}}}{n_{\mathrm{HD}}}\); \(\mathrm{specificity}=\frac{n_{\mathrm{TOT}}-(n_{\mathrm{HD}}+n_{\mathrm{SR}}-n_{\mathrm{OVER}})}{n_{\mathrm{TOT}}-n_{\mathrm{HD}}}\); \(\mathrm{PPV}=\frac{n_{\mathrm{OVER}}}{n_{\mathrm{SR}}}\); \(\mathrm{NPV}=\frac{n_{\mathrm{TOT}}-(n_{\mathrm{HD}}+n_{\mathrm{SR}}-n_{\mathrm{OVER}})}{n_{\mathrm{TOT}}-n_{\mathrm{SR}}}\). Discovery GWAS UK Biobank's scientific protocol and operational procedures were reviewed and approved by the North West Research Ethics Committee (REC reference no. 06/MRE08/65). The first UK Biobank release of genotype data included ~150,000 volunteers between 40 and 69 years old from the UK, genotyped at 820,967 SNPs. Of these, 50,000 samples were genotyped with the UKBiLEVE array, and the remaining samples were genotyped with the UK Biobank Axiom array (Affymetrix; URLs). The UK Biobank Axiom is an update of UKBiLEVE, and the two arrays share 95% of their content. In total, after sample and SNP quality control (QC), which was carried out centrally, 152,763 individuals and 806,466 directly typed SNPs remained. Phasing, imputation and derivation of principal components were also carried out centrally. Briefly, the combined UK10K/1000 Genomes Project haplotype reference panel was used to impute untyped variants through the IMPUTE3 program (URLs). After imputation, the number of variants reached 73,355,667 in 152,249 individuals. We performed additional QC checks and excluded samples with call rate ≤97%. We checked samples for sex discrepancies, excess heterozygosity, relatedness and ancestry, and removed possibly contaminated and withdrawn samples. After QC, the number of individuals was 138,997. We excluded 528 SNPs that had been centrally flagged for exclusion owing to failure of one or more additional quality metrics. To define osteoarthritis cases, we used the self-reported status questionnaire and the Hospital Episode Statistics data (Supplementary Table 3 and Supplementary Note). We conducted five osteoarthritis-discovery GWAS and one sensitivity analysis.
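Plugging the reported overlap counts into the classification-accuracy formulas above is a one-line check per metric; a minimal sketch:

```python
# Classification accuracy of self-reported osteoarthritis status, using the
# overlap counts reported above (hospital diagnosis taken as the reference).
n_sr, n_hd, n_over, n_tot = 12_658, 10_083, 3_748, 138_997

sensitivity = n_over / n_hd
specificity = (n_tot - (n_hd + n_sr - n_over)) / (n_tot - n_hd)
ppv = n_over / n_sr
npv = (n_tot - (n_hd + n_sr - n_over)) / (n_tot - n_sr)

print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} "
      f"ppv={ppv:.3f} npv={npv:.3f}")
```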
The case strata were as follows: self-reported osteoarthritis at any site, n = 12,658; sensitivity analysis (a random subset of the self-reported cohort equal in sample size to the hospital-diagnosed cohort), n = 10,083; hospital-diagnosed osteoarthritis at any site, on the basis of ICD10 and/or ICD9 hospital-record codes, n = 10,083; hospital-diagnosed hip osteoarthritis, n = 2,396; hospital-diagnosed knee osteoarthritis, n = 4,462; and hospital-diagnosed hip and/or knee osteoarthritis, n = 6,586. We applied exclusion criteria to minimize misclassification in the control datasets to the extent possible, using approximately four times the number of cases for each definition (Supplementary Table 2 and Supplementary Fig. 1). We restricted the number of controls rather than using the full set of available genotyped control samples from UK Biobank, to guard against association test statistics behaving anticonservatively in the presence of stark case–control imbalance for alleles with minor allele count (MAC) <400 (ref. 4) (analogous to MAF of ~0.02 in the self-reported and hospital-diagnosed osteoarthritis datasets). For the control set, we excluded all participants diagnosed with any musculoskeletal disorder or having relevant symptoms or signs, such as pain and arthritis, and we selected older participants to minimize the number of controls that might be diagnosed with osteoarthritis in the future, while keeping the numbers of males and females balanced (Supplementary Table 1). At the SNP level, we further excluded variants with Hardy–Weinberg-equilibrium P ≤ 10−6, MAF ≤ 0.001 or info score < 0.4 (Supplementary Fig. 1). We tested for association by using the frequentist likelihood ratio test (LRT) and method ml in SNPTEST v2.5.2 (ref. 39), with adjustment for the first ten principal components to control for population structure. Power calculations were carried out in Quanto v1.2.4 (URLs). Replication Two hundred independent and novel variants with P < 1.0 × 10−5 in the discovery analyses were taken forward for in silico replication in an independent cohort from Iceland (deCODE), through fixed-effects inverse-variance-weighted meta-analysis in METAL 40. Of these, 173 variants were present in the replication cohort; the remaining 27 variants had ambiguous alleles (that is, alleles incompatible owing to alignment issues) and were not included in further analyses. The significance threshold for association in the replication study was hence 0.05/173 = 2.9 × 10−4. The deCODE dataset comprised four osteoarthritis phenotypes: any osteoarthritis site (18,069 cases and 246,293 controls), hip osteoarthritis (5,714 cases and 199,421 controls), knee osteoarthritis (4,672 cases and 172,791 controls) and hip and/or knee osteoarthritis (9,429 cases and 199,421 controls). We performed meta-analyses (across osteoarthritis definitions) using summary statistics from the UK Biobank osteoarthritis analyses and deCODE. We used P ≤ 2.8 × 10−8, the threshold corrected for the effective number of traits, to report genome-wide significance. Replication cohort The information on hip, knee and vertebral osteoarthritis was obtained from Landspitali University Hospital electronic health records, from Akureyri Hospital electronic health records and from a national Icelandic hip or knee arthroplasty registry 41.
Individuals with secondary osteoarthritis (for example, Perthes disease and hip dysplasia), post-trauma osteoarthritis (for example, ACL rupture) or a diagnosis of rheumatoid arthritis were excluded from these lists. Only individuals diagnosed with osteoarthritis after the age of 40 were included. Subjects with hand osteoarthritis were drawn from a database of 9,000 patients with hand osteoarthritis that was initiated in 1972 (ref. 42). The study was approved by the Data Protection Authority of Iceland and the National Bioethics Committee of Iceland. Informed consent was obtained from all participants. Association with osteoarthritis-related endophenotypes The nine replicating genetic loci were examined for association with radiographic osteoarthritis endophenotypes. This examination covered three phenotypes: minimal joint-space width (mJSW) and two measures of hip-shape deformity known to be strong predictors of osteoarthritis, namely acetabular dysplasia (measured with the center-edge (CE) angle) and cam deformity (measured with the alpha angle). For mJSW, association statistics for the variants were looked up in a previously published GWAS, which performed a joint analysis of data from Rotterdam Study I (RS-I), Rotterdam Study II (RS-II), TwinsUK, SOF and MrOS by using standardized residuals from linear regression adjusted for age, sex and population stratification (four principal components) 17. For the two hip-shape phenotypes, CE angle and alpha angle were measured as previously described. CE angle was analyzed as a continuous phenotype. We conducted GWAS on a total of 6,880 individuals from the RS-I, RS-II, Rotterdam Study III (RS-III) and CHECK 43 datasets, using standardized age- and sex-adjusted residuals from linear regression. For cam deformity, individuals with an alpha angle >60° were defined as cases (n = 639), and all others were defined as controls (n = 4,339). This GWAS was done on individuals from RS-I, RS-II and CHECK, using age, sex and principal components (to adjust for population stratification) as covariates. The results for the separate cohorts were combined in a meta-analysis using inverse-variance weighting in METAL 40. Genomic-control correction was applied to the standard errors and P values before meta-analysis. Functional genomics Patients and samples We collected cartilage samples from 38 patients undergoing total-joint-replacement surgery: 12 patients with knee osteoarthritis (cohort 1; 2 women, 10 men, age 50–88 years); 17 patients with knee osteoarthritis (cohort 2; 12 women, 5 men, age 54–82 years); and 9 patients with hip osteoarthritis (cohort 3; 6 women, 3 men, age 44–84 years). We collected matched intact and degraded cartilage samples from each patient (Supplementary Note). Cartilage was separated from bone, and chondrocytes were extracted from each sample. From each isolated chondrocyte sample, we extracted RNA and protein. All patients provided full written informed consent before participation. All sample-collection and RNA- and protein-extraction steps were as described in detail in ref. 44. This work was approved by Oxford NHS REC C (10/H0606/20), and samples were collected under Human Tissue Authority license 12182, Sheffield Musculoskeletal Biobank, University of Sheffield, UK. Samples were also collected under National Research Ethics approval reference 11/EE/0011, Cambridge Biomedical Research Centre Human Research Tissue Bank, Cambridge University Hospitals, UK (additional information in Supplementary Note).
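Both the replication analysis and the endophenotype analyses above combine per-cohort summary statistics by fixed-effects inverse-variance weighting (as implemented in METAL), with genomic-control correction applied beforehand in the endophenotype GWAS. A minimal sketch of both steps, using made-up effect sizes; the lambda formula follows the standard convention (median chi-square over its expected median):

```python
import numpy as np
from scipy.stats import chi2, norm

def genomic_control(beta, se):
    """Inflate standard errors by sqrt(lambda_GC), computed over
    genome-wide vectors of effect estimates and standard errors."""
    beta, se = np.asarray(beta), np.asarray(se)
    chisq = (beta / se) ** 2
    lam = max(1.0, np.median(chisq) / chi2.ppf(0.5, df=1))  # expected median ~0.455
    return beta, se * np.sqrt(lam)

def ivw_meta(betas, ses):
    """Fixed-effects inverse-variance-weighted meta-analysis of one variant."""
    w = 1.0 / np.asarray(ses) ** 2
    beta = np.sum(w * np.asarray(betas)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    p = 2.0 * norm.sf(abs(beta) / se)
    return beta, se, p

# Hypothetical per-cohort estimates for a single variant (log-odds scale).
print(ivw_meta(betas=[0.12, 0.09, 0.15], ses=[0.04, 0.05, 0.06]))
```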
Proteomics Proteomics analysis was performed on intact and degraded cartilage samples from 24 individuals (15 from cohort 2; 9 from cohort 3). LC–MS analysis was performed on a Dionex Ultimate 3000 UHPLC system coupled with an Orbitrap Fusion Tribrid mass spectrometer. To account for protein loading, abundance values were normalized by the sum of all protein abundances in a given sample, then log2 transformed and quantile normalized. We restricted the analysis to 3,917 proteins that were quantified in all samples. We tested proteins for differential abundance by using limma 45 in R, on the basis of a within-individual paired-sample design. Significance was defined at 1% Benjamini–Hochberg FDR to correct for multiple testing. Of the 3,732 proteins with unique mapping of gene name and Ensembl ID, we used 245 proteins with significantly different abundance between intact and degraded cartilage at 1% FDR. RNA sequencing We performed a gene expression analysis on samples from all 38 patients. Multiplexed libraries were sequenced on the Illumina HiSeq 2000 platform (75-bp paired-end read length), yielding bam files for cohort 1 and cram files for cohorts 2 and 3. The cram files were converted to bam files in samtools 1.3.1 (ref. 46) and then to fastq files in biobambam 0.0.191 (ref. 47), after exclusion of reads that failed QC. We obtained transcript-level quantification by using salmon 0.8.2 (ref. 48) (with --gcBias and --seqBias flags to account for potential biases) and the GRCh38 cDNA assembly release 87 downloaded from Ensembl (URLs). We converted transcript-level to gene-level count estimates, yielding estimates for 39,037 genes on the basis of Ensembl gene IDs. After quality control, we retained expression estimates for 15,994 genes with counts per million of 1 or higher in at least ten samples. Limma-voom 49 was used to remove heteroscedasticity from the estimated expression data. We tested genes for differential expression in limma 45 in R (with lmFit and eBayes), on the basis of a within-individual paired-sample design. Significance was defined at 1% Benjamini–Hochberg FDR to correct for multiple testing. Of the 14,408 genes with unique mapping of gene name and Ensembl ID, we used 1,705 genes with significantly different expression between intact and degraded cartilage at 1% FDR. Fine mapping We constructed regions for fine mapping by taking a window of at least 0.1 cM to either side of each index variant. The region was extended to the furthest variant with r2 > 0.1 with the index variant within a 1-Mb window. LD calculations for extending the region were based on whole-genome-sequenced EUR samples from the combined reference panel from UK10K 50 and the 1000 Genomes Project 51. For each region, we implemented the Bayesian fine-mapping method CAVIARBF 52, which uses association summary statistics and correlations among variants to calculate Bayes factors and the posterior probability of each variant being causal. We assumed a single causal variant in each region and calculated 95% credible sets, which contained the minimum set of variants jointly having at least 95% probability of including the causal variant. We also applied the extended CAVIARBF method, which uses functional annotation scores to upweight variants according to their predicted functional scores. To this end, we downloaded precalculated CADD 53 and Eigen 54 scores from their respective websites.
We observed better separation of severe-consequence genic variants with the CADD score and better separation of regulatory variants with the Eigen score; we therefore created a combined score in which splice-acceptor, splice-donor, stop-loss, stop-gain, missense and splice-region variants were assigned their CADD-Phred scores, and all other variants were assigned their Eigen-Phred scores. Functional enrichment analysis We used genome-wide summary statistics to test for enrichment of functional annotations. We used GARFIELD 55 with customized functional annotations, making use of the functional genomics data that we generated in primary articular chondrocytes by using RNA sequencing and quantitative proteomics. We defined differentially expressed genes separately at the RNA (transcriptional) level and at the protein level when comparing intact and degraded cartilage (1% FDR). We extended each differentially regulated gene by 5 kb on each side. Using GARFIELD's approach, we calculated the effective number of independent annotations to be 1.995, which led to an adjusted P-value significance threshold of 0.025. We tested for enrichment by using variants with P < 1.0 × 10−5, and no analysis surpassed the corrected significance threshold. LD-score regression We used LDHub 56 (accessed 23–27 January 2017) to estimate the genome-wide genetic correlation between each of the osteoarthritis definitions and 219 other human traits and diseases. In each analysis, we extracted variants with rsIDs (between 11,999,363 and 15,561,966 variants per analysis) and uploaded the corresponding association summary statistics to LDHub; 896,076–1,172,130 of these variants overlapped with LDHub. We corrected for multiple testing by defining significance at 5% Benjamini–Hochberg FDR for each of the five osteoarthritis analyses. Mendelian randomization analysis We used MR to assess the potential causal role of the phenotypes identified in the LD-score regression analysis on osteoarthritis. We also included birth weight and height (Supplementary Table 22). In all analyses, the primary outcome variable was self-reported osteoarthritis. We used data from hospital records (which were available for a much smaller number of individuals) for sensitivity analyses and to identify potential site-specific effects. Data sources Genetic instruments were identified from publicly available summary GWAS results through the TwoSampleMR R package, which allows for extraction of the data available in the MR-Base database 57. Only results that combined both sexes were extracted. Preference was given to studies restricted to European populations, to minimize the risk of bias due to population stratification; however, for several traits, those results were either not available or corresponded to much smaller studies (Supplementary Table 22). This aspect was unlikely to substantially bias the results, because all studies applied methods to correct for population stratification, and even the multiancestry studies were composed mostly of European populations. The exception was for the number of children born and age at birth of the first child: because the GWAS of reproductive traits by Barban and colleagues 58 was not available in MR-Base, we extracted summary association results for the variants that achieved genome-wide significance directly from the paper and used coefficients from each sex in sex-specific analyses. The search was performed on 19 June 2017.
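Several analyses above (the proteomics and RNA-sequencing comparisons at 1% FDR, and the LD-score-regression lookups at 5% FDR) control multiple testing with the Benjamini–Hochberg step-up procedure. A minimal sketch of that procedure, on toy p values:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of discoveries under the BH step-up procedure."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    discoveries = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()      # largest i with p_(i) <= q*i/m
        discoveries[order[: k + 1]] = True
    return discoveries

# Toy p values: the first three survive at q = 0.05 (0.03 <= 0.05 * 3/5).
print(benjamini_hochberg([1e-4, 2e-3, 0.03, 0.4, 0.9]))
```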
For each trait, all genetic instruments achieved the conventional level of genome-wide significance (P < 5.0 × 10−8) and were mutually independent (r2 < 0.001 between all pairs of instruments). Two-sample MR For the exposure phenotypes with at least one genetic instrument available, we used two-sample MR analysis to evaluate their causal effects on osteoarthritis risk. The exceptions were smoking and the reproductive traits, which were analyzed with one-sample MR only, because the analysis had to be performed within specific subgroups. All summary association results used for two-sample MR are shown in Supplementary Table 23, and Supplementary Table 24 provides an overall description of each set of genetic instruments. We applied the following methods. Ratio method. For exposure phenotypes with only one genetic instrument available, MR was performed with the ratio method, which consists of dividing the instrument–outcome regression coefficient by the instrument–exposure regression coefficient. The standard error of the ratio estimate can be calculated by dividing the instrument–outcome standard error by the instrument–exposure regression coefficient. Confidence intervals and P values were calculated with the normal approximation. Inverse-variance weighting (IVW). This method combines the ratio estimates from multiple instruments into a single pooled estimate. We used a multiplicative random-effects version of the method, which incorporates between-instrument heterogeneity into the confidence intervals. MR–Egger regression. This method yields consistent causal-effect estimates even if all instruments are invalid, provided that horizontal pleiotropic effects are uncorrelated with instrument strength (that is, provided the 'instrument strength independent of direct effects' (InSIDE) assumption holds). Weighted median. This method allows for consistent causal-effect estimation even if the InSIDE assumption is violated, provided that up to (but not including) 50% of the weights in the analysis come from invalid instruments. Mode-based estimate (MBE). The weighted MBE relies on the zero modal pleiotropy assumption (ZEMPA), which postulates that the largest subgroup (or the subgroup carrying the largest amount of weight in the analysis) of instruments estimating the same causal effect is composed of valid instruments. This procedure allows for consistent causal-effect estimation even if most instruments are invalid. The stringency of the method can be regulated by the φ parameter; we tested two values, φ = 1 (the default) and φ = 0.5 (half the default, or twice as stringent). For exposure phenotypes with more than one but fewer than ten genetic instruments, only the IVW method was applied, because the remaining methods are typically less well powered and require a relatively large number of genetic instruments to provide reliable results. The degree of weak-instrument bias (which corresponds to regression-dilution bias in two-sample MR) for the IVW and MR–Egger methods was quantified with the \(\frac{F_{XG}-1}{F_{XG}}\) and \(I_{GX}^{2}\) statistics, respectively. Both range from 0% to 100%, and \(100\left(1-\frac{F_{XG}-1}{F_{XG}}\right)\%\) and \(100\left(1-I_{GX}^{2}\right)\%\) can be interpreted as the amount of dilution in the corresponding causal-effect estimates. Given that only genome-wide-significant variants were selected as instruments, the \(\frac{F_{XG}-1}{F_{XG}}\) statistic was necessarily high (at least ~95%).
However, the \({I}_{GX}^{2}\) statistic depends on both instrument strength and heterogeneity between instrument-exposure associations, thus suggesting that regression dilution bias in MR–Egger can be substantial even if instruments are individually strong. Indeed, for some traits, the \({I}_{GX}^{2}\) statistic was very low (Supplementary Table 24 ). Therefore, all MR–Egger regression analyses were corrected for regression dilution with a simulation extrapolation (SIMEX) approach. Horizontal pleiotropy tests We additionally assessed the robustness of our findings to potential violations of the assumption of no horizontal pleiotropy by applying two tests of horizontal pleiotropy. One test was the MR–Egger intercept, which can be interpreted as the average instrument-outcome coefficient when the instrument-exposure coefficient is zero. If there is no horizontal pleiotropy, the intercept should be zero. Therefore, the intercept provides an indication of overall unbalanced horizontal pleiotropy. The second test was Cochran’s Q test of heterogeneity, which relies on the assumption that all valid genetic instruments estimate the same causal effect. Power calculations We performed power calculations to estimate the power of our two-sample MR analysis to detect odds ratios of 1.2, 1.5 and 2.0 ( Supplementary Note ). One-sample MR UK Biobank data were used to perform one-sample MR with the same genetic instruments as in the two-sample MR ( Supplementary Note ). Life Sciences Reporting Summary Further information on experimental design is available in the Life Sciences Reporting Summary. Data availability All RNA-sequencing data have been deposited in the European Genome/Phenome Archive (cohort 1, EGAD00001001331 ; cohort 2, EGAD00001003355 ; cohort 3, EGAD00001003354 ).
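As a numerical companion to the two-sample MR machinery described above, here is a hedged sketch of the Wald ratio, a multiplicative random-effects IVW estimate with Cochran's Q, and the MR–Egger intercept. The summary statistics are toys; the study itself used the TwoSampleMR R package, and this sketch omits the SIMEX dilution correction.

```python
import numpy as np

def ratio_estimate(beta_gy, se_gy, beta_gx):
    """Wald ratio for a single instrument, with first-order standard error."""
    return beta_gy / beta_gx, se_gy / abs(beta_gx)

def ivw(beta_gx, beta_gy, se_gy):
    """IVW estimate: weighted regression of outcome on exposure coefficients
    through the origin, weights 1/se_gy**2."""
    bx, by = np.asarray(beta_gx), np.asarray(beta_gy)
    w = 1.0 / np.asarray(se_gy) ** 2
    est = np.sum(w * bx * by) / np.sum(w * bx ** 2)
    se = np.sqrt(1.0 / np.sum(w * bx ** 2))
    q = np.sum(w * (by - est * bx) ** 2)           # Cochran's Q heterogeneity statistic
    se *= max(1.0, np.sqrt(q / (len(bx) - 1)))     # multiplicative random-effects scaling
    return est, se, q

def egger(beta_gx, beta_gy, se_gy):
    """Weighted MR-Egger regression; a nonzero intercept flags directional pleiotropy."""
    bx = np.abs(np.asarray(beta_gx))               # orient to positive exposure effects
    by = np.sign(beta_gx) * np.asarray(beta_gy)
    w = 1.0 / np.asarray(se_gy) ** 2
    X = np.column_stack([np.ones_like(bx), bx])
    coef = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * by))
    return coef[0], coef[1]                        # intercept, slope

# Toy instrument-level summary statistics.
beta_gx = np.array([0.10, 0.08, 0.12, 0.09])
beta_gy = np.array([0.020, 0.018, 0.026, 0.015])
se_gy = np.array([0.005, 0.006, 0.005, 0.007])
print(ivw(beta_gx, beta_gy, se_gy))
print(egger(beta_gx, beta_gy, se_gy))
```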
In the largest study of its kind, nine novel genes for osteoarthritis have been discovered by scientists from the Wellcome Sanger Institute and their collaborators. Results of the study, published today (19 March) in Nature Genetics, could open the door to new targeted therapies for this debilitating disease in the future. Almost nine million people in the UK suffer from osteoarthritis, a degenerative joint disease in which a person's joints become damaged, stop moving freely and become painful. Osteoarthritis is the most prevalent musculoskeletal disease and a leading cause of disability worldwide. There is no disease-modifying treatment for osteoarthritis: the disease is managed with pain relief and ultimately culminates in joint replacement surgery, which has variable outcomes. In the largest study of its kind, scientists from the Wellcome Sanger Institute and their collaborators investigated the genetics behind osteoarthritis, as well as the diseases and traits that are linked to it. To understand more about the genetic basis of osteoarthritis, the team studied 16.5 million DNA variations from the UK Biobank resource. Following combined analysis of up to 30,727 people with osteoarthritis and nearly 300,000 people without osteoarthritis—the controls—the scientists discovered nine new genes associated with osteoarthritis, a significant result for this disease. Professor Eleftheria Zeggini, senior author from the Wellcome Sanger Institute, said: "Osteoarthritis is challenging to study because the disease can vary among people, and also between the different joints affected, for example knee, hip, hand and spine. Using data from the UK Biobank resource, we have undertaken the largest genetic study of osteoarthritis to date and uncovered nine new genes associated with the disease." The researchers then investigated the role of the nine new genes in osteoarthritis by studying both normal cartilage and diseased cartilage from individuals who had undergone joint replacement. The team looked for genes that were active in the progression of the disease by extracting the relevant cells from healthy and diseased tissue, measuring the levels of proteins in the tissue and sequencing the RNA—the messenger that carries instructions from DNA for controlling the production of proteins. Of the nine genes associated with osteoarthritis, the researchers identified five genes in particular that differed significantly in their expression between healthy and diseased tissue. These five genes present novel targets for future research into therapies. Ms Eleni Zengini, joint first author from the University of Sheffield and Dromokaiteio Psychiatric Hospital in Athens, said: "These results are an important step towards understanding the genetic causes of osteoarthritis and take us closer to uncovering the mechanism behind the disease. Once we know that, it opens the door to developing new therapies for this debilitating disease." The team also explored genetic correlations between osteoarthritis and obesity, bone mineral density, type 2 diabetes, and raised blood lipid levels. The researchers applied a statistical technique known as causal inference analysis to uncover which traits and diseases cause osteoarthritis, and which do not. Within the limits of their study, the scientists found that type 2 diabetes and high levels of lipids in the blood do not have causal effects on osteoarthritis, but reaffirmed that obesity does.
Dr Konstantinos Hatzikotoulas, joint first author from the Wellcome Sanger Institute, said: "Using genetic data, we have shown that type 2 diabetes and increased blood lipid levels do not appear to be on the causal path to osteoarthritis. We also reconfirmed that obesity is on the causal path to osteoarthritis." Dr Natalie Carter, head of research liaison and evaluation at Arthritis Research UK, which did not fund the study, said: "The discovery of these genes is positive news for the 8.5 million people in the UK living with osteoarthritis. People living with this debilitating condition currently have limited treatment options. Meanwhile, they can struggle to do the day-to-day things most of us take for granted, like going to work or getting dressed independently. By revealing how these genes contribute to osteoarthritis, this research could open the door for new treatments to help millions of people live the pain-free life they deserve."
nature.com/articles/doi:10.1038/s41588-018-0079-y
Medicine
Big data provides clues for characterizing immunity in Japanese
Genetic and phenotypic landscape of the major histocompatibility complex region in the Japanese population, Nature Genetics (2019). DOI: 10.1038/s41588-018-0336-0 , www.nature.com/articles/s41588-018-0336-0 Journal information: Nature Genetics
http://dx.doi.org/10.1038/s41588-018-0336-0
https://medicalxpress.com/news/2019-01-big-clues-characterizing-immunity-japanese.html
Abstract To perform detailed fine-mapping of the major-histocompatibility-complex region, we conducted next-generation sequencing (NGS)-based typing of 33 human leukocyte antigen (HLA) genes in 1,120 individuals of Japanese ancestry, providing a high-resolution allele catalog and linkage-disequilibrium structure of both classical and nonclassical HLA genes. Together with population-specific deep-whole-genome-sequencing data (n = 1,276), we conducted NGS-based HLA, single-nucleotide-variant and indel imputation of large-scale genome-wide-association-study data from 166,190 Japanese individuals. A phenome-wide association study assessing 106 clinical phenotypes identified abundant, significant genotype–phenotype associations across 52 phenotypes. Fine-mapping highlighted multiple association patterns conferring risks independent of classical HLA genes. Region-wide heritability estimates and genetic-correlation network analysis elucidated the polygenic architecture shared across the phenotypes. Main Genetic variants of the major histocompatibility complex (MHC) region at 6p21.3 confer the largest number of associations that explain substantial phenotypic variation in a wide range of complex human diseases and quantitative traits 1. The MHC region is one of the most polymorphic sites in the human genome and is characterized by population-specific complex linkage-disequilibrium (LD) structure and long-range haplotypes 2,3,4,5. Among the >200 genes densely contained in the MHC region 6,7, human leukocyte antigen (HLA) genes are considered to explain most of the genetic risk of MHC. Fine-mapping efforts to identify causal variants within the MHC region have reported many HLA alleles and amino acid polymorphisms associated with complex human traits 8. In particular, the development of HLA imputation methods and the construction of population-specific reference panels have successfully accelerated the identification of causal variants that should be useful for personalized medicine 9,10,11,12. However, several advances have yet to be incorporated into genetic and phenotypic studies of MHC. The first is the use of NGS for fine-mapping MHC risk. Compared with traditional HLA typing methods, such as sequence-specific oligonucleotide hybridization (SSO) and sequencing-based typing, HLA typing by NGS can provide higher-resolution alleles for a wider spectrum of HLA and HLA-related genes, beyond a limited number of classical HLA genes 13,14,15,16. Population-specific whole-genome-sequencing (WGS) data contribute to imputing functional rare variants with high accuracy 17. Given that variants of nonclassical HLA genes, as well as those of classical HLA genes, are responsible for disease risk, and that functional variants of non-HLA genes within the MHC region affect clinical phenotypes 18,19, MHC risk analyses using an NGS-based reference panel are warranted to achieve more accurate fine-mapping of the causal variants. The second is the application of HLA imputation to large-scale genome-wide association study (GWAS) data that represent all the participants of population-level cohorts. Many nationwide biobanks have recently been launched to capture the genetic and phenotypic variation of these populations. To date, large-scale GWAS data from >100,000 samples have been publicly released from several biobanks (for example, >500,000 from UK Biobank 17,20 and >170,000 from the BioBank Japan Project (BBJ) 21,22).
Although HLA imputation of such big genotype data needs further tuning of the analytic pipeline, achieving this task should enhance knowledge of the genetic landscape of MHC in these populations. The third is a phenome-wide assessment of risk variants in the MHC region. Cross-phenotype analysis has identified shared genetic correlations among human traits, which are represented as pleiotropic associations of the variants and cross-phenotype networks that are linked to disease biology 23,24,25,26. Phenome-wide association studies (PheWASs) that use electronic medical records or medical information collected throughout a cohort have successfully identified clinically useful genotype–phenotype correlations 27,28. MHC is one of the most pleiotropic sites in the genome 1, and thus application of the PheWAS approach should elucidate the phenotypic landscape of the MHC variants as well 29. Here we report a comprehensive analysis that characterizes the genetic and phenotypic landscape of MHC in the Japanese population. We newly constructed an HLA imputation reference panel of Japanese individuals (n = 1,120) through high-resolution NGS typing of both classical and nonclassical HLA genes (n = 33). Together with accurate imputation of single-nucleotide variants (SNVs) and indels across a broad allele-frequency spectrum by using population-specific deep-WGS reference data (n = 1,276) 30, HLA imputation of the 166,190 Japanese individuals from the BBJ genotype data was conducted to apply a PheWAS of 106 complex human diseases and quantitative traits extracted from clinical records. Results NGS typing of HLA genes in the Japanese population For the 1,120 unrelated Japanese individuals, we conducted high-resolution typing of 33 HLA-related genes with up to six-digit-level allele information (study design in Supplementary Fig. 1). We adopted a target-capture technique and sequencing with relatively long read lengths (350 base pairs (bp) and 250 bp for paired-end reads; average depth, 260.1×) 31,32. By validating against the traditional SSO method for a subset of individuals (n = 182), we observed higher accuracy in classical HLA allele typing than in previous NGS-based reports (<0.56% potentially inaccurate typing). NGS-based HLA typing was able to update allele information that had been incorrectly assigned by traditional typing methods (for example, HLA-DRB1*14:01 by SSO was corrected to HLA-DRB1*14:54 by NGS 33; details in Supplementary Table 1). Among the 33 sequenced HLA genes, 9 are classical HLA genes (3 for class I and 6 for class II), and 24 are nonclassical HLA genes (Supplementary Table 2; HLA gene classification criteria in Methods). Whereas alleles of classical HLA genes were highly polymorphic (on average, there were 9.7, 20.1 and 21.6 alleles per gene at two-digit, four-digit and six-digit resolution, respectively), those of nonclassical HLA genes showed less variation (1.4, 3.1 and 4.0 alleles per gene, respectively; Fig. 1a and Supplementary Tables 2 and 3). Of these, HLA-B, HLA-DRB1 and MICA had the largest numbers of alleles among class I classical, class II classical and nonclassical HLA genes, respectively (n = 39, 33 and 15 at four-digit resolution). Because the registered sequences for TAP2, one of the nonclassical HLA genes, were inconsistently defined, it was difficult to consistently define the four-digit (and six-digit) alleles of TAP2 (details in Supplementary Table 4).
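The two-, four- and six-digit resolutions used above correspond to truncating HLA allele names at successive colon-separated fields of the standard nomenclature, so counting alleles per resolution is simple bookkeeping. A small sketch (the allele strings below are purely illustrative):

```python
from collections import defaultdict

def truncate(allele: str, fields: int) -> str:
    """Truncate an HLA allele name (e.g. 'HLA-B*40:02:01') to the given number
    of colon-separated fields (1 ~ two-digit, 2 ~ four-digit, 3 ~ six-digit)."""
    gene, _, suffix = allele.partition("*")
    return gene + "*" + ":".join(suffix.split(":")[:fields])

alleles = ["HLA-B*40:02:01", "HLA-B*40:02:02", "HLA-B*15:01:01",
           "HLA-DRB1*14:54:01", "HLA-DRB1*14:01:01"]   # illustrative only

counts = defaultdict(set)
for a in alleles:
    gene = a.split("*")[0]
    for fields, label in [(1, "2-digit"), (2, "4-digit"), (3, "6-digit")]:
        counts[(gene, label)].add(truncate(a, fields))

for key in sorted(counts):      # distinct alleles per gene and resolution
    print(key, len(counts[key]))
```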
Although resolving six-digit allele distributions is one of the advances finally enabled by the introduction of NGS, we found that the increments in HLA allele variation from four to six digits (+1.4 and +0.9 alleles for classical and nonclassical HLA genes, respectively) were limited compared with those from two to four digits (+10.4 and +1.7 alleles, respectively). Fig. 1: High-resolution allele-frequency spectra and linkage disequilibrium of HLA genes. a, Cumulative frequency (freq.) spectra of two-digit, four-digit and six-digit HLA alleles obtained by using NGS-based typing. Genes with the largest numbers of alleles are labeled separately for classical HLA genes (class I and class II) and nonclassical HLA genes. b, Pairwise evaluation of the LD measurement, ε, among the HLA genes. ε uses normalized entropy of the haplotype frequency, and a higher ε value represents stronger LD. LD blocks (ε > 0.15) are highlighted with white boundaries. High-dimensional compression elucidates HLA-variant patterns Systematic visualization of LD patterns among HLA genes contributes to the understanding of the population-specific LD structure of genetic variants within MHC 4. Thus, we introduced an entropy-based LD-measurement index (ε) to assess distributions of the four-digit HLA alleles and to quantify pairwise LD between the HLA genes. Within MHC, there are four major LD blocks of the HLA genes (ε > 0.15): HLA-G, HLA-H, HLA-K and HLA-A for block 1; HLA-C, HLA-B, MICA and MICB for block 2; HLA-DRA, HLA-DRB family genes, HLA-DQA1, HLA-DQB1 and HLA-DOB for block 3; and HLA-DPA1 and HLA-DPB1 for block 4 (Fig. 1b), demonstrating that classical and nonclassical HLA genes together constitute the LD patterns within MHC. One challenge in characterizing HLA polymorphism for personalized regenerative medicine or organ transplantation is an optimized classification of haplotypes based on HLA typing data 34. Classifying haplotypes according to simple combinations of multiple HLA alleles and genes is likely to subdivide samples into excessively fragmented clusters. Thus, we introduced a machine-learning-based clustering approach. We adopted t-distributed stochastic neighbor embedding (tSNE), a machine-learning method for high-dimensionality compression and visualization 35,36, to the HLA typing data. We then performed unsupervised clustering of the haplotypes by using the tSNE components (tSNE1 and tSNE2) and the DBSCAN algorithm 37. For classical HLA alleles, 3, 10 and 11 clusters were constructed for two-digit, four-digit and six-digit alleles, respectively (frequency >0.01; Fig. 2a). Although haplotypes of higher- and lower-digit alleles were clustered separately, clusters of the higher-digit alleles were subsets of those of the lower-digit alleles, corresponding to the original definition of HLA allele nomenclature (Fig. 2b) 5. The clusters of six-digit classical HLA alleles increased only modestly in number relative to those of four-digit alleles (+1 cluster), whereas cluster numbers increased substantially from two-digit to four-digit alleles (+7 clusters). Given that the highly polymorphic nature of the HLA alleles derives from balancing selection such as heterozygote advantage, four-digit alleles (that is, amino acid polymorphisms) of the HLA genes might be the main targets of selection pressure, rather than two-digit or six-digit alleles. Fig. 2: Machine-learning-based clustering of haplotypes by using HLA allele information.
a, Unsupervised clustering results by machine learning (ML) using NGS-based HLA-typing data as inputs. Haplotypes are plotted on the basis of the two tSNE components and clustered according to the DBSCAN algorithm. Clustering was conducted separately for each digit level of classical or nonclassical HLA genes. b, Connections between machine-learning-based clusters of haplotypes. Each rectangle corresponds to the clusters identified in a. Rectangle height reflects the number of haplotypes included in each cluster. However, haplotype clusters of nonclassical HLA alleles showed different patterns from those of classical HLA alleles (Fig. 2a), and parsimonious correspondences between the clusters of classical and nonclassical HLA alleles were difficult to define (Fig. 2b). This result suggests that the variation of nonclassical HLA genes has a genetic landscape independent of that of classical HLA genes, and that risk assessment of nonclassical HLA-gene variants should additionally contribute to fine-mapping efforts to identify causal functional variants in the MHC region. NGS-based HLA and SNV imputation of Japanese GWAS data Motivated by the newly identified genetic architecture of both classical and nonclassical HLA genes, we constructed a new HLA imputation reference panel of the Japanese population (n = 1,120). Whereas previous studies have focused primarily on the core MHC region for risk fine-mapping (around 29–33 Mb on chromosome 6, NCBI Build 37), we extended the target region to the MHC and its flanking regions (24–36 Mb), which we define herein as the 'entire MHC'. Together with genotyping of the SNPs in the entire MHC region, we incorporated the sequenced variants of the HLA genes and constructed the reference panel by using SNP2HLA 9. The imputation accuracy of the constructed HLA imputation reference panel was empirically evaluated by a cross-validation approach 12. Whereas previous studies have reported limited accuracy of NGS-based HLA typing 14,38, the newly constructed reference panel achieved high imputation accuracy (96.4% and 99.1% for four-digit classical and nonclassical HLA alleles, respectively; Supplementary Table 3). This concordance was even better than that of the previously constructed SSO-method-based reference panel of Japanese individuals (95.9% for four-digit classical HLA alleles; n = 908 independent samples) 4. Using the constructed reference panel, we densely imputed the HLA variants of the GWAS genotype data of the Japanese population constructed by BBJ (n = 166,190) 21,22. To apply HLA imputation to such large-scale GWAS data, we updated the protocol to incorporate multiple software tools for genotype phasing and imputation (SNP2HLA, Eagle and minimac3; details in Methods). Furthermore, to complement the incomplete variant coverage of SNP microarrays, we densely imputed SNVs and indels within the entire MHC region by using the deep-WGS data of the Japanese population as a reference (n = 1,276; average depth, 24.6×) 30. After application of strict postimputation variant filtering (minor allele frequency (MAF) ≥ 0.5% and imputation score Rsq ≥ 0.7), we obtained genotype dosages of 108 two-digit, 184 four-digit and 200 six-digit alleles and 2,273 amino acid polymorphisms of classical and nonclassical HLA genes, as well as 62,030 SNVs and 4,203 indels in the entire MHC region (68,998 variants in total).
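The postimputation filter above (MAF ≥ 0.5%, Rsq ≥ 0.7) amounts to a row filter over per-variant imputation metrics. A minimal sketch, assuming a hypothetical table with maf and rsq columns (the variant IDs below are illustrative, not taken from the study's output files):

```python
import pandas as pd

# Hypothetical per-variant imputation metrics (minimac3-style info fields).
variants = pd.DataFrame({
    "id":  ["HLA_DRB1_0405", "rs9273367", "AA_DQB1_57_Asp", "rs2233965"],
    "maf": [0.13, 0.004, 0.28, 0.09],
    "rsq": [0.98, 0.92, 0.65, 0.88],
})

kept = variants[(variants["maf"] >= 0.005) & (variants["rsq"] >= 0.7)]
print(kept["id"].tolist())   # variants passing the MAF/Rsq filter
```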
PheWAS identifies pleiotropy of MHC with human phenotypes Using the NGS-based HLA, SNV and indel imputation data of the BBJ GWAS, we conducted a PheWAS to comprehensively elucidate the genetic and phenotypic landscapes of the entire MHC. We incorporated data on 106 phenotypes collected from the medical records of nationwide hospitals belonging to BBJ (Supplementary Table 5). Of these, 46 were complex diseases classified into four categories (immune related, metabolic and cardiovascular, cancers and other diseases) 21,22, and 60 were quantitative traits classified into ten categories (anthropometric, metabolic, protein, kidney related, electrolyte, liver related, other biochemical, hematological, blood pressure and echocardiographic) 25,26. In the PheWAS, we evaluated associations of the entire MHC region with all 106 phenotypes. Approximately half of the phenotypes (n = 52; 16 diseases and 36 quantitative traits) showed association signals satisfying the genome-wide-significance threshold (P < 5.0 × 10−8; ref. 39; Table 1 and Supplementary Fig. 2), demonstrating the substantial pleiotropic roles of MHC in a wide range of human phenotypes. Furthermore, stepwise conditional analysis identified multiple independent association signals in as many as 20 phenotypes (Supplementary Table 6). On average, 2.0 independent signals per phenotype were observed, with the largest number (seven signals) observed for adult height and for alkaline phosphatase. This result suggests that the genetic risk in MHC may reflect polygenic combinations of multiple functional and biological origins. Applying a multivariate regression model fitting nonadditive effects of the HLA alleles, we found significant nonadditive effects of the HLA-DPB1*05:01 and HLA-DPB1*02:02 alleles on the risk of Graves' disease (P < 3.7 × 10−16; Supplementary Fig. 3). Despite the limited increase in allele variation from four-digit to six-digit resolution, several six-digit HLA alleles showed more significant associations than their ancestral four-digit alleles (for example, odds ratio = 1.32 and P = 4.0 × 10−28 at HLA-DRB4*01:03:02, versus odds ratio = 1.13 and P = 8.4 × 10−11 at HLA-DRB4*01:03, for asthma). Table 1 Significant association signals in the entire MHC region identified by PheWAS PheWAS-based classifications of MHC-association patterns Although our PheWAS approach identified abundant association signals, the association patterns could be classified according to the types of the responsible genes (Fig. 3). (i) Associations of classical HLA genes were most evident (28 of the 52 top association signals and 52 of the 97 independent association signals). We observed that a series of quantitative traits, including hematological and blood-pressure traits, were enriched in associations with class I classical HLA gene variants (for example, P = 6.7 × 10−24 at HLA-C Tyr116 for basophil count and P = 5.0 × 10−40 at HLA-B amino acid position 116 for eosinophil count). As for the class II classical HLA genes, associations with diseases such as immune-related diseases and cancers were more evident than those with quantitative traits (for example, P = 3.3 × 10−43 at HLA-DQβ1 amino acid position 57 for chronic hepatitis B and P = 1.1 × 10−16 at rs9273367, in LD with HLA-DQβ1 Ile185 (r2 = 0.81), for type 1 diabetes).
(ii) Nonclassical HLA gene variants also showed significant associations (for example, P = 4.0 × 10−28 at HLA-DRB4*01:03:02 for asthma and P = 6.7 × 10−10 at rs2844726, in LD with HLA-E amino acid position 107 (r2 = 0.76), for red-blood-cell count). (iii) Associations of non-HLA gene variants were observed within each class of the MHC region (for example, P = 9.5 × 10−14 at rs2233965 in C6orf15 for type 2 diabetes in the class I region, P = 2.2 × 10−20 at rs3830041 in NOTCH4 for aspartate aminotransferase in the class III region, and P = 2.6 × 10−13 at rs3864302 in C6orf10 for atopic dermatitis in the class II region). Such top association signals observed at non-HLA gene variants within MHC remained significant when conditioned on the most strongly associated nearby HLA gene variants, confirming their phenotypic effects independent of the HLA genes. (iv) Non-HLA genes in the extended MHC region showed associations (for example, P = 5.7 × 10−18 at rs1799945 in HFE for mean corpuscular hemoglobin and P = 4.4 × 10−29 at rs2762353 in SLC17A1 for uric acid). (v) Furthermore, non-HLA genes in the region flanking the MHC also showed associations (for example, P = 1.9 × 10−12 at rs73743323 in IP6K3 for phosphorus and P = 5.4 × 10−77 at rs139458943 in GPLD1 for alkaline phosphatase). In pattern 5, we observed the contribution of rare SNVs (MAF < 0.01) to several traits (for example, GPLD1 for alkaline phosphatase, and GRM4 and HMGA1 for adult height and estimated glomerular filtration rate). Our NGS-based HLA, SNV and indel imputation enabled us to detect such association signals independent of classical HLA genes. (vi) Population-specific long-range haplotypes characterize the LD structure of MHC 4. Here, we show that a long-range haplotype spanning the entire MHC region that is specific to the Japanese population 2,4 had pleiotropic effects on multiple phenotypes (P = 3.6 × 10−23 for estimated glomerular filtration rate and P = 7.3 × 10−17 for triglyceride). (vii) Analogously to the identification of multiple independent association signals for a single phenotype, several traits carried combinations of multiple association patterns (for example, associations with adult height at a class II classical HLA variant (P = 6.7 × 10−17 at HLA-DRβ1 74Ala, pattern 1), in the extended MHC region (P = 2.0 × 10−39 at rs9379833 in HIST1H2BE, pattern 4) and in the region flanking the MHC (P = 5.7 × 10−72 at rs4713762 in HMGA1, pattern 5)). Fig. 3: Genotype–phenotype association patterns identified by PheWAS with NGS-based HLA, SNV and indel imputation. Regional association plots of the entire MHC region in the PheWAS on the large-scale GWAS of the BBJ. NGS-based HLA, SNV and indel imputation enabled classification of the association patterns of genetic risk factors within MHC (from top to bottom): (1) classical HLA gene (class I and II), (2) nonclassical HLA gene, (3) non-HLA gene in the MHC region, (4) non-HLA gene in the extended MHC region, (5) non-HLA gene in the region flanking MHC, (6) long-range haplotype spanning the entire MHC region and (7) combinations of multiple association patterns. Two-tailed P values calculated with logistic or linear regression are indicated without adjustment (n = 166,190 independent Japanese individuals). Dotted horizontal lines indicate the genome-wide-significance threshold of P = 5.0 × 10−8. MCH, mean corpuscular hemoglobin; eGFR, estimated glomerular filtration rate.
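Each point in the PheWAS scans above is, in essence, a single regression of a phenotype on an imputed variant dosage with covariate adjustment. A minimal logistic-regression sketch for one binary phenotype and one variant, on simulated data (the study's own models and covariates, such as the principal components used, are described in its Methods):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
dosage = rng.binomial(2, 0.3, n).astype(float)   # imputed allele dosage in [0, 2]
pcs = rng.normal(size=(n, 2))                    # stand-ins for genotype PCs
logit = -2.0 + 0.25 * dosage + pcs @ np.array([0.1, -0.1])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([dosage, pcs]))
fit = sm.Logit(y, X).fit(disp=0)
beta, p = fit.params[1], fit.pvalues[1]          # dosage coefficient and its P value
print(f"OR={np.exp(beta):.2f}, P={p:.1e}, genome-wide significant: {p < 5e-8}")
```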
Cluster visualization of the observed association patterns helps illustrate the overall genetic and phenotypic landscape within the entire MHC region (Fig. 4). Significant MHC associations with 11 traits were newly identified by our study (that is, pollinosis, hyperlipidemia, myocardial infarction, stable angina, type 2 diabetes, liver cancer, liver cirrhosis, nephrotic syndrome, total protein, potassium and creatine kinase). In addition, we newly identified trait-associated signals at previously unreported HLA variants or other MHC variants in 37 phenotypes (Supplementary Table 7). Fig. 4: Matrix plot of gene and phenotype associations in the entire MHC region. Significantly associated gene and phenotype pairs identified by PheWAS are plotted in the matrix. In addition to the top association signals of the phenotypes, independent associations identified by conditional analysis are indicated. The bars at the right and bottom show the number of association signals per phenotype and gene, respectively. Two-tailed P values calculated with logistic or linear regression are indicated without adjustment (n = 166,190 independent Japanese individuals). MCV, mean corpuscular volume; MCH, mean corpuscular hemoglobin; MCHC, mean corpuscular hemoglobin concentration; sig. thres., significance threshold. Our NGS-based MHC fine-mapping efforts were able to refine responsible risk variants that had not been identified earlier (Supplementary Table 6). For example, previous studies on hepatitis B in Japanese individuals suggested that HLA-DRB1, HLA-DQB1 and HLA-DPB1 allele haplotypes can explain the risk embedded within the MHC class II region 40. However, our study shows that amino acid polymorphisms of HLA-DQβ1 (position 57), HLA-DPα1 (position 111) and HLA-DQα1 (position 160) independently explained the risk. Although a contribution of an HLA-C allele was originally suggested for monocyte count 41, our study additionally identifies risk at MICB (rs2395040), which would support the roles of monocytes in disease pathophysiology 42. Genetic correlation within MHC highlights phenotype networks Another approach to inferring genetic and phenotypic overlap is to estimate genetic correlation 24,25,26. In contrast to the PheWAS approach, which assesses point-by-point connections between single variants and phenotypes, genetic correlation can account for shared polygenic architecture across phenotypes. To that end, we estimated the region-wide polygenic heritability of phenotypes explained by variants within the entire MHC (Fig. 5a and Supplementary Table 5). As reported previously 4,18,40,43, immune-related diseases such as type 1 diabetes, rheumatoid arthritis, Graves' disease, chronic hepatitis B and asthma showed the highest region-wide heritability (9.8%, 9.5%, 4.6%, 3.5% and 1.1%, respectively). Although its single-variant associations were not significant, possibly because of the small number of cases (n = 547), uterine cervical cancer showed relatively high heritability among the phenotypes (1.6%). When the proportions of heritability explained by classical HLA gene variants and by other MHC variants not in LD with them (r2 < 0.1) were quantified, immune-related diseases showed the largest proportions of heritability derived from classical HLA gene variants (on average 0.69), whereas metabolic and cardiovascular diseases showed the smallest proportions (on average 0.32).
Fig. 5: Region-wide heritability and genetic-correlation networks across phenotypes. a, Heritability estimates of phenotypes, based on variants within the entire MHC region. Phenotypes with >0.1% explained heritability are indicated at left. Right, heritability proportions and empirically estimated standard errors (s.e.) for classical HLA gene variants versus other MHC variants (n = 166,190 independent Japanese individuals). b, Region-wide genetic-correlation networks across phenotypes, depicted from classical HLA gene variants (top) and other MHC variants (bottom). Genetically correlated phenotypes are clustered close together as circles, and each edge represents a significant genetic correlation. Positive and negative genetic correlations are indicated by color according to the legend, and thicker edges correspond to more significant correlations. Abbreviations are provided in Supplementary Table 5. Finally, we estimated genetic correlations of the entire MHC region across the phenotypes and visualized cross-phenotype networks reflecting shared polygenic architecture and embedded biological information. As suggested by the single-phenotype heritability analysis, the genetic-correlation network of classical HLA gene variants and that of other MHC variants showed different patterns of connections (Fig. 5b). In the former, several tight connections among phenotypes belonging to the same categories (for example, immune related, metabolic and cardiovascular, hematological and protein) together configured the entire network. In the latter, the network was divided into subnetworks constituted separately by diseases and by quantitative traits. As an example for a specific trait, rheumatoid arthritis showed positive correlations with asthma, type 1 and type 2 diabetes mellitus, and total bilirubin but a negative correlation with body mass index for classical HLA gene variants, whereas it showed negative correlations with hyperlipidemia, stable angina, myocardial infarction, lactate dehydrogenase and eosinophil count for other MHC variants. These results indicate that the polygenic architecture of the entire MHC region confers pleiotropic diversity according to the phenotypes, phenotype categories and functional categories of the responsible genes. Discussion Through NGS-based typing of high-resolution HLA gene polymorphisms and implementation of the imputation reference panel in the Japanese population, our PheWAS approach using large-scale GWAS data successfully fine-mapped the genetic risk embedded in the entire MHC region and delineated the cross-phenotype genetic-correlation network. Our study highlights several new findings. First, we constructed a catalog of NGS-based high-resolution frequency spectra of both classical and nonclassical HLA alleles. Our resources should contribute to the understanding of the biological and clinical roles of nonclassical HLA genes, a challenging area of MHC yet to be investigated 13. Our next steps will include (i) direct construction of a highly accurate HLA imputation reference panel from WGS data without target sequencing of HLA and (ii) application of long-read sequencing technology to copy-number variants and other complex genomic regions, such as the killer-cell immunoglobulin-like receptor (KIR) region 44. The strategy used in this study of imputing HLA and SNVs separately with different reference panels could disrupt LD among the variants, and imputation of all variants of interest by using a single panel is warranted.
Second, application of a high-dimensional compression technique to the HLA data, such as tSNE originally applied to epigenetic data 45,46, effectively configured unbiased clustering of the haplotypes. The result is notable because machine-learning-based unsupervised clustering successfully recaptured the original definition of HLA-allele nomenclature and identified the independent genetic landscapes of classical and nonclassical HLA genes without prior biological or genetic knowledge. This finding indicates that trans-omics sharing of analytical methods between the genomics and epigenomics fields may yield innovative findings 47. Third, NGS-based HLA, SNV and indel imputation followed by the PheWAS approach successfully demonstrated a wide range of genotype–phenotype correlations in complex human traits. Approximately half of the phenotypes examined in our PheWAS showed significant associations; this proportion was larger than we expected on the basis of similar previous approaches 27,29. Our study indicates the value of applying PheWAS to large-scale genotype data at sites with pleiotropic features. Further accumulation of genotype and clinical data is warranted to achieve larger study scales. Fourth, dense fine-mapping efforts highlighted several patterns of association signals within the entire MHC region. In particular, we confirmed phenotype risk independent of classical HLA genes, namely at nonclassical HLA genes and at non-HLA genes within the core MHC, extended MHC and flanking regions. Finally, MHC-region-wide heritability and genetic-correlation estimates depicted cross-phenotype networks in a manner complementing single-variant, multiple-phenotype associations such as PheWAS. As an intermediate approach between single-variant analysis and genome-wide polygenic assessment, region-wide or locus-based approaches may be promising as well 48. In conclusion, our study comprehensively elucidated the genetic and phenotypic landscapes of MHC in the Japanese population. URLs The BioBank Japan Project (BBJ), ; Japan Biological Informatics Consortium (JBIC), ; Omixon Target software, ; BWA, ; GATK, ; IPD-IMGT/HLA database, ; OptiType, ; POLYSOLVER, ; HLA-HD, ; Kourami, ; eLD, ; Rtsne R package, ; DBSCAN R package, ; Alluvial R package, ; SNP2HLA, ; Eagle, ; Minimac3, ; R statistical software, ; GCTA, ; Igraph R package, . Methods Cohort To construct the NGS-based HLA typing data, we enrolled 1,120 unrelated individuals of Japanese ancestry. Genomic DNA was obtained from Epstein–Barr virus–transformed B-lymphoblast cell lines of unrelated Japanese individuals established by the Japan Biological Informatics Consortium (JBIC) 12. In the PheWAS, 166,190 individuals were enrolled from BBJ; participants were affected with any of the 45 target diseases defined by the project (Supplementary Table 5) 21,22. For the WGS-based SNV imputation reference panel, 1,276 independent individuals of BBJ were enrolled (patients with myocardial infarction, drug eruption, colorectal cancer, breast cancer, prostate cancer or gastric cancer) 30. Individuals determined to be of non-Japanese origin either by self-reporting or by principal component analysis were excluded, as described 12,25,26,30. All BBJ participants provided written informed consent, as approved by the ethical committees of RIKEN Yokohama Institute and the Institute of Medical Science, the University of Tokyo. This study was approved by the ethical committee of Osaka University Graduate School of Medicine.
NGS-based HLA typing of Japanese individuals We conducted high-resolution allele typing (two-digit, four-digit and six-digit alleles) of 33 HLA and HLA-related genes, of which 9 were classical HLA genes ( HLA-A , HLA-B and HLA-C for class I; HLA-DRA , HLA-DRB1 , HLA-DQA1 , HLA-DQB1 , HLA-DPA1 and HLA-DPB1 for class II) and 24 were nonclassical HLA genes ( HLA-E , HLA-F , HLA-G , HLA-H , HLA-J , HLA-K , HLA-L , HLA-V , HLA-DRB2 , HLA-DRB3 , HLA-DRB4 , HLA-DRB5 , HLA-DRB6 , HLA-DRB7 , HLA-DRB8 , HLA-DRB9 , HLA-DOA , HLA-DOB , HLA-DMA , HLA-DMB , MICA , MICB , TAP1 and TAP2 ; Supplementary Table 2 ). Although current definitions of the HLA gene classifications are ambiguous (for example, classical HLA gene, nonclassical HLA gene, HLA-like gene or pseudo-HLA gene) 6 , 7 , in this study, we defined the major classical HLA genes as classical HLA genes and other genes as nonclassical HLA genes. We also defined alleles of classical HLA genes as classical HLA alleles and those of nonclassical HLA genes as nonclassical HLA alleles for simplicity. Entire HLA gene sequencing with the sequence-capture method was used for high-resolution HLA typing 13 . The sequence-capture method was based on hybridization between DNA of an adapter-ligated library (KAPA Hyper Prep Kit, Roche) and a biotinylated DNA probe (SeqCap EZ choice kit, Roche) custom designed on the basis of target sequences of 33 HLA genes (length of total target regions = 236,885 bp; Supplementary Table 8 ). Paired-end sequence reads (read 1, 350 bp; read 2, 250 bp) were obtained by using a MiSeq sequencer (Illumina). Typing of two-digit, four-digit and six-digit HLA alleles was conducted in Omixon Target software version 1.9.3 (Omixon) with IPD-IMGT/HLA Database release 3.21.0. Phase-defined HLA gene analysis was also used to resolve the phase ambiguity 31 , 32 . In parallel, to complement the HLA allele information that was specific to the Japanese population and not correctly implemented in Omixon Target software, we obtained SNV genotypes in PCR-amplified regions according to the variant-calling pipeline 31 , and partially updated the HLA typing results on the basis of those obtained according to the sequencing-based typing method. The sequence reads were aligned to the reference human genome with the contig sequences of the MHC region (GRCh37 (human_g1k_v37.fasta), hap2_cox contig and hap5_mcf contig) using BWA (version 0.7.15). Variant calling was conducted with GATK HaplotypeCaller and UnifiedGenotyper (version 3.6). HLA allele sequences were obtained from the IPD-IMGT/HLA database 5 . We empirically confirmed the accuracy of HLA typing by evaluating concordance rates of the four-digit HLA alleles with those additionally genotyped with the SSO method (a WAKFlow HLA typing kit (Wakunaga) together with the Luminex Multi-Analyte Profiling system (xMAP, Luminex); n = 182 for HLA-A , HLA-B , HLA-C and HLA-DRB1 , and n = 144 for HLA-DQA1 , HLA-DQB1 and HLA-DPB1 ). We observed a high concordance rate of 98.2% between typed alleles of NGS and SSO (2,278 of 2,320 alleles in total). We confirmed that most mismatched alleles (29 of 42) derived from wrong typing of SSO but not NGS as previously reported (for example, HLA-DRB1*14:01 by SSO was corrected to HLA-DRB1*14:54 by NGS 33 ; details in Supplementary Table 1 ). This provides confidence in the accuracy of our NGS-based HLA typing protocol (≤0.56% of potentially inaccurate typing). 
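As a minimal sketch of the concordance check described above, the R snippet below counts matching four-digit allele calls between the two typing methods; the allele vectors are hypothetical toy data, not the study's genotypes.

```r
# Hypothetical four-digit allele calls for the same chromosomes from the
# two typing methods; in the study there were 2,320 alleles in total.
ngs <- c("A*24:02", "A*02:01", "DRB1*14:54", "B*52:01", "C*01:02")
sso <- c("A*24:02", "A*02:01", "DRB1*14:01", "B*52:01", "C*01:02")

concordance <- mean(ngs == sso)   # proportion of concordant allele calls
mismatches  <- which(ngs != sso)  # alleles to verify manually, as in the text
cat(sprintf("concordance = %.1f%% (%d of %d alleles)\n",
            100 * concordance, sum(ngs == sso), length(ngs)))
```

The DRB1*14:01/DRB1*14:54 pair mirrors the kind of SSO mistyping noted above.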
Although we further attempted to verify these ambiguous mismatched alleles by using tools to estimate HLA alleles from WGS or whole-exome-sequencing data (OptiType (version 1.3.1) 49 , Polysolver (version 4) 50 , HLA-HD (version 1.2.0.1) 51 and Kourami (version 0.9.6) 52 ), it was difficult to determine the correct alleles, owing to inconsistent outputs of the tools. In addition, we assessed concordance rates of the SNV genotypes between microarray-based SNP genotyping data (described below) and those obtained by target sequencing used for NGS-based HLA typing. Among the 203 SNVs genotyped by both SNP microarray and NGS, the genotype concordance was as high as 0.997. Of these, 29 and 45 SNVs were included in the coding regions of classical and nonclassical HLA genes, with concordance rates of 0.994 and 0.998, respectively. Assessment of LD structure on the basis of normalized entropy index To evaluate LD structure among HLA genes, we introduced an LD-measurement index called ε, which uses differences in the normalized entropy of the haplotype-frequency distributions between LD and the null hypothesis of linkage equilibrium 53 , by using eLD software (version 1.0) 54 . ε was originally developed to assess LD among multiple biallelic markers, and we previously showed that ε is also applicable to assess LD between two multiallelic markers such as the HLA alleles 4 . For each pair of HLA genes, we calculated ε to quantify LD between the HLA genes, by using the observed frequency of the four-digit HLA alleles. Because the estimation of ε can be biased when the haplotype frequency distribution is sparse, we combined the HLA alleles with frequency <0.01 into a single dummy allele. The value of ε ranges between 0 and 1, and a higher ε value represents stronger LD. Machine-learning-based clustering by using HLA allele information We performed unsupervised clustering of haplotypes with NGS-based HLA typing data by using tSNE, a machine-learning method for high-dimensionality compression and visualization 35 , 36 . tSNE is usually used to classify cells by using single-cell transcriptome or immunoprofiling data (cytometry by time of flight) 45 , 46 , and in this study we applied tSNE to classify haplotypes to obtain unbiased classification patterns based on HLA allele information 47 . We conducted tSNE for phased haplotype data of HLA alleles separately for classical or nonclassical HLA genes and for each digit by using the Rtsne R package (version 0.13). On the basis of the two components obtained from the tSNE results (tSNE 1 and tSNE 2 ), we conducted unsupervised clustering by adopting the DBSCAN R package (version 1.1.1) 37 . We first determined the following parameters to optimize the average silhouette width score by using the four-digit classical HLA alleles: a perplexity value = 25, a minimum number of reachable points = 3, and a reachable epsilon neighborhood parameter = 8.62. We fixed the perplexity value and the minimum number of reachable points, and then determined the reachable epsilon neighborhood parameters for two-digit classical, six-digit classical and four-digit nonclassical HLA alleles separately to optimize the average silhouette width score (10.0, 8.96 and 8.94, respectively). Parsimonious connections of the clusters were constructed with the alluvial R package (version 0.1–2). 
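The clustering pipeline above can be sketched in a few lines of R. The one-hot haplotype encoding and the simulated matrix below are assumptions for illustration; the perplexity, minimum-reachable-points and epsilon values are those reported for the four-digit classical HLA alleles.

```r
library(Rtsne)    # tSNE, as used in the study (Rtsne R package)
library(dbscan)   # DBSCAN clustering on the tSNE components

# 'hap' stands in for the phased haplotype data; here it is assumed to be
# a haplotype-by-allele 0/1 matrix (2 x 1,120 = 2,240 haplotypes). The
# exact input encoding is an assumption, not stated in the text.
set.seed(1)
hap <- matrix(rbinom(2240 * 60, 1, 0.1), nrow = 2240)

ts <- Rtsne(hap, dims = 2, perplexity = 25, check_duplicates = FALSE)

# Unsupervised clustering on (tSNE1, tSNE2) with the reported parameters:
# minimum number of reachable points = 3, epsilon neighborhood = 8.62.
cl <- dbscan(ts$Y, eps = 8.62, minPts = 3)
table(cl$cluster)   # cluster 0 denotes unassigned (noise) haplotypes
```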
Construction of population-specific NGS-based HLA imputation reference panel
For individuals with NGS-based HLA typing data, we obtained high-density SNP data of the MHC region by genotyping with the Illumina HumanCoreExome BeadChip (v1.1; Illumina). We applied stringent quality control (QC) filters as previously described 12 , 55 . Briefly, we applied QC filters to the individuals (call rates >0.99, exclusion of outliers by principal component analysis, exclusion of closely related individuals) and then applied QC filters to the SNPs (call rates ≥0.99, MAF ≥0.01, Hardy–Weinberg-equilibrium P value ≥ 1.0 × 10 −7 ). We extracted the genotyped SNPs in the entire MHC region (24–36 Mb on chromosome 6, NCBI Build 37). In addition to the HLA alleles typed by NGS (two-digit, four-digit and six-digit), we incorporated HLA gene amino acid polymorphisms corresponding to the four-digit HLA alleles according to the IPD-IMGT/HLA database 5 . We encoded both HLA alleles and HLA amino acid polymorphisms, and constructed the NGS-based HLA imputation reference panel of the Japanese population together with SNP genotype data with SNP2HLA software (version 1.0.3; n = 1,120 for the 33 HLA genes) 9 . The imputation accuracy of the constructed HLA imputation reference panel was empirically evaluated by a cross-validation approach 12 . We randomly split the panel into two data sets ( n = 560 for each data set). HLA alleles from one of the data sets were masked and then imputed by using the other data set as an imputation reference. The concordance between imputed and genotyped HLA allele dosages was calculated separately for each HLA gene and each allele digit. To compare imputation accuracy across different reference panels, we evaluated, in the same way, the accuracy of the previously reported HLA imputation reference panel of independent Japanese individuals ( n = 908) 4 , 12 .
HLA and SNV imputation of GWAS data of BBJ individuals
Using the constructed NGS-based HLA imputation reference panel, we imputed the HLA variants of the large-scale GWAS data of the BBJ individuals ( n = 166,190). Detailed characteristics of the GWAS data and the QC process are described elsewhere 21 , 22 . Although we usually use SNP2HLA software for HLA imputation because of its high imputation accuracy and its ability to impute HLA amino acid polymorphisms 9 , 56 , SNP2HLA is currently not applicable to such large-scale GWAS data, owing to its very large memory requirements. Therefore, we initially used SNP2HLA to align SNP-strand and position information between the GWAS data and the reference panel, and then imputed the HLA variants with standard genome-wide imputation software. Specifically, we phased the GWAS data with Eagle (version 2.3) and imputed the variants with minimac3 (version 2.0.1). In addition, we densely imputed SNVs and indels within the entire MHC region by using the deep-WGS data of the Japanese population as a reference ( n = 1,276, average depth = 24.6×, sequenced on the Illumina HiSeq2500 platform (Illumina)) 30 . For the PheWAS, we applied stringent post-imputation QC filtering of the variants (MAF ≥0.5% and imputation score Rsq ≥0.7).
PheWAS of HLA variants by using imputed BBJ GWAS data
PheWAS was conducted by using clinical information of the individuals included in the imputed BBJ GWAS data. Associations of the imputed variants in the MHC region with 106 phenotype datasets (46 diseases and 60 quantitative traits; Supplementary Table 5 ) were examined.
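The cross-validation accuracy metric above reduces to comparing imputed dosages against the masked NGS-typed dosages. A minimal R sketch follows, with hypothetical stand-in vectors for a single allele.

```r
# Hypothetical data for one HLA allele in the held-out half of the panel:
# 'typed' is the masked NGS-typed dosage (0/1/2 copies), 'imputed' the
# dosage returned when imputing from the other half.
typed   <- c(0, 1, 2, 0, 1, 1, 0, 2, 1, 0)
imputed <- c(0.02, 0.97, 1.90, 0.10, 1.05, 0.88, 0.01, 1.98, 1.12, 0.05)

concord <- mean(round(imputed) == typed)  # best-guess genotype concordance
r2      <- cor(typed, imputed)^2          # dosage correlation (R^2)
cat(sprintf("concordance = %.2f, dosage R^2 = %.3f\n", concord, r2))
# In the study these summaries were computed separately per HLA gene and
# per allele digit, and compared against the SNP-based reference panel.
```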
The diseases comprised four major categories (immune related ( n = 10), metabolic and cardiovascular ( n = 10), cancers ( n = 13) and other diseases ( n = 13)). The quantitative traits comprised ten major categories (anthropometric ( n = 2), metabolic ( n = 6), protein ( n = 4), kidney related ( n = 4), electrolyte ( n = 5), liver related ( n = 6), other biochemical ( n = 6), hematological ( n = 13), blood pressure ( n = 4) and echocardiographic ( n = 10)). Definitions of the diseases and the process of patient registration have been described elsewhere 21 , 22 . For the controls in disease association studies, we constructed a shared control group by excluding individuals affected by diseases known to have associations in the MHC region. Detailed processes of outlier exclusion, adjustment for clinical status and normalization of the quantitative traits have been described elsewhere 25 , 26 . We evaluated associations of the HLA variants with the risk of the diseases, by using a logistic regression model, and with dosage effects on the normalized values of the quantitative traits, by using a linear regression model 18 , with the glm() function implemented in R statistical software (version 3.2.3). We defined the HLA variants as biallelic SNVs in the entire MHC region (24–36 Mb on chromosome 6, NCBI Build 37), two-digit, four-digit and six-digit biallelic alleles of the HLA genes, biallelic HLA amino acid polymorphisms corresponding to the respective residues, and multiallelic HLA amino acid polymorphisms for each amino acid position. We assumed additive effects of the allele dosages on phenotypes in the regression models. We included the top ten principal components obtained from the GWAS genotype data (not including the MHC region) as covariates in the regression models to correct for potential population stratification. An omnibus P value for each HLA amino acid position was obtained by a log-likelihood-ratio test comparing the null model and the fitted model, with the test statistic referred to a χ2 distribution with m − 1 degrees of freedom for an amino acid position with m residues. To evaluate nonadditive effects of the HLA alleles, we conducted a multivariate regression analysis that additionally included nonadditive genotype dosages of the HLA alleles, as previously described 18 , 57 . We adopted a genome-wide-significance threshold of P < 5.0 × 10 −8 in our study 39 . Assignment of candidate responsible genes to the top-associated variants of the phenotypes in the nominal and conditional analyses was conducted in the following manner: (i) when the variant was in moderate LD with any of the HLA alleles or amino acid polymorphisms ( r 2 ≥ 0.5), or located in the coding region of the HLA gene, the HLA gene was assigned; (ii) when the variant was in LD with the coding variants of a non-HLA gene, the non-HLA gene was assigned; and (iii) when the variant was located in an intergenic region, the nearest gene was assigned. Considering the strong functional effects of HLA gene polymorphisms on human phenotypes, our assignment protocol puts relatively higher weight on HLA genes than on non-HLA genes. We note that r 2 values (that is, correlations of haplotypes) between the imputed dosages were approximately estimated by calculating Pearson's correlation of genotype dosages ( R 2 ).
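A minimal, self-contained sketch of one such association test in R follows. The simulated phenotype, dosages and principal components are placeholders; the model structure (additive dosage coding, top ten PCs as covariates, and an omnibus likelihood-ratio test with m − 1 degrees of freedom) follows the description above.

```r
set.seed(1)
n <- 5000
pcs <- matrix(rnorm(n * 10), n, 10)               # top ten PCs (simulated)
residues <- matrix(rbinom(n * 2, 2, 0.3), n, 2)   # m = 3 residues -> m - 1 dosage columns
dosage <- residues[, 1]                           # one biallelic HLA variant
pheno  <- rbinom(n, 1, plogis(-2 + 0.3 * dosage)) # simulated disease status

# Single-variant test: additive allele-dosage effect on disease risk,
# adjusted for the principal components (logistic model; for quantitative
# traits the same call is made with family = gaussian).
fit <- glm(pheno ~ dosage + pcs, family = binomial)
summary(fit)$coefficients["dosage", ]

# Omnibus test for an amino acid position with m residues: likelihood-ratio
# test between the covariate-only null model and the model carrying the
# m - 1 non-reference residue dosages, with m - 1 degrees of freedom.
null <- glm(pheno ~ pcs, family = binomial)
full <- glm(pheno ~ residues + pcs, family = binomial)
lrt  <- as.numeric(2 * (logLik(full) - logLik(null)))
p_omnibus <- pchisq(lrt, df = ncol(residues), lower.tail = FALSE)
```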
Conditional-association analysis of HLA variants To evaluate independent risk among variants (and genes), we conducted a forward-type stepwise conditional regression analysis for phenotypes that satisfied the genome-wide-significance threshold. In each conditional step, we additionally included the associated variants as covariates in the regression model and repeated the analysis until no variants satisfied the significance threshold. When the top-associated variant itself was the HLA gene polymorphism or the SNV and indel in strong LD with any of the HLA gene polymorphisms ( r 2 ≥ 0.7), we additionally included all the two-digit, four-digit and six-digit alleles and the amino acid polymorphisms of the corresponding HLA gene as covariates in the regression to robustly condition the associations attributable to the HLA gene, as previously described 4 , 18 . Otherwise, the top-associated SNV and indels were additionally included as the associated variants. Heritability estimates of the variants within the MHC region We estimated the heritability of the phenotypes in the PheWAS that was explained by the variants within the entire MHC region, as well as calculating pairwise genetic correlations among the phenotypes. We adopted a Haseman–Elston regression implemented in GCTA software (version 1.91.1beta) 58 , because a genomic restricted maximum-likelihood method, a typical method for estimating SNP-based heritability 59 , was difficult to apply to the large sample size of our study. The estimated heritability of the diseases was adjusted according to disease prevalence in the Japanese population (Supplementary Table 5 ) 59 . In addition to the heritability estimation using all the MHC variants, we repeated the analysis separately for classical HLA variants (using the polymorphisms of classical HLA genes) and other variants (using the MHC variants not in LD with any of the classical HLA variants ( r 2 < 0.1)), and quantified their relative proportions. Standard errors (s.e.) of the proportions were estimated by simulating the distribution of the proportion values according to random sampling from the mean and s.e.m. of the heritability estimates (×100,000 iterations). Although there have been discussions on how to precisely estimate heritability within a genetic locus with strong LD 60 , because our main focus was on relative comparison of heritability across traits rather than quantification of absolute heritability values, we adopted GCTA as a standard method, as previously applied 61 . Using the matrix of pairwise genetic correlations among the phenotypes, we constructed a network of phenotypes representing shared genetic backgrounds of MHC across the phenotypes. We assigned each phenotype to a node, and the nodes were connected by edges weighted according to the magnitude of the corresponding genetic correlation. To effectively extract biological information embedded in the network and to avoid dense visualization, we used only highly significant genetic correlations (top 10% of the significance in the phenotype pairs and P < 0.05 after adjustment of Bonferroni’s correction). Network visualization was conducted according to the Fruchterman–Reingold algorithm, with the igraph R package (version 1.1.2). Statistical analyses Two-tailed logistic and linear regression was applied by using a glm() function implemented in R statistical software (version 3.2.3). Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. 
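The network-construction step can be sketched as follows. The correlation and P-value matrices below are simulated stand-ins for the GCTA Haseman–Elston output; the filtering rule (top 10% of significance plus Bonferroni-adjusted P < 0.05) and the Fruchterman–Reingold layout follow the text.

```r
library(igraph)

set.seed(1)
k  <- 12                                   # number of phenotypes (toy value)
rg <- matrix(runif(k * k, -0.5, 0.9), k, k)
rg <- (rg + t(rg)) / 2; diag(rg) <- 1      # symmetric genetic-correlation matrix
p  <- matrix(runif(k * k)^6, k, k)         # P values of the correlations
p  <- pmin(p, t(p)); diag(p) <- 1
rownames(rg) <- colnames(rg) <- paste0("trait", 1:k)

# Keep only the most significant correlations, as described above.
n_pairs <- k * (k - 1) / 2
keep <- (p < quantile(p[upper.tri(p)], 0.10)) & (p < 0.05 / n_pairs)

g <- graph_from_adjacency_matrix(rg * keep, mode = "upper",
                                 weighted = TRUE, diag = FALSE)
plot(g, layout = layout_with_fr(g),        # Fruchterman-Reingold layout
     edge.width = 3 * abs(E(g)$weight))    # edge width ~ |genetic correlation|
```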
Code availability Software and codes used for this study are available from URLs or upon request to the authors. Data availability HLA data have been deposited at the National Bioscience Database Center (NBDC) Human Database (research ID: hum0114) as open data without any access restrictions. GWAS data and phenotype data of the BBJ individuals are available at the NBDC Human Database (research ID: hum0014).
Although genes are distributed widely across chromosomes, many genes related to the immune system are clustered together on human chromosome 6 in a segment called the major histocompatibility complex (MHC) region. The density of genes there makes it difficult for researchers to characterize them and their effects, but new technologies and large biobanks with data on huge numbers of people have opened the door to deeper insights into this region. In a major new study published in the journal Nature Genetics, researchers at Osaka University and their colleagues have surveyed the MHC region specifically in the Japanese population, revealing the existence of different gene variants and their connections with diseases and other traits.

The team based their analyses on three sets of data. One was sequencing information on 33 genes determining white blood cell types in over 1,000 Japanese individuals, obtained by high-throughput sequencing. The second was data from genome-wide association studies looking at links between regions across the whole of the genome and traits and diseases in over 170,000 Japanese individuals. The third set comprised data taken from medical records on over 100 phenotypes reflecting clinical states and other traits.

"Our multiple analyses first revealed the levels of polymorphism in the human leukocyte antigen (HLA) genes, then classified the overall patterns of this polymorphism into 11 distinct groups across the Japanese population using a machine learning approach," says lead author Jun Hirata. "This provided insight into the genetic landscape of the MHC region and showed us that 'non-classical' HLA genes should also be included in efforts to characterize the functional effects of this genomic region."

Figure: The list of the phenotypes associated with the genetic variants in the MHC region. Credit: Osaka University

After surveying the full complement of variation across the MHC region in the Japanese individuals, the team then focused on clarifying the associations of these variants with different traits and diseases. For this, they used data from medical records on 106 different phenotypes, including 46 complex diseases, from over 170,000 Japanese individuals. About half of these phenotypes showed significant associations with the studied genes. The findings revealed that it is common for a single gene in the MHC region to influence multiple traits, a phenomenon known as "pleiotropy."

"Our work shows the importance of differences in white blood cell type for health in Japanese people," senior author Yukinori Okada says. "The cross-phenotype networks that we constructed also showed correlations between health conditions that were not previously known to be related."

The key findings of this work, including the clinical importance of non-classical HLA genes and the effects of gene variants within MHC haplotypes, should provide a solid foundation for future studies on risk factors associated with this part of the genome.
10.1038/s41588-018-0336-0
Nano
Researchers able to watch phase transition in 2D semiconductors using STEM
Atomic mechanism of the semiconducting-to-metallic phase transition in single-layered MoS2, Nature Nanotechnology (2014) DOI: 10.1038/nnano.2014.64 . On Arxiv: arxiv.org/ftp/arxiv/papers/1310/1310.2363.pdf Journal information: Nature Nanotechnology
http://dx.doi.org/10.1038/nnano.2014.64
https://phys.org/news/2014-04-phase-transition-2d-semiconductors-stem.html
Abstract
Phase transitions can be used to alter the properties of a material without adding any additional atoms and are therefore of significant technological value. In a solid, phase transitions involve collective atomic displacements, but such atomic processes have so far only been investigated using macroscopic approaches. Here, we show that in situ scanning transmission electron microscopy can be used to follow the structural transformation between semiconducting (2H) and metallic (1T) phases in single-layered MoS2, with atomic resolution. The 2H/1T phase transition involves gliding atomic planes of sulphur and/or molybdenum and requires an intermediate phase (α-phase) as a precursor. The migration of two kinds of boundaries (β- and γ-boundaries) is also found to be responsible for the growth of the second phase. Furthermore, we show that areas of the 1T phase can be controllably grown in a layer of the 2H phase using an electron beam.
Main
For several centuries, molybdenum disulphide (MoS2) has been widely used as a practical solid lubricant 1 , 2 . MoS2 crystal is composed of stacks of atomic layers bound by van der Waals forces, with each layer constructed from S–Mo–S′ triple atomic planes with strong in-plane bonding. Recently, single-layered MoS2, a direct-bandgap quasi-two-dimensional semiconductor, has shown its great potential for applications in electrical and optoelectronic devices 3 , 4 , 5 . Interestingly, one of the unique features of MoS2 is polymorphism, with its distinct electronic characteristics. Depending on the arrangement of its S atoms, single-layered MoS2 appears in two distinct symmetries: the 2H (trigonal prismatic D3h) and 1T (octahedral Oh) phases (Fig. 1a,b). The two phases should exhibit completely different electronic structures, with the 2H phase being semiconducting and the 1T phase metallic 6 , 7 , 8 . The two phases can easily convert one to the other via intralayer atomic plane gliding, which involves a transversal displacement of one of the S planes. The 1T phase was first reported to transform from 2H-MoS2 by Li and K intercalation 6 , 9 , with restacked 1T phases in LiMoS2 and KMoS2 confirmed by electron diffraction 10 , 11 , and it is also known to be stabilized by substitutional doping of Re, Tc and Mn atoms, which serve as electron donors 12 . However, 1T-LiMoS2 is thermodynamically unfavourable, and has been observed (by Raman spectroscopy 13 ) to gradually transform to the 2H phase at room temperature. The phase transitions between 2H and 1T or 2H′ phases due to atomic plane gliding are presented in Fig. 1c,d. Note that 2H′ is a 60° (or 180°) rotational phase of 2H.
Figure 1: Polymorphs of single-layered MoS2. a, b, Schematic models of single-layered MoS2 with 2H (a) and 1T (b) phases in basal plane and cross-section views. Mo, blue; top S, orange; bottom S′, purple. The incident electron beam transmits from top to bottom. The 2H phase shows a hexagonal lattice with threefold symmetry and the atomic stacking sequence (S–Mo–S′) ABA. The 1T phase shows the atomic stacking sequence (S–Mo–S′) ABC, with the bottom S′ plane occupying the hollow centre (HC) of a 2H hexagonal lattice. c, The S plane glides over a distance equivalent to a/√3 ≈ 1.82 Å (a = 3.16 Å) and occupies the HC site of the 2H hexagon, which results in a 2H → 1T phase transition. d, Gliding of the Mo plane results in a 2H → 2H′ transition. The shadow atomic model shows the original 2H-MoS2 structure.
The three planes (Mo, S and S′) in single-layer MoS 2 can glide individually to give different transitions. Full size image Although the coexistence of metallic and semiconducting phases has indeed been reported in chemically exfoliated MoS 2 by Eda and colleagues 14 , the actual dynamical process of the transformation between 2H and 1T phases involving intralayer atomic plane gliding has never been experimentally proven, nor has the atomic process of the phase transition been investigated in situ . If one is to consider the possibility of intentionally introducing the phase transition in single-layered materials in a controllable manner, the atomic process of this phase transition as well as its boundary structures must be corroborated in order to reliably design future low-dimensional devices. In situ observation of 2H/1T phase transition Here, we provide in situ observations of the transformation process between 2H and 1T phases in single-layered MoS 2 at high temperatures. To monitor the phase transition in situ , we operated an aberration-corrected scanning transmission electron microscope (STEM) at 60 kV to visualize the dynamic process of the atomic motions in single-layered MoS 2 . This technique has already been used and verified while studying another ideal two-dimensional material, graphene, for dislocations 15 , 16 , grain boundaries 17 , 18 , 19 and the dynamics of defect movement 20 , 21 . In the case of MoS 2 , few studies have been carried out, except for those that study defects and the native grain boundary between two MoS 2 domains 22 , 23 , 24 . A MoS 2 specimen doped with 0.6 at% Re was exfoliated and transferred to a microgrid 25 , 26 . To promote the phase transition, the specimen was heated to ∼ 400–700 °C in a microscope to provide thermal activation energy for atom displacement. An example of the phase transition is provided in Fig. 2a–d as sequential annular dark-field (ADF) images, where the step-by-step progress of MoS 2 phase transformation at T = 600 °C is represented (see also Supplementary Movie 1 ). Figure 2e–h presents schematics correlating with the ADF images in Fig. 2a–d to illustrate the structural changes in the MoS 2 lattice. A corresponding model of the atomic movements in the 2H → 1T phase transition is presented in Fig. 2i–k . The Re dopants (indicated by arrowheads in Fig. 2a ) tend to substitute at the Mo sites and display brighter contrast 26 . The initial MoS 2 lattice ( Fig. 2a ) exhibits the 2H phase, with a honeycomb structure consisting of three Mo atoms and three overlapped S pairs arranged in a hexagon. At t = 100 s, two identical band-like structures (labelled α in Fig. 2b ) gradually form along two zigzag directions. This α-phase is a precursor that basically consists of three to four constricted MoS 2 zigzag chains ( Fig. 2i ). When two non-parallel α-phases are in contact, the atoms at the corner formed by the local acute angle are very densely packed, triggering them to glide towards the area with less atomic concentration to release the stress. As a consequence, a triangular 1T phase forms, as shown in Fig. 2c . The 1T phase, with its different S contrast, can be unambiguously discriminated from the 2H phase in the ADF image ( Supplementary Fig. 1 ). After continuous electron-beam scanning, the size of the 1T phase can be gradually enlarged to ∼ 8.47 nm 2 ( Fig. 2d ). Two new phase boundaries (β and γ in Fig. 2d,k ) are found at the edges of the 1T phase. 
The structures and dynamic behaviours of these boundaries will be discussed in detail in the following. The phase transformation occurs only in the area scanned by the electron beam, and no atom loss is needed to explain this phase transition. For more detailed discussions about the mechanism of phase transition see Supplementary Figs 2 and 3 . Figure 2: Atomic movements during 2H → 1T phase transformation in single-layered MoS 2 at T = 600 °C. a , Single-layered MoS 2 doped with Re substitution dopants (indicated by arrowheads) has the initial 2H phase of a hexagonal lattice structure with a clear HC. b , At t = 100 s, two identical intermediate (precursor) phases (denoted α) form with an angle of 60°, and consist of three constricted Mo zigzag chains. c , At t = 110 s, a triangular shape indicating the 1T phase ( ∼ 1.08 nm 2 ) appears at the acute corner between the two α-phases. The 1T phase provides noticeable contrast because of the S atoms at the HC sites (Supplementary Fig. 1 ). d , At t = 220 s, the area of the transformed 1T phase is enlarged to ∼ 8.47 nm 2 . Three different boundaries (α, β and γ) are found at the three edges between the 1T and 2H phases. e – h , Simple schematic illustrations of the 2H → 1T phase transition corresponding to the ADF images in a – d , respectively. i , Atomic model of α-phase formation by the constriction of three Mo zigzag chains. j , Nucleation of the 1T phase (triangular) with the Mo + S (or S′) atoms gliding in the directions indicated by blue and pink arrows. k , β-Boundary formation at the growth frontier side. The α 1 -phase transforms to a γ-boundary, and the α 2 -phase becomes wider (Supplementary Fig. 2 ). Full size image The phase transformation in single-layered MoS 2 involves numerous atomic displacements besides the simple atomic plane gliding. We investigated the atomic process of phase transitions for more than 100 cases using in situ STEM, for cases involving 2H → 1T, 1T → 2H, 2H → 2H′ and 1T → 1T′ phase transitions ( Table 1 ). We then tried to categorize these transitions into three important elemental steps: (i) nucleation (formation) of the α-phase as a precursor or an intermediate state, and (ii, iii) migration of the β and γ boundaries. Figure 3 shows these three steps with independent sequential ADF images. See also Supplementary Figs 4 and 5 for structure models and image simulations of the α-phase and β- and γ-boundaries. Table 1 Summary of phase transformation in single-layered MoS 2 . Full size table Figure 3: Three elemental steps responsible for phase transitions in single-layered MoS 2 ( T = 600 °C). a – d , α-Phase (three or four zigzag chains) formation. a , Nucleation of an α-phase at an angle of 60° with the other α-phases. The α-phase shows three or four constricted zigzag MoS 2 chains. Three white lines highlight the distance between the zigzag chains, with the in-plane constriction in the α-phase being ∼ 15% that of the original MoS 2 . b , Growth of the α-phase. c , At t = 117 s, the α-phase begins to migrate rightward. The left side of the bottom α-phase (indicated by an arrowhead) disappears and reverts to the initial MoS 2 lattice. Re dopants are marked by green circles. d , Constriction (green arrows) induces strain in-plane (left), and the model α-phase forms with a reduced Mo–Mo distance (right). The S atoms in the α-phase are also misaligned vertically. e – h , β-Boundary migration. e , Single-layered MoS 2 with 2H phase. The orientation of the initial 2H phase is indicated by the blue triangle. 
f , At t = 60 s, the β-boundary (highlighted by yellow shading) appears in the middle of the 2H-MoS 2 . The left-hand side of the β-boundary demonstrates the 1T phase. g , The β-boundary migrates rightward and the 1T phase is enlarged. h , Schematic model before (top) and after (bottom) gliding of the Mo + S (or S′) atoms, which causes β-boundary migration. In e – g the ADF images are filtered by a local two-dimensional Wiener filter to enhance the contrast. i – l , γ-Boundary (two zigzag chains) migration. i , Single-layered MoS 2 with 1T phase (α-phase is also visible). j , At t = 20 s, a γ-boundary (highlighted by purple shading) appears in the middle. The left-hand side of the γ-boundary demonstrates the nucleated 2H phase. k , The γ-boundary migrates rightward and has a non-straight structure. l , Schematic model before (top) and after (bottom) gliding of top S (or S′) atoms, which drives γ-boundary migration. Scale bars, 1 nm. Full size image α-Phase formation The α-phase is a precursor structure that is essential before phase transition. It is an intermediate state, but forms a stable structure at high temperatures under electron-beam irradiation. In the α-phase, Mo atoms do not show a trigonal arrangement, but align as zigzag chains. The Mo–Mo distance is locally compressed along these zigzag chains, resulting in a local strain in the MoS 2 lattice ( Fig. 3a , Supplementary Movie 2 ). The Mo–Mo in-plane constriction is probably connected to the S out-of-plane displacement induced by the electron beams. These local strains must be released by changing the Mo–S bond angles, although our STEM observation was not capable of proving the exact strained structure of this α-phase. Figure 3d presents a schematic of the in-plane constriction (green arrows, left) and a structure model of the α-phase (right). The S out-of-plane displacement can propagate in the zigzag direction and so elongate the α-phase ( Fig. 3b ). Interestingly, the α-phase has a strong tendency to nucleate at the vicinity of Re substitution dopants ( Supplementary Fig. 3 and Movie 3 ). Presumably, the initial out-of-plane protuberance of the Re–S bond could help the S out-of-plane displacement and thus the formation of the α-phase 26 . Note here that the α-phase always consists of three or four MoS 2 zigzag chains and does not expand in width ( Supplementary Fig. 6 and Movie 4 ). There is no atom loss during the formation and elongation (directional growth) of the α-phase. In Fig. 3c , during migration of the central α-phase, the left part of the bottom α-phase indeed disappears and reverts to the original 2H structure (indicated by an arrowhead). Such a reversible phase transformation between the 2H and α-phase proves that there is no massive atom loss during α-phase formation. Even though the electron beam is certainly required to displace the S atoms out-of-plane, no atom is kicked out by the knock-on effect. In the Supplementary Information we show another scenario in which prolonged electron-beam irradiation occasionally leads to the loss of a MoS 2 zigzag chain and the formation an agglomerated structure on the MoS 2 surface ( Supplementary Fig. 7 and Movie 5 ). The α-phase barely forms at room temperature (not shown). β-Boundary migration The β-boundary between the 2H/2H′ interfaces is a twin boundary containing the Mo–S four-membered rings, which in principle is the same as that recently observed at the boundary of two 60° rotated MoS 2 domains (that is, 2H and 2H′) synthesized by chemical vapour deposition 23 . 
The S atoms in the β-boundary are four-coordinated despite all the other phases having three-coordinated S atoms. The 2H → 2H′ phase transition requires Mo plane gliding and generates a β-boundary ( Supplementary Fig. 8 and Movie 6 ). The β-boundary can also be found between 2H and 1T phases ( Fig. 2d ); in this case, it is no longer a twin boundary. For example, Fig. 3e shows a typical ADF image of MoS 2 in the 2H phase with a threefold symmetry (orientation described by a blue triangle). At t = 60 s, the β-boundary (highlighted by yellow shading) appears in the middle of the 2H-MoS 2 ( Fig. 3f ). The left-hand side of the β-boundary becomes 1T phase. Note that the β-boundary shows up when a Mo plane and a S plane both glide during the 2H → 1T transition. Another simpler transition from 2H to 1T with only one S-plane glide results in the formation of a γ-boundary between the phases ( Table 1 and Supplementary Movie 9 ). The Mo + S (or S′) atoms gliding across the β-boundary ( Fig. 3h , left) then drive the β-boundary to migrate ( Fig. 3h , right). Figure 3g shows the β-boundary migrating rightward at t = 110 s. For a detailed structure and gliding model, see Supplementary Fig. 11 . γ-Boundary migration The γ-boundary consists of two constricted MoS 2 zigzag chains. The α-phase is made of exclusively three or four MoS 2 zigzag chains, but the γ-boundary always has two chains. The S atoms at the γ-boundary remain three-coordinated to the Mo atoms. Figure 3i presents an ADF image of MoS 2 in the initial 1T phase. The two S planes are misaligned vertically in the 1T phase. At t = 20 s, a γ-boundary (highlighted by purple shading) appears between the initial 1T phase and the nucleated 2H phase ( Fig. 3j ). The left-hand side of the γ-boundary transforms from 1T to 2H by S plane gliding. The schematic model shown in Fig. 3l (left) illustrates the S atoms sequentially gliding towards the γ-boundary, and the 2H phase then increasing with γ-boundary migration ( Fig. 3l , right). The γ-boundary migrates rightward step by step (the non-straight boundary showing the atomic step is indicated by an arrow in Fig. 3k ; Supplementary Movie 7 ). For a detailed structure and gliding model see Supplementary Fig. 11 . The distinct features of the boundary structure were also corroborated by electron energy loss spectroscopy (EELS). Recent literature has reported single impurity atoms of Si in a graphene lattice discriminated in three- and four-coordinated configurations 27 , 28 . Accordingly, the bonding states of the S atoms in the newly discovered boundaries are intriguing and would be most prominent in the S L -edge. The electron energy loss near-edge structures (ENLES) for the S L 23 -edge and Mo M 45 -edge were recorded in the β- and γ-boundary regions, and are shown in Supplementary Fig. 12 . The phase transformation presented here involves complicated dynamic processes. The gliding planes decide the relationship of the initial and final phases, as well as the correlated phase boundaries. Table 1 catalogues the results of a systematic investigation of the discovered phase transformations in single-layered MoS 2 , and all the corresponding schematic models and detailed discussions are presented in the Supplementary Information . In an attempt to control the phase transformation by the electron beam, we continuously recorded the size of the transformed area as a function of time. The data points plotted in Fig. 
4a show the relation between the electron dose and the area of the transformed phase in different thermal environments (400 °C < T < 700 °C). We used the dose instead of time, because the data were normalized by the dose rate per unit area. The phase-transformation area A increases with the electron dose D above a threshold dose D0 ≈ 40 MeV nm−2 that must be exceeded before the phase transformation is triggered, with a growth coefficient σ ≈ 0.028–0.061. The relation between the transformed area and the electron dose can be divided into two regions by D0. In region I, D < D0, the phase transformation does not start until the electron dose creates the intermediate-state structures, the α-phases. In region II, D > D0, the phase transition can be triggered in the local area in the temperature range 400–700 °C, and the transformed area starts to enlarge with increasing dose.
Figure 4: Time dependence of phase transformation process and fabrication of nanodevices in single-layered MoS2. a, Area of transformed phase as a function of total electron dose (instead of time). The dispersion is divided into two regions by the threshold electron dose (grey dotted line). Region I comprises the initial step to create the intermediate structures, the α-phase. In region II, the phase transition is initiated and the transformed area increases with increasing electron dose. Data sets were recorded at various temperatures ranging from 400 °C to 700 °C. The four-digit number after the temperature in the legend indicates the experiment batch number. b–f, Attempts to create prototypes of the nanodevices: 2H and 1T heterostructure with a γ-boundary, as a Schottky diode (b); 2H sandwiched between two 1T phases with two β-boundaries, as a Schottky barrier nanotransistor (c); single Mo hexagon chain formed on top of the 2H-MoS2, as a metallic quantum wire (d) (this is an exceptional case and its formation mechanism is still unclear); 1T phase embedded in a 2H phase, as an embedded metallic quantum dot (e); 2H phase embedded in a 1T phase, as an embedded semiconducting quantum dot (f). Scale bars, 1 nm.
Previous studies have suggested that the 2H → 1T transformation is triggered by a high doping concentration 12 . In our experiment, Re is an n-type dopant 26 , but the doping rate is relatively small (<1 at%). Accordingly, we can reasonably infer that the continuous electron-beam irradiation may play an electronic role in accumulating negative charge to trigger the phase transition.
Single layers with semiconducting and metallic domains
Because the electron beam scanning area and the irradiation time can be controlled easily in a STEM, we can intentionally introduce the phase transition in a chosen area with a predetermined size. Because the 1T and 2H phases have distinct electronic properties, this controllable local phase transition may enable bottom-up processes in the fabrication of nanoelectronics. To explore these possibilities, in Fig. 4 we demonstrate several attempts to produce prototypes of nanodevices. Figure 4b shows a serial junction of semiconductor and metallic phases, which can be regarded as a Schottky diode. A local semiconductor region sandwiched between two metallic electrodes forms a nanoscale transistor (Fig. 4c). A metallic wire can be embedded in the semiconducting matrix as a quantum lead (Fig. 4d). Finally, quantum dots (in a triangular shape) can be stably produced in the initial phase, with a metallic quantum dot embedded in semiconductor (Fig. 4e), or vice versa ( Fig.
4f ). To date, these structures have been fabricated only in an electron microscope and their functions have not yet been confirmed experimentally. The transfer process and the prevention of surface contamination remain obstacles to be overcome. The relatively stable single-layered structures of this system are, however, very promising in the quest to obtain the first single-layered electronic device. Although the low-dimensional nanodevice was first proposed using nanotubes of metallic and semiconducting components, it has turned out to be difficult to realize nanotube composites with controlled chiralities. Patterning single layers is definitely a more promising approach towards the realization of nanodevices. Indeed, it would be extremely intriguing to explore similar phenomena in MoWS 2 alloys with tunable bandgaps 29 and in n- and p-type doped dichalcogenides 26 . Methods Material synthesis and specimen preparation A single crystal of Re-doped MoS 2 was grown by a chemical vapour transport method using Br 2 as a transport agent at 950 °C (ref. 25 ). Mo, S and Re elements (99.99% purity, 10 g in total) containing Br 2 ( ∼ 5 mg cm −3 ) were cooled in a quartz ampoule with liquid nitrogen and sealed in vacuum (1 × 10 −6 torr). Single-crystalline Re-doped MoS 2 flakes (3 × 3 mm 2 surface area, 0.5 mm thickness) were mechanically exfoliated by Scotch tape and transferred to the surface of a Si substrate with 300 nm thermal oxide. The target single-layer flakes were transferred to a Mo quantifoil grid with 2-propanol and cleaned with chloroform 26 . The specimens were heated in the TEM chamber (vacuum of ∼ 1.7 × 10 −5 Pa) overnight at 550 °C in a JEOL heating holder to remove residual contamination. ADF-STEM ADF images were obtained using an aberration-corrected JEOL-2100F cold field-emission gun electron microscope equipped with a DELTA corrector and operated at an accelerating voltage of 60 kV. The convergence semi-angle and inner acquisition semi-angle were 35 and 79 mrad for the ADF imaging. The electron-beam current was ∼ 10–15 pA. The dwell time per pixel was ∼ 38–76 µs and the pixel size ∼ 0.0139–0.026 nm for sequential images (movies), corresponding to an electron dose of ∼ 0.7–1.8 × 10 7 e nm −2 . Electron beam damage on the MoS 2 was observed when the total electron dose exceeded ∼ 5 × 10 8 to 11 × 10 8 e nm −2 . Surface cleanliness was critical to preventing damage, and hydrocarbon contamination on the surface enhanced damage development. False-colour images and the alignment of sequential images (movies) were performed using ImageJ. Figure 3e–g presents filtered images using a local two-dimensional Wiener filter to enhance the contrast. EELS Data were recorded using a Gatan Quantum camera. The EEL image spectra in Fig. 4 consist of 12 × 12 pixels, obtained using a 0.1 nm probe with 0.05 nm increments for each step. Each spectrum was acquired in 0.5 s and was summed in the vertical direction to increase the signal-to-noise ratio. The energy dispersion for the recorded spectra was 0.25 eV and the zero-loss width of the incident electron beam was ∼ 0.35 eV. The EELS collection semi-angle was ∼ 79 mrad. Structure modelling and image simulations MoS 2 polymorphous and phase boundary models were constructed using CrystalMaker and geometry optimizations using HyperChem. ADF image simulations were carried out using QSTEM with a probe size of 1 Å (spherical aberration coefficient, Cs = 1 µm, Scherzer defocus = −4 nm).
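To make the dose bookkeeping above concrete, the R sketch below first converts the quoted beam parameters into a per-frame electron dose, then fits a thresholded dose–area relation. Since the text gives D0 and σ but not the explicit functional form of A(D), the linear-beyond-threshold model is an assumption used purely for illustration, on simulated points.

```r
# Electron dose per frame from the quoted ADF imaging parameters
# (mid-range values: 12 pA beam current, 50 us dwell time, 0.02 nm pixels).
e_charge <- 1.602e-19                          # C per electron
dose <- (12e-12 * 50e-6 / e_charge) / 0.02^2   # electrons per nm^2
cat(sprintf("dose per frame ~ %.1e e/nm^2\n", dose))  # ~1e7, as quoted

# Hypothetical fit of transformed area A versus dose D with a threshold
# D0. The text reports D0 ~ 40 MeV nm^-2 and sigma ~ 0.028-0.061 without
# stating the functional form, so A = sigma * max(0, D - D0) is assumed
# here only as an illustrative model, fitted to simulated data points.
set.seed(1)
D <- seq(0, 200, by = 10)                                 # MeV nm^-2
A <- pmax(0, 0.04 * (D - 40)) + rnorm(length(D), 0, 0.1)  # nm^2

sse <- function(par) sum((A - par[1] * pmax(0, D - par[2]))^2)
fit <- optim(c(sigma = 0.05, D0 = 30), sse)
fit$par    # recovered (sigma, D0), close to the simulated values
```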
(Phys.org) —A team of researchers with members from Japan, Taiwan and Switzerland has shown that it is possible to watch a phase transition occur in a 2D semiconducting material using a scanning transmission electron microscope (STEM). In their paper published in the journal Nature Nanotechnology, the team outlines how they used the microscope to watch as a sample of the direct-bandgap semiconductor molybdenum sulphide underwent a phase shift. The ability to shift between metallic and semiconducting phases is an important feature of a material, and one that scientists would like to understand better. Until now, however, researchers had to infer some of what occurs when a material undergoes a phase shift, because they could not actually see it happening. In this new effort, the researchers show that it is possible to watch a phase shift directly, using a sample of molybdenum sulphide. In so doing, they have discovered that the shift proceeds through atom-by-atom movements, rather than a single collective displacement. The researchers suggest their observations hint at the prospect of creating layered 2D semiconductors "in-layer" rather than as a series of steps where one material is layered over another. That would allow structures to be created with atomic-scale precision. Molybdenum sulphide is polymorphic: depending on the arrangement of its sulphur atoms, it can function as either a metal or a semiconductor. Better still, the two phases can be made to interconvert via intralayer atomic plane gliding (a transversal displacement of one of the atomic planes), though the process had never actually been observed. As part of their research, the team induced plane gliding in situ while watching with the STEM, giving them an unprecedented view of what actually occurs during such phase shifting. The phase shift in the molybdenum sulphide sample was driven by heating of the sample together with the microscope's electron beam. The researchers suggest such a technique could also be used to induce phase shifting in other 2D materials. They also report that they have already used what they have learned to create several prototype nanodevices, one of which performs the functions of a Schottky diode.
10.1038/nnano.2014.64
Medicine
Possible new plan of attack for opening and closing the blood-brain barrier
Paper: Mfsd2a is critical for the formation and function of the blood–brain barrier, dx.doi.org/10.1038/nature13324 Journal information: Nature
http://dx.doi.org/10.1038/nature13324
https://medicalxpress.com/news/2014-05-blood-brain-barrier.html
Abstract The central nervous system (CNS) requires a tightly controlled environment free of toxins and pathogens to provide the proper chemical composition for neural function. This environment is maintained by the ‘blood–brain barrier’ (BBB), which is composed of blood vessels whose endothelial cells display specialized tight junctions and extremely low rates of transcellular vesicular transport (transcytosis) 1 , 2 , 3 . In concert with pericytes and astrocytes, this unique brain endothelial physiological barrier seals the CNS and controls substance influx and efflux 4 , 5 , 6 . Although BBB breakdown has recently been associated with initiation and perpetuation of various neurological disorders, an intact BBB is a major obstacle for drug delivery to the CNS 7 , 8 , 9 , 10 . A limited understanding of the molecular mechanisms that control BBB formation has hindered our ability to manipulate the BBB in disease and therapy. Here we identify mechanisms governing the establishment of a functional BBB. First, using a novel tracer-injection method for embryos, we demonstrate spatiotemporal developmental profiles of BBB functionality and find that the mouse BBB becomes functional at embryonic day 15.5 (E15.5). We then screen for BBB-specific genes expressed during BBB formation, and find that major facilitator super family domain containing 2a ( Mfsd2a ) is selectively expressed in BBB-containing blood vessels in the CNS. Genetic ablation of Mfsd2a results in a leaky BBB from embryonic stages through to adulthood, but the normal patterning of vascular networks is maintained. Electron microscopy examination reveals a dramatic increase in CNS-endothelial-cell vesicular transcytosis in Mfsd2a −/− mice, without obvious tight-junction defects. Finally we show that Mfsd2a endothelial expression is regulated by pericytes to facilitate BBB integrity. These findings identify Mfsd2a as a key regulator of BBB function that may act by suppressing transcytosis in CNS endothelial cells. Furthermore, our findings may aid in efforts to develop therapeutic approaches for CNS drug delivery. Main Two unique features of the CNS endothelium determine BBB integrity ( Extended Data Fig. 1 ) 2 . One is specialized tight junctions between a single endothelial cell layer lining the CNS capillaries, which form the physical seal between the blood and brain parenchyma 2 . In addition, CNS endothelial cells have lower rates of transcytosis than endothelial cells in other organs 3 . Peripheral endothelial cells display active vesicle trafficking to deliver nutrients to peripheral tissues, whereas CNS endothelial cells express transporters to selectively traffic nutrients across the BBB 1 , 3 , 11 . However, it is not clear when and how these properties are acquired. Furthermore, the molecular mechanisms that give rise to the unique properties of the CNS endothelium have not been identified. Although recent studies revealed molecular pathways involved in the development of the embryonic BBB 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , disruption of some of these genes affect vascular network development, making it difficult to determine whether barrier defects are primary or secondary to a broader vascular effect. We aimed to first identify the developmental time-point when the BBB gains functional integrity, and then use that time window to profile BBB-specific genes when the BBB is actively forming, to maximize the chance of identifying key regulators. 
The prevailing view has been that the embryonic and perinatal BBB are not yet functional 1 . However, previous embryonic BBB functionality studies were primarily performed by trans-cardiac tracer perfusion, which may dramatically affect blood pressure, cause bursting of CNS capillaries, and artificially produce leakiness phenotypes 1 , 20 . To circumvent these obstacles, we developed a method to assess BBB integrity during mouse development, in which a small volume of tracer is injected into embryonic liver to minimize changes in blood pressure ( Fig. 1a , see Supplementary Information for the method). Figure 1: A novel tracer-injection method reveals a temporal profile of functional BBB formation in the embryonic cortex. a , In utero embryonic liver tracer injection method; fenestrated liver vasculature enabled rapid tracer uptake into the embryonic circulation. b , Dextran-tracer injection revealed a temporal profile of functional cortical BBB formation. Representative images of dorsal cortical plates from injected embryos after capillary labelling with lectin (green, lectin; red, 10-kDa tracer). Top panel (E13.5), tracer leaked out of capillaries and was subsequently taken up by non-vascular parenchyma cells (arrowheads), with little tracer left inside capillaries (arrow). Middle panel (E14.5), tracer was primarily restricted to capillaries (arrow), with diffused tracer detectable in the parenchyma (arrowheads). Bottom panel (E15.5), tracer was confined to capillaries (arrow). n = 6 embryos (3 litters per age). PowerPoint slide Full size image Using this method, we identified the timing of BBB formation in the developing mouse brain and observed a spatial and temporal pattern of ‘functional-barrier genesis’ ( Fig. 1b ). We found that in E13.5 cortex a 10-kDa dextran tracer leaked out of capillaries and was taken up by non-vascular brain parenchyma cells ( Fig. 1b , top panel). At E14.5, the tracer was primarily restricted to capillaries, but tracer was still detected outside vessels ( Fig. 1b , middle panel). In contrast, at E15.5, the tracer was confined to vessels with no detectable signal in the surrounding brain parenchyma, similar to the mature BBB ( Fig. 1b , bottom panel). The development of BBB functionality differed across brain regions ( Supplementary Information and Extended Data Fig. 2 ). These data demonstrate that following vessel ingression into the neural tube, the BBB gradually becomes functional as early as E15.5. Based on the temporal profile of BBB formation, we compared expression profiles of BBB (cortex) and non-BBB (lung) endothelium at E13.5, using an Affymetrix array ( Supplementary Information ), and identified transcripts with significantly higher representation in cortical than lung endothelium ( Fig. 2 ). These transcripts included transporters, transcription factors, and secreted and transmembrane proteins ( Fig. 2c ). We were particularly interested in transmembrane proteins, owing to their potential involvement in cell–cell interactions that regulate BBB formation. Figure 2: Expression profiling identifies genes involved in BBB formation. a , Dot-plot representation of Affymetrix GeneChip data showing transcriptional profile of cortical (BBB) and lung (non-BBB) endothelial cells isolated at the critical barrier-genesis period (E13.5). Dots reflect average expression of a probe in the cortex ( x axis) and lung ( y axis). Cortex expression values above 500 arbitrary expression units (a.u.) are presented. 
Red dots indicate a fivefold higher expression in the cortex. Mfsd2a value is circled in blue. b , Pan-endothelial markers were highly represented, whereas pericyte, astrocyte, and neuronal markers were detected at extremely low levels in both cortex and lung samples. c , Barrier-genesis specific transporters, transcription factors, and secreted and transmembrane proteins were significantly enriched in the cortical endothelial cells. All data are mean ± s.d. n = 4 litters (4 biological replicates). PowerPoint slide Full size image One of the genes identified, Mfsd2a , had 78.8 times higher expression in cortical endothelium than in lung endothelium ( Fig. 3a ). In situ hybridization showed prominent Mfsd2a mRNA expression in CNS vasculature but no detectable signal in vasculature outside the CNS, such as in lung or liver ( Fig. 3b ). Moreover, both Mfsd2a mRNA and Mfsd2a protein were absent in the choroid plexus vasculature, which is part of the CNS but does not possess a BBB 1 ( Fig. 3c, d, g ). Mfsd2a expression in CNS vasculature was observed at embryonic stages (E15.5), postnatal days 2 and 5 (P2 and P5) and in adults (P90) ( Fig. 3b–e and Extended Data Fig. 3 ). Finally, Mfsd2a protein, which is absent in the Mfsd2a −/− mice ( Fig. 3e ) 21 , was specifically expressed in claudin-5-positive CNS endothelial cells but not in neighbouring parenchyma cells (neurons or glia) or adjacent Pdgfrβ-positive pericytes ( Fig. 3f ). Previously, Mfsd2a was reported to be a transmembrane protein expressed in the placenta and testis, which have highly restrictive barrier properties 22 . Together with our demonstration of Mfsd2a-specific expression in BBB-containing endothelial cells, this suggests that Mfsd2a may have a role in BBB formation and/or function. Figure 3: Mfsd2a is selectively expressed in BBB-containing CNS vasculature. a , At E13.5, Mfsd2a expression in cortical endothelium was ∼ 80-fold higher than in lung endothelium (microarray analysis, mean ± s.d.). b–d , Specific Mfsd2a expression in BBB-containing CNS vasculature (blue, Mfsd2a in situ hybridization; green, vessel staining (PECAM) adjacent sections). b , Mfsd2a expression at E15.5 in CNS vasculature (sagittal view of brain and spinal cord, arrows), but not in non-CNS vasculature (asterisk). c , Mfsd2a expression at E15.5 in BBB vasculature (cortex coronal view, for example, striatum, arrow), but not in non-BBB CNS vasculature (choroid plexus, dashed line). d , High-magnification coronal view of Mfsd2a expression in BBB-containing CNS vasculature but not in vasculature of the choroid plexus (left, dashed line), or outer meninges or skin (right, red arrows). e–g , Immunohistochemical staining of Mfsd2a protein shows specific expression in CNS endothelial cells (red, Mfsd2a; green, claudin-5 or lectin (endothelium); blue, DAPI (nuclei); grey, Pdgfrβ (pericytes)). e , Mfsd2a expression in the brain vasculature of wild-type mice (top panel), but not of Mfsd2a −/− mice (bottom panel). f , Mfsd2a expression only in claudin-5-positive endothelial cells (arrow; endothelial nucleus is indicated by an asterisk) but not in adjacent pericytes (arrowhead; pericyte nucleus is indicated by a double asterisk). g , Lack of Mfsd2a expression in choroid plexus vasculature (fourth ventricle coronal view, dashed line), as opposed to the prominent Mfsd2a expression in cerebellar vasculature. n = 3 embryos (3 litters per age). PowerPoint slide Full size image To test this hypothesis, we examined BBB integrity in Mfsd2a −/− mice. 
Using our embryonic injection method, 10-kDa dextran was injected into Mfsd2a −/− and wild-type littermates at E15.5. As expected, dextran was confined within vessels of control embryos. In contrast, dextran leaked outside the vessels in Mfsd2a −/− embryonic brains and was found in the cortical parenchyma (Fig. 4a) and in individual parenchyma cells (quantified as tracer-positive parenchyma cells per unit area of the developing lateral cortical plate; Fig. 4b). Furthermore, using imaging and spectrophotometric quantification methods 5 , we found that the leaky phenotype persisted in early postnatal (Extended Data Fig. 4) and adult (Fig. 4c) Mfsd2a −/− mice. Because the sequence of Mfsd2a has similarities to the major facilitator superfamily of transporters, and Mfsd2a facilitates the transport of tunicamycin in cancer cell lines 23 , we injected two non-carbohydrate-based tracers of different sizes to rule out the possibility that the dextran leakiness is due to interactions with Mfsd2a. Both sulfo-NHS-biotin (∼550 Da) and horseradish peroxidase (HRP; ∼44 kDa) tracers exhibited the leaky phenotype in Mfsd2a −/− mice (Extended Data Fig. 4a, b). Moreover, a larger molecular weight tracer, 70-kDa dextran, also displayed leakiness in Mfsd2a −/− mice (Extended Data Fig. 4d). In contrast to the severe barrier leakage defects (Fig. 4a–c and Extended Data Fig. 4), brain vascular patterning was similar between Mfsd2a −/− mice and littermate controls. No abnormalities were identified in capillary density, capillary diameter or vascular branching (Fig. 4d and Extended Data Fig. 5a) in embryonic (E15.5), postnatal (P4), and adult (P70) brains of Mfsd2a −/− mice. Moreover, we found no abnormalities in cortical arterial distribution in adult Mfsd2a −/− mice (Extended Data Fig. 5b). Therefore, Mfsd2a is specifically required for proper formation of a functional BBB but not for CNS vascular morphogenesis in vivo. This result, together with the temporal difference between cortical vascular ingression (E10–E11) and cortical barrier genesis (E13.5–E15.5), demonstrates that vascular morphogenesis and barrier genesis are distinct processes. Figure 4: Mfsd2a is required for the establishment of a functional BBB but not for CNS vascular patterning in vivo. a, b, Dextran-tracer (10 kDa) injections at E15.5 revealed a defective BBB in mice lacking Mfsd2a. a, The tracer was confined to the capillaries (arrow) in wild-type littermates, whereas Mfsd2a −/− embryos showed large amounts of tracer leakage in the brain parenchyma (arrowheads). b, Capillaries (arrows) surrounded by tracer-filled brain parenchyma cells (arrowheads) in Mfsd2a −/− cortex. Quantification of tracer-filled parenchyma cells in control versus Mfsd2a −/− cortical plates (bottom panel, n = 7 embryos per genotype). c, Spectrophotometric quantification of 10-kDa dextran tracer from cortical extracts of P90 mice, 16 h after intravenous injection, indicating that BBB leakiness in Mfsd2a −/− mice persists into adulthood (n = 3 mice per genotype). d, Mfsd2a −/− mice exhibit normal vascular patterning. No abnormalities were found in cortical vascular density, branching and capillary diameter (E15.5; green, PECAM). Quantification of wild-type and Mfsd2a −/− samples (n = 4 embryos per genotype). All data are mean ± s.e.m. MUT, mutant; N.S., not significant; WT, wild type. *P < 0.05 (Mann–Whitney U-test). We next addressed whether Mfsd2a regulates endothelial tight-junction formation, transcytosis, or both.
We examined these properties by electron microscopy in embryonic brains and in P90 mice following intravenous HRP injection 2 . Electron microscopy failed to reveal any apparent abnormalities in the ultrastructure of endothelial tight junctions (Fig. 5a). At E17.5, tight junctions in control and Mfsd2a −/− littermates appeared normal, with electron-dense linear structures showing ‘kissing points’ where adjacent membranes are tightly apposed (Fig. 5a). In electron micrographs of cerebral cortex from HRP-injected adults, peroxidase activity was revealed by an electron-dense reaction product that filled the vessel lumen. In both control and Mfsd2a −/− mice, HRP penetrated the intercellular spaces between neighbouring endothelial cells only for short distances. HRP was stopped at the tight junction, creating a boundary between HRP-positive and HRP-negative regions without leakage through tight junctions (Fig. 5a). In contrast, the CNS endothelium of Mfsd2a −/− mice displayed a dramatic increase in the number of vesicles, including luminal and abluminal plasma-membrane-connected vesicles and free cytoplasmic vesicles, which may indicate an increased rate of transcytosis (Fig. 5b). Specifically, pinocytotic events were evidenced by type II lumen-connected vesicles pinching from the luminal plasma membrane. Greater than twofold increases in vesicle number in Mfsd2a −/− mice compared to control littermates were observed at different locations along the transcytotic pathway (Fig. 5 and Extended Data Table 2). Furthermore, the HRP reaction product in adult mice was observed in vesicles invaginated from the luminal membrane and exocytosed at the abluminal plasma membrane only in Mfsd2a −/− mice (Fig. 5d), suggesting that HRP was subject to transcytosis in these animals but not in wild-type littermates (Extended Data Table 2). Together, these findings suggest that the BBB leakiness observed in Mfsd2a −/− mice was not caused by opening of tight junctions, but rather by increased transcellular trafficking across the endothelial cytoplasm. Figure 5: Mfsd2a is required specifically to suppress transcytosis in brain endothelium to maintain BBB integrity. Electron-microscopy examination of BBB integrity. a, Embryonic Mfsd2a −/− endothelium (E) showed no overt tight-junction ultrastructural defect (left, normal ‘kissing points’, small arrows). The vessel lumen (L) in HRP-injected adult mice was filled with electron-dense 3,3′-diaminobenzidine (DAB) reaction product (black) that diffused into intercellular clefts but stopped sharply at the junction without parenchymal leakage (right, arrows). b, Increased vesicular activity in embryonic Mfsd2a −/− endothelium (E17.5). Left, wild-type endothelium displayed very few vesicles (arrow). Right, Mfsd2a −/− endothelium contained many vesicles of various types: luminal (arrows) and abluminal (Ab; arrowheads) membrane-connected and cytoplasmic vesicles. c, Vesicular density quantification (as shown in b; reference WT values (dashed line); see also Supplementary Fig. 7a). d, Increased transcytosis was evident in HRP-injected adult Mfsd2a −/− mice (P90). In wild-type littermates (left), HRP activity was confined to the lumen with no HRP-filled vesicles. Many HRP-filled vesicles were found in Mfsd2a −/− endothelial cells (right; see quantification in Supplementary Fig. 7b). Luminal invaginations (dye uptake, arrows) and release to the basement membrane (abluminal side, asterisk). Scale bars, 100 nm (a, b), 200 nm (c). All data are mean ± s.e.m.
**P < 0.01, ***P < 0.001 (Student’s t-test). Studies using pericyte-deficient genetic mouse models have shown that pericytes can also regulate BBB integrity. These mice had increased vesicle trafficking without obvious junction defects 4,5 , similar to our observations in Mfsd2a −/− mice. We therefore examined the possibilities that Mfsd2a may regulate CNS endothelial transcytosis by modulating pericyte function, or that the effect of pericytes on endothelial transcytosis is mediated by Mfsd2a. First, pericyte coverage, attachment to the capillary wall, and pericyte ultrastructure and positioning relative to endothelial cells were normal in Mfsd2a −/− mice (Extended Data Fig. 6). These data, together with the lack of Mfsd2a expression in pericytes, suggest that the increased transcytosis observed in Mfsd2a −/− endothelial cells is not secondary to pericyte abnormalities. Second, a genetic reduction in pericyte coverage can influence endothelial gene expression profiles 4,5 . We therefore analysed published microarray data from two pericyte-deficient mouse models 5 and found a dramatic downregulation of Mfsd2a in these mice, with a direct correlation between the reduction of Mfsd2a gene expression and the degree of pericyte coverage (Extended Data Fig. 7a). Furthermore, immunostaining for Mfsd2a in Pdgfb ret/ret mice 5 revealed a significant decrease in Mfsd2a protein levels in endothelial cells that are not covered by pericytes (Extended Data Fig. 7b–d). Therefore, it is plausible that the increased vesicular-trafficking phenotype observed in pericyte-deficient mice is, at least in part, mediated by Mfsd2a, and that endothelial–pericyte interactions control the expression of Mfsd2a, which in turn controls BBB integrity. We demonstrate that Mfsd2a is required to suppress endothelial transcytosis in the CNS. Because of Mfsd2a’s involvement in human trophoblast cell fusion 24 and our genetic evidence for its role in suppressing transcytosis, we propose that Mfsd2a serves as a cell-surface molecule that regulates membrane fusion or trafficking. Indeed, on immuno-electron-microscopy examination, Mfsd2a protein was found in the luminal plasma membrane and associated with vesicular structures in cerebral endothelial cells, but not in tight junctions (Extended Data Fig. 8). At present, it is not clear whether the reported transporter function of Mfsd2a is related to its role in BBB formation. BBB breakdown has been reported in the aetiology of various neurological disorders 7,8,9,10 , and two separate Mfsd2a-deficient mouse lines were reported to exhibit neurological abnormalities, such as ataxic behaviour 21,25 . This novel physiological role of Mfsd2a may provide a valuable tool to address how a non-functional BBB could affect brain development. In addition, our finding highlights the importance of the transcytotic mechanism in BBB function, whereas most previous attention has focused on potential BBB leaks through intercellular junctions. Indeed, increased numbers of pinocytotic vesicles were observed following acute exposure to external stress inducers in animal models 26 , and have also been observed in human pathological conditions 9 . It will be interesting to examine whether Mfsd2a is involved in these pathological and acute-assault situations. We cannot be certain that the elevated levels of transcytosis in Mfsd2a −/− mice were not due to some form of acute cellular stress, but this is very unlikely.
This is because under stress, cells either respond to restore homeostasis or undergo cell death 27 . However, the increased transcytosis in Mfsd2a −/− mice persists from embryonic stages to adulthood, and up to 6 months of age these mice exhibit no sign of vascular degeneration (Extended Data Fig. 5c). Our identification of a key molecular player in BBB formation may also aid efforts to develop therapeutic approaches for efficient drug delivery to the CNS. As an accessible cell-surface molecule, Mfsd2a is poised to be a potential therapeutic target for BBB restoration and manipulation. Methods Summary The lowest volume of 10-kDa dextran tetramethylrhodamine, lysine fixable (D3312, Invitrogen) that still facilitated full perfusion was injected into the embryonic liver, while keeping the embryo connected to the maternal blood circulation through the umbilical cord. After 3 minutes of tracer circulation, embryonic heads were fixed by immersion in 4% paraformaldehyde (PFA) overnight at 4 °C, cryopreserved in 30% sucrose and frozen in TissueTek OCT (Sakura). Sections of 12 µm were then collected and post-fixed in 4% PFA at room temperature (20–25 °C) for 15 min, washed in PBS and co-stained with either α-PECAM antibody or with isolectin B4 to visualize blood vessels (see Methods for details). Transmission electron microscopy (TEM) imaging of P90 HRP injections and E17.5 cortex capillaries was done as described previously 2 . Online Methods Animals Wild-type Swiss-Webster mice (Taconic Farms) were used for embryonic BBB functionality assays and expression profiles. Homozygous Tie2-GFP transgenic mice (Jackson Laboratory, strain 003658) were used for BBB transcriptional profiling. Mfsd2a-null mice 21 (Mouse Biology Program, University of California, Davis; MMRRC strain 032467-UCD, B6;129S5-Mfsd2atm1Lex/Mmucd) were maintained on a C57Bl/6;129SVE mixed background and used for testing the involvement of Mfsd2a in barrier genesis. Mfsd2a-null mutant mice were genotyped using the following PCR primers: 5′-CCTGGTTTGCTAAGTGCTAGC-3′ and 5′-GTTCACTGGCTTGGAGGATGC-3′, which give a 210-bp product for the Mfsd2a wild-type allele; and 5′-CACTTCCTAAAGCCTTACTTC-3′ and 5′-GCAGCGCATCGCCTTCTATC-3′, which give a 301-bp product for the Mfsd2a-knockout allele. Pregnant mice were obtained following overnight mating (the day of the vaginal plug was defined as embryonic day 0.5). All animals were treated according to institutional and US National Institutes of Health (NIH) guidelines approved by the Institutional Animal Care and Use Committee (IACUC) at Harvard Medical School. Immunohistochemistry Tissues were fixed with 4% paraformaldehyde (PFA) at 4 °C overnight, cryopreserved in 30% sucrose and frozen in TissueTek OCT (Sakura). Tissue sections were blocked with 5% goat serum, permeabilized with 0.5% Triton X-100, and stained with the following primary antibodies: α-PECAM (1:500; 553370, BD Pharmingen), α-claudin-5 (1:400; 35-2500, Invitrogen), α-Mfsd2a (1:500; Cell Signaling Technologies (under development)), α-Pdgfrβ (1:100; 141402, eBioscience), α-CD31 (1:100; 558744, BD Pharmingen), and α-SMA (1:100; C6198, Sigma Aldrich), followed by 568/488 Alexa Fluor-conjugated secondary antibodies (1:300–1:1,000, Invitrogen) or by isolectin B4 (1:500; I21411, Molecular Probes). Slides were mounted in Fluoromount G (EMS) and visualized by epifluorescence, light, or confocal microscopy. In situ hybridization Tissue samples were frozen in liquid nitrogen and embedded in TissueTek OCT (Sakura).
Sections (18 μm) were hybridized with a digoxigenin (DIG)-labelled mouse Mfsd2a antisense riboprobe (1,524–2,024 bp of NM_029662) at 60 °C overnight. A sense probe was used to ensure signal specificity. For detection, signals were developed using an anti-DIG antibody conjugated with alkaline phosphatase (Roche). After antibody treatment, sections were incubated with BM Purple AP Substrate (Roche). Embryonic BBB permeability assay The method is based on the well-established adult BBB dye-injection assay, with special considerations for the injection site and volume to accommodate the nature of the embryonic vasculature 20,28,29,30 . Four major modifications were made. First, embryos of deeply anaesthetized pregnant mice were injected while still attached via the umbilical cord to the mother’s blood circulation, minimizing abrupt changes in blood flow. Second, taking advantage of the sinusoidal, fenestrated and highly permeable liver vasculature, dye was injected using a Hamilton syringe into the embryonic liver and was taken into the circulation in a matter of seconds. Third, the dye volume was adjusted to the minimum that still allows detection in all CNS capillaries after 3 min of circulation. A high-fluorescence-intensity dye enables the use of small volumes and facilitates detection at the single-capillary level (10-kDa dextran tetramethylrhodamine, lysine fixable, 4 mg ml −1 (D3312, Invitrogen); 1 µl for E13.5, 2 µl for E14.5, 5 µl for E15.5). Fourth, traditional perfusion fixation was omitted, again to prevent damage to capillaries. Instead, fixable dyes were used to allow reliable immobilization of the dye at the end of the circulation time (the relatively small embryonic brain facilitates immersion fixation). Embryonic heads were fixed by immersion in 4% PFA overnight at 4 °C, cryopreserved in 30% sucrose and frozen in TissueTek OCT (Sakura). Sections of 12 µm were then collected and post-fixed in 4% PFA at room temperature for 15 min, washed in PBS and co-stained with either α-PECAM antibody or with isolectin B4 to visualize blood vessels. All embryos from each litter were injected blind before genotyping. Postnatal and adult BBB permeability assay P2–P5 pups were deeply anaesthetized and three methods were used. The first method involved injection of 10 µl of 10-kDa or 70-kDa dextran tetramethylrhodamine (4 mg ml −1 ; D3312, Invitrogen) into the left ventricle with a Hamilton syringe. After 5 min of circulation, brains were dissected and fixed by immersion in 4% PFA at 4 °C overnight, cryopreserved in 30% sucrose and frozen in TissueTek OCT (Sakura). Sections of 12 µm were collected and post-fixed in 4% PFA at room temperature for 15 min, washed in PBS and co-stained to visualize blood vessels with either α-PECAM primary antibody (1:500; 553370, BD Pharmingen), followed by a 488-Alexa Fluor-conjugated secondary antibody (1:1,000, Invitrogen), or with isolectin B4 (1:500; I21411, Molecular Probes). The second method involved injection of 10 µl of HRP type II (5 mg ml −1 ; P8250-50KU, Sigma-Aldrich) into the left heart ventricle with a Hamilton syringe. After 5 min of circulation, brains were dissected and immersion-fixed in 2% glutaraldehyde in 4% PFA in cacodylate buffer (0.1 M, pH 7.3) at room temperature for 1 h, then at 4 °C for 3 h, and then washed in cacodylate buffer overnight. Cortical vibratome sections (100 µm) were processed in a standard DAB reaction. The third method involved the use of EZ-link NHS-sulfo-biotin as a tracer, as described previously 17 .
Imaging A Nikon Eclipse 80i microscope equipped with a Nikon DS-2 digital camera was used to image HRP tracer experiments, vasculature density and pericyte coverage comparisons, and expression analyses. A Zeiss LSM 510 META upright confocal microscope was used to image dextran and NHS-sulfo-biotin BBB permeability assays. A Nikon FluoView FV1000 laser scanning confocal microscope and a Leica SP8 laser scanning confocal microscope were used for imaging Mfsd2a and pericyte-marker immunohistochemistry. Images were processed using Adobe Photoshop and ImageJ (NIH). Morphometric analysis of vasculature Coronal sections (25-µm thick) of E15.5, P4 and P70 brains were immunostained for PECAM. For vascular density and branching, confocal images were acquired with a Nikon FluoView FV1000 laser scanning confocal microscope and maximal projection images (5 per animal) were used for quantifications. The number of branching points was counted manually. Capillary density was quantified using MetaMorph software (Universal Imaging, Downingtown, Pennsylvania) by measuring the area occupied by PECAM-positive vessels per cortical area. The mean capillary diameter was measured manually in ImageJ from cross-sectional vascular profiles (20 per animal) on micrographs (5–7 per animal) taken under a ×60 objective with a ×2 digital zoom. For artery-distribution quantification, 25-µm-thick sections (P60) were stained for smooth muscle actin (SMA) and PECAM. The proportion of PECAM-positive brain vessels with artery (SMA) identity was quantified using MetaMorph and expressed as a percentage of controls. Quantification was carried out blind. Quantification of cortical-vessel pericyte coverage Pericyte coverage of cortex vessels in Mfsd2a −/− and wild-type littermate control mice was quantified by analysing the proportion of total claudin-5-positive endothelial length also positive for the pericyte markers CD13 or Pdgfrβ. Immunostaining was performed on 20-µm sections of P5 cortex. In each animal, 20 images from 10 different sections were analysed. The microvasculature was found to be completely covered by pericytes in both control and Mfsd2a −/− mice, and therefore no error bars are presented for the average pericyte coverage in Extended Data Fig. 6a, b (n = 3). All analysis was done with ImageJ (NIH). Quantification was carried out blind. Quantification of vessel leakage Epifluorescence images of tracer-injected sections co-stained with lectin were analysed manually with ImageJ (NIH). Coronal cortical sections (12 µm) of the same rostrocaudal position were used for the analysis. The same acquisition parameters were applied to all images and the same threshold was used. Tracer-positive cells found outside a vessel (in the parenchyma) were used as a parameter for leakage. For each embryo, at least 20 sections of a fixed lateral cortical plate area were scored. Four arbitrary leakage groups were classified based on the number of tracer-positive parenchyma cells per section (0, 1–5, 5–10 and 10–40). The average representation of each leakage group was calculated for Mfsd2a −/− and control embryos. Quantification was carried out blind. Spectrophotometric quantification of 10-kDa fluoro-ruby-dextran tracer was carried out on cortical extracts, 16 h after tail-vein injections in adult mice, as described previously 5 .
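As a concrete illustration of the leakage scoring just described, the sketch below (not the authors' script) bins hypothetical per-section counts of tracer-positive parenchyma cells into the four leakage groups and reports each group's representation for one embryo. Because the published bin labels (0, 1–5, 5–10, 10–40) overlap at their edges, non-overlapping edges are assumed here.

```python
# Illustrative sketch (not the authors' code) of the leakage-group
# classification: per-section counts are binned, then the fraction of
# sections in each group is computed per embryo.
from collections import Counter

GROUPS = ["0", "1-5", "6-10", "11-40"]  # assumed non-overlapping edges

def leakage_group(count: int) -> str:
    if count == 0:
        return "0"
    if count <= 5:
        return "1-5"
    if count <= 10:
        return "6-10"
    return "11-40"

def group_fractions(section_counts):
    """Fraction of sections falling in each leakage group (one embryo)."""
    tally = Counter(leakage_group(c) for c in section_counts)
    n = len(section_counts)
    return {g: tally[g] / n for g in GROUPS}

# Hypothetical per-section counts for one wild-type and one mutant embryo
wt_sections = [0, 0, 1, 0, 0, 2, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
mut_sections = [4, 12, 7, 9, 15, 3, 22, 8, 11, 6, 0, 18, 9, 7, 13, 5, 10, 25, 2, 8]
print("WT :", group_fractions(wt_sections))
print("MUT:", group_fractions(mut_sections))
```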
Transmission electron microscopy TEM imaging of P90 HRP injections and E17.5 cortex capillaries was carried out as described previously 2 . HRP (10 mg per 20 g body weight; Sigma Aldrich, HRP type II) was dissolved in 0.4 ml of PBS and injected into the tail veins of deeply anaesthetized P90 mice. After 30 min of HRP circulation, brains were dissected and fixed by immersion in a 0.1 M sodium-cacodylate-buffered mixture (5% glutaraldehyde and 4% PFA) for 1 h at room temperature, followed by 5 h in PFA at 4 °C. Following fixation, the tissue was washed overnight in 0.1 M sodium-cacodylate buffer and then cut into 50-µm-thick free-floating sections using a vibratome. Sections were incubated for 45 min at room temperature in 0.05 M Tris-HCl, pH 7.6, buffer containing 5.0 mg per 10 ml of 3,3′-diaminobenzidine (DAB, Sigma Aldrich) with 0.01% hydrogen peroxide. Sections were then post-fixed in 1% osmium tetroxide and 1.5% potassium ferrocyanide, dehydrated, and embedded in epoxy resin. E17.5 samples were processed as the P90 samples, without HRP injection and with longer fixation times (2–3 days at room temperature). Ultrathin sections (80 nm) were then cut from the block surface, collected on copper grids, stained with Reynolds’ lead citrate and examined under a 1200EX electron microscope (JEOL) equipped with a 2k CCD digital camera (AMT). Immunogold labelling for electron microscopy Mice were deeply anaesthetized and perfused through the heart with 30 ml of PBS, followed by 150 ml of a fixative solution (0.5% glutaraldehyde in 4% PFA prepared in 0.1 mM phosphate buffer, pH 7.4), and then by 100 ml of 4% PFA in phosphate buffer. The brain was removed, post-fixed in 4% PFA (30 min, 4 °C) and washed in PBS. Coronal brain sections (50-μm thick) were cut on the same day with a vibratome and processed free-floating. Sections were immersed in 0.1% sodium borohydride in PBS (20 min, room temperature), rinsed in PBS and pre-incubated (2 h) in a blocking solution of PBS containing 10% normal goat serum, 0.5% gelatine and 0.01% Triton. Incubation (24 h, 20–25 °C) with rabbit anti-Mfsd2a (1:100; Cell Signaling Technologies (under development)) primary antibody was followed by rinses in PBS and incubation (overnight, 20–25 °C) in a dilution of gold-labelled goat anti-rabbit IgGs (1:50; 2004, Nanoprobes). After washes in PBS and sodium acetate, the immunogold particles were silver-enhanced and sections were rinsed in phosphate buffer before processing for electron microscopy. Statistical analysis Comparisons between wild-type and Mfsd2a −/− pericyte coverage, and spectrophotometric quantification of 10-kDa fluoro-ruby-dextran tracer leakage, were performed with a Mann–Whitney U-test (appropriate for small sample sizes; each embryo was considered a sample). An unpaired Student’s t-test (GraphPad Prism 4) was used for comparisons between wild-type and Mfsd2a −/− mice for vascular density, artery distribution, number of vesicular types, mean capillary diameter, and Mfsd2a expression in pericyte-deficient mice. P < 0.05 was considered significant (StatXact, Cytel Software Corporation, Cambridge, Massachusetts, USA).
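To make the test choices concrete, here is a minimal sketch (not the authors' analysis code) applying both tests with scipy on hypothetical measurements; all numbers are placeholders.

```python
# Illustrative sketch: a Mann-Whitney U-test for small-sample,
# non-parametric comparisons, and an unpaired t-test for the
# morphometric measures, on hypothetical readings (arbitrary units).
from scipy.stats import mannwhitneyu, ttest_ind

wt = [1.0, 1.2, 0.9]    # e.g. n = 3 mice per genotype, as in Fig. 4c
mut = [3.1, 2.7, 3.5]

u_stat, p_rank = mannwhitneyu(wt, mut, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat}, P = {p_rank:.3f}")

t_stat, p_t = ttest_ind(wt, mut)  # unpaired two-tailed t-test
print(f"t = {t_stat:.2f}, P = {p_t:.4f}")
```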
Transcriptional profiling E13.5 Tie2-GFP embryos were micro-dissected for cortex and lungs. Cortex tissue was carefully cleared of the meninges and choroid plexus. FACS purification of GFP-positive cells and GeneChip analysis were performed as described previously 31 . RNA was purified with the Arcturus PicoPure RNA isolation kit (Applied Biosystems), followed by NuGEN Ovation V2 standard linear amplification and hybridization to the Affymetrix Mouse Genome 430 2.0 Array. All material from a single litter (10–13 embryos) was pooled and considered as a biological replicate. Four biological replicates were used. Each biological replicate represents a purification from a different litter performed on a different day. Transcriptional profile analysis of pericyte-deficient mice Expression data from a published study of pericyte-deficient mice 5 were obtained from the Gene Expression Omnibus (accession number GSE15892). All microarrays were analysed using the MAS5 probe-set condensation algorithm with Expression Console software (Affymetrix). P values were determined using a two-tailed Student’s t-test (n = 4). Mfsd2a protein expression in Pdgfb ret/ret mice Brain samples from P10–P14 mice and controls were kindly provided by C. Betsholtz. Sample processing and immunohistochemistry were carried out as described for all other samples in our study. Mfsd2a staining quantification was carried out on 12-μm cortical sagittal sections. Confocal images were acquired with a Nikon FluoView FV1000 laser scanning confocal microscope. Quantification of the mean grey value per vascular profile was done with ImageJ (NIH) by outlining vascular profiles according to lectin staining and measuring Mfsd2a intensity in these areas. In all images, Pdgfrβ antibody staining was used to test for the presence of pericytes in quantified vessels. n = 2 animals per genotype; 60 images comprising at least 600 vascular profiles were quantified per animal. Quantification was carried out blind. Accession codes Microarray data have been deposited in NCBI’s Gene Expression Omnibus and are accessible through GEO series accession number GSE56777.
Like a bouncer at an exclusive nightclub, the blood-brain barrier allows only select molecules to pass from the bloodstream into the fluid that bathes the brain. Vital nutrients get in; toxins and pathogens are blocked. The barrier also ensures that waste products are filtered out of the brain and whisked away. The blood-brain barrier helps maintain the delicate environment that allows the human brain to thrive. There's just one problem: The barrier is so discerning, it won't let medicines pass through. Researchers haven't been able to coax it to open up because they don't know enough about how the barrier forms or functions. Now, a team from Harvard Medical School has identified a gene in mice, Mfsd2a, that may be responsible for limiting the barrier's permeability—and the molecule it produces, Mfsd2a, works in a way few researchers expected. "Right now, 98 percent of small-molecule drugs and 100 percent of large-molecule drugs and antibodies can't get through the blood-brain barrier," said Chenghua Gu, associate professor of neurobiology at HMS and senior author of the study. "Less than 1 percent of pharmaceuticals even try to target the barrier, because we don't know what the targets are. Mfsd2a could be one." Most attempts to understand and manipulate blood-brain barrier function have focused on tight junctions, seals that prevent all but a few substances from squeezing between barrier cells. Gu and her team discovered that Mfsd2a appears to instead affect a second barrier-crossing mechanism that has received much less attention, transcytosis, a process in which substances are transported through the barrier cells in bubbles called vesicles. Transcytosis occurs frequently at other sites in the body but is normally suppressed at the blood-brain barrier. Mfsd2a may be one of the suppressors. "It's exciting because this is the first molecule identified that inhibits transcytosis," said Gu. "It opens up a new way of thinking about how to design strategies to deliver drugs to the central nervous system." Because Mfsd2a has a human equivalent, blocking its activity in people could allow doctors to open the blood-brain barrier briefly and selectively to let in drugs to treat life-threatening conditions such as brain tumors and infections. Conversely, because researchers have begun to link blood-brain barrier degradation to several brain diseases, boosting the Mfsd2a gene or its protein could allow doctors to strengthen the barrier and perhaps alleviate diseases such as Alzheimer's, amyotrophic lateral sclerosis (ALS) and multiple sclerosis. The findings may also have implications for other areas of the body that rely on transcytosis, such as the retina and kidney. The study was published May 14 in Nature. Back to the beginning As developmental biologists, Gu and her colleagues believed watching the barrier develop in young organisms would reveal molecules important for its formation and function. The team introduced a small amount of dye into the blood of embryonic mice at different stages of development and watched whether it leaked through the walls of the tiny capillaries of the mice's brains, suggesting that the blood-brain barrier hadn't formed yet, or stayed contained within the capillaries, indicating that the barrier was doing its job. This allowed them to define a time window during which the barrier was being built. The team was able to do this by devising a new dye injection technique.
Researchers studying blood-brain barrier leakage in adult organisms can inject dye directly into blood vessels, but the capillaries of embryos are too small and delicate. Instead, researchers typically inject dye into the heart. However, according to Gu, this can raise blood pressure and burst brain capillaries, making it difficult to tell whether leakage is due to blood-brain barrier immaturity or the dye procedure itself. She and her team used their vascular biology expertise to identify an alternate injection site that would avoid such artifacts: the liver. "This allowed us to provide definitive evidence that the blood-brain barrier comes into play during embryonic development," said Ayal Ben-Zvi, a postdoctoral researcher in the Gu lab and first author of the study. "That changes our understanding of the development of the brain itself." Telltale pattern Now that they knew when the barrier formed in the mice, the team compared endothelial cells—the cells that line blood vessel walls and help form the blood-brain barrier—from peripheral blood vessels and cortical (brain) vessels and looked for differences in gene expression. They made a list of genes that were expressed only in the cortical endothelial cells. From that list, they validated about a dozen in vivo. The team could have studied any of the genes first, but they were most intrigued by Mfsd2a because of its expression pattern. In addition to being switched on in brain vessels, it was active in the placenta and testis, two other organs that have barrier-type functions. Also, the gene is shared across vertebrate organisms that have blood-brain barriers, including humans. Gu and the team then conducted experiments in mice that lacked the Mfsd2a gene. They found that without Mfsd2a, the blood-brain barrier leaked (although its absence didn't prevent the blood vessels themselves from forming in the first place). The next question was why. "We focused on two basic characteristics: tight junctions between cells, which prohibit passage of water-soluble molecules, and transcytosis, which happens all the time in peripheral vessels but very little in the cortical vessels," said Gu. "We found the surprising result that Mfsd2a regulates transcytosis without affecting tight junctions. This is exciting because conceptually it says this previously unappreciated feature may be even more important than tight junctions." "At first we were looking at tight junctions, because we were also biased by the field," said Ben-Zvi, who will be starting his own lab later this year at The Hebrew University of Jerusalem. "We weren't finding anything on the electron micrographs even though we knew the vessels leaked. Then we noticed there were tons of vesicles. "It really shows that if you do systematic science and see something strange, you shouldn't dismiss it, because maybe that's what you're looking for." Next steps The team also began to study the relationship between the cortical endothelial cells and another contributor to the blood-brain barrier, cells called pericytes. So far, they have found that pericytes regulate Mfsd2a. Next, they want to learn what exactly the pericytes are telling the endothelial cells to do. Other future work in the Gu lab includes testing the dozen other potential molecular players and trying to piece together the entire network that regulates transcytosis in the blood-brain barrier. "In addition to Mfsd2a, there may be several other molecules on the list that will be good drug targets," said Gu.
"The key here is we are gaining tools to manipulate transcytosis either way: opening or tightening." As important as the molecules themselves, she added, is the concept. "I personally hope people in the blood-brain barrier field will consider the mind-shifting paradigm that transcytosis could be targeted or modulated," said Ben-Zvi. Better understanding—and potentially being able to manipulate—the molecular underpinnings of transcytosis could aid in the study and treatment of diseases in tissues beyond the brain, from the intestines absorbing nutrients to the kidneys filtering waste. Being able to open and close the blood-brain barrier also promises to benefit basic research, enabling scientists to investigate how abnormal barrier formation affects brain development and what the relationship may be between barrier deterioration and disease.
dx.doi.org/10.1038/nature13324
Chemistry
Scientists cook up new recipes for taking salt out of seawater
Hyungmook Kang et al, Molecular insight into the lower critical solution temperature transition of aqueous alkyl phosphonium benzene sulfonates, Communications Chemistry (2019). DOI: 10.1038/s42004-019-0151-2
http://dx.doi.org/10.1038/s42004-019-0151-2
https://phys.org/news/2019-07-scientists-cook-recipes-salt-seawater.html
Abstract Ionic liquid (IL)-water mixtures can exhibit a lower critical solution temperature (LCST) transition, but changes in long-range order and local molecular environment during this transition are not comprehensively understood. Here we show that in IL-H 2 O LCST mixtures, the IL forms loosely held aggregate structures that grow in size leading up to a critical temperature, whereas the aggregation of a fully miscible aqueous mixture, obtained by minor chemical modification of the anion, decreases with increasing temperature. Radial distribution functions from molecular dynamics simulations support the observation of aggregation phenomena in the IL-H 2 O mixtures. A local molecular structure of the ions is derived from multi-dimensional NMR experiments in conjunction with reported molecular dynamics simulations. In addition to considerable shifts of water’s hydrogen-bonding network in the fully miscible phase, by NMR we observe the response of the anion’s protons to the intermolecular thermal environment and the intramolecular environment, and find that the responses are determined by the sulfonate ionic functional group. Introduction Room-temperature ILs (RTILs) are ionic materials with a melting point below 100 °C, owing to functional groups that introduce steric hindrance and prevent close-packed structures. Owing to their ionic character, ILs have a number of desirable attributes, such as negligible vapor pressure, high ionic conductance, and often high thermal and chemical stability 1,2,3,4 . The physicochemical properties of ionic liquids can be tailored by chemical modification of the cation and/or anion, leading to a vast number (>10 14 ) of distinct ionic liquid combinations 5 . This presents an enormous library of ionic liquids to explore. To date, much of the fundamental and applied work has focused on imidazolium cation-based ILs 6,7,8,9 . A subclass of ionic liquids undergoes a thermoresponsive liquid–liquid phase transition with either an upper critical solution temperature (UCST) or a lower critical solution temperature (LCST). Such thermoresponsive IL-based mixtures have opened up new potential applications such as protein extraction 10,11,12 , metal-ion extraction 13 , and forward-osmosis draw solutes for water purification 14,15,16,17,18 . In liquid–liquid mixtures with an LCST transition, a single, miscible phase appears at lower temperatures. However, upon heating above a critical temperature T c , the single-phase liquid–liquid mixture separates into two immiscible phases. From a thermodynamic view, this behavior is understood in the framework of equation (1), where ΔG mix is the free energy of mixing, ΔH mix is the enthalpy of mixing, and ΔS mix is the entropy of mixing. $$\Delta G_{mix} = \Delta H_{mix} - T\Delta S_{mix}$$ (1) At lower temperatures, strong intermolecular interactions, such as hydrogen bonding, lead to a negative enthalpy of mixing and formation of a miscible phase between the two components. These intermolecular interactions are often highly directional and come at an entropic cost. Upon heating above T c , the entropic term dominates as intermolecular interactions are broken, and the system entropy can increase through phase separation owing to the increased degrees of freedom from the broken intermolecular interactions 19,20 , or if the dispersion forces between two like components (A–A and B–B) are greater than those between unlike components (A–B) 21 .
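To make equation (1) concrete, the short sketch below (illustrative only, not from the paper) evaluates ΔG mix for a hypothetical mixture in which both ΔH mix and ΔS mix are negative and temperature independent; the sign of ΔG mix then flips at T c = ΔH mix /ΔS mix , the LCST.

```python
# Illustrative numerical sketch of equation (1): favourable (negative)
# enthalpy of mixing versus an entropic penalty (negative dS_mix) gives
# demixing on heating. The numbers below are hypothetical placeholders.
dH_mix = -30.0e3   # J/mol, favourable intermolecular interactions
dS_mix = -97.0     # J/(mol K), entropic cost of directional bonding

T_c = dH_mix / dS_mix  # temperature where dG_mix = 0 (~309 K, ~36 C)
for T in (280.0, 300.0, T_c, 320.0):
    dG = dH_mix - T * dS_mix
    state = "miscible" if dG < 0 else "phase separated"
    print(f"T = {T:6.1f} K  dG_mix = {dG:8.1f} J/mol  -> {state}")
```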
While this type of behavior has been observed for many polymer-solvent systems, there are fewer cases of small-molecule LCST mixtures. Ionic liquids exhibiting an LCST in aqueous mixtures have been developed by Ohno and coworkers 22,23,24,25,26,27,28,29 . Their argument for the physicochemical conditions necessary for LCST behavior in ionic liquid-water mixtures is the balance between the hydrophilic and hydrophobic character of the component positive and negative ions 26 . If both the anion and cation are too hydrophobic, aqueous mixtures will be immiscible over the whole temperature range; conversely, if both components are too hydrophilic, aqueous mixtures will remain fully miscible independent of temperature. In studying related protonated tertiary amines, our team has found that it is not the total organic content of the molecular ions that drives this phenomenon but the proximity of the organic content to the charged center 30 . Recent studies have attempted to understand the molecular interactions responsible for the IL-water LCST transition. FT-IR spectroscopic probing of tetrabutylphosphonium [P 4444 ] 4-vinylbenzenesulfonate-water mixtures concluded that the C–H functional groups of the cation responded first to a temperature perturbation, followed by the sulfonate group of the anion. The proposed mechanism was that the cation initiates conformational changes (owing to its greater hydrophobic interaction with water), forming single cation–anion ion-pair aggregates through strong coulombic forces 31 . A different IL-H 2 O LCST mixture, obtained by replacing the 4-vinylbenzenesulfonate anion with trifluoroacetate ([P 4444 ][CF 3 COO]), was studied to examine the hydrophilic nature of the ions using a 1-propanol probing methodology. The results showed that [P 4444 ] has equally strong hydrophilic and hydrophobic character, whereas the anion exhibits slightly hydrophobic character. The number of water molecules hydrating the cation was 10 times that of the anion, and this large hydration shell of the cation results in an unfavorable entropy of mixing 32 . Cations are known to require greater hydration than anions 33 , and the largely neutral polarity (neither polar nor non-polar) of [P 4444 ] further explains why, to date, many of the observed IL-H 2 O LCST mixtures are based on quaternary ammonium and phosphonium cations 34 . MD simulations demonstrated that the LCST transition of [P 6668 ] and amino acid anions in aqueous solutions occurs through temperature-dependent changes in intermolecular interactions between the anion, cation, and water 35 . Specifically, the anion’s functional groups, –NH 2 and –COOH, are able to form a hydrogen bond to the carboxylate group of another anion. Similar interactions between anions are believed to play a role in the self-assembly of tertiary amine bicarbonates 30 . With increasing temperature, the anion-water and cation–anion interactions weaken, whereas anion–anion interactions increase, resulting in an LCST transition. Furthermore, radial distribution functions showed no clear interaction between the –CH 3 groups of one anion and the –COO − of another. Consequently, the proposed conclusions represent only a system-specific mechanism, as it cannot account for the LCST transition of tetraalkylphosphonium benzenesulfonate derivatives 24 , which have no hydrogen-bond-donor functional groups. Currently, there exists disagreement in the literature about the molecular interactions responsible for the IL-water LCST transition.
In this article, we study the changes in LCST aqueous mixtures of tetrabutylphosphonium 2,4-dimethylbenzenesulfonate [P 4444 ][DMBS] across temperature ranges in both the fully miscible and phase-separated regions to obtain further insight into the molecular mechanism of IL-H 2 O phase separation. The aggregation of the ILs is investigated experimentally and theoretically using dynamic light scattering and molecular dynamics simulations, respectively. Multi-dimensional NMR provides insight into the local molecular structure of the ILs with respect to the intermolecular and intramolecular environment. Results Temperature-dependent light scattering of RTIL Dynamic light scattering (DLS) was used to study the effects of aggregation and changes in long-range order for LCST mixtures. For each experiment, scattering intensity versus delay time was collected and then analyzed by software to give the distribution of decay constants arising from diffusion under Brownian motion. The decay constant, Γ, given by eq. (2), is a function of the diffusion coefficient, D t , and the scattering vector, q. $$\Gamma = - D_t q^2$$ (2) The scattering vector is given by eq. (3), where n D is the refractive index of the material, λ is the wavelength and θ is the detection angle. $$q = \frac{4\pi n_D}{\lambda}\sin\left(\frac{\theta}{2}\right)$$ (3) From the analyzed decay-time distribution and known scattering vector, the hydrodynamic diameter, D h , of the scattering particle can be determined from eq. (4), $$D_h = \frac{k_B T}{3\pi\eta(T) D_t}$$ (4) where k B is Boltzmann’s constant, η(T) is the viscosity and D t is the translational diffusion coefficient. Figure 1a shows the results for the particle size distribution versus temperature across the fully miscible region of a 50 wt.% (w/w) mixture of [P 4444 ][DMBS]|H 2 O. Initially, at temperatures 10 °C below T c , the solution shows minimal scattering aggregates, meaning the solution is homogeneous within the length scale of sensitivity of the DLS apparatus. However, around 30 °C, the solution shows clear correlation with an average scattering size of 3–4 nm, which enlarges with increasing temperature up to 9 nm at 35 °C, just below T c . A steady increase in water activity (observed vapor pressure relative to the vapor pressure of pure water, a w = p/p 0 ) suggests that the solute is steadily transitioning from an evenly distributed ideal solute to a smaller number of clusters. Fig. 1 Dynamic light scattering cumulant analysis results for particle size versus temperature. a Particle size versus temperature of [P 4444 ][DMBS]|H 2 O 50 wt.% over the fully miscible and phase-separated regions of the phase diagram. Phase separation temperature, T c , ca. 36 °C. b Particle size distribution of a [P 4444 ][BnzSO 3 ]|H 2 O 50 wt.% mixture versus temperature. Plotting the water activity as an observed colligative concentration of osmolality (Equation 5) demonstrates how the effective number of particles in solution steadily declines as aggregate size grows (Fig. 2). The Osm kg −1 versus temperature plot converges with that of pure water at the critical point, 36 °C, just above T c , which is consistent with limited solubility (<60 mM ideal solute particles) and/or large solute aggregates. $$\frac{-\ln(a_w)}{V_m} \approx \mathrm{Osm}\,\mathrm{kg}^{-1}$$ (5) Fig. 2 Colligative concentration through the vapor pressure of [P 4444 ][DMBS]|H 2 O 50 wt.%. See Eq. 5.
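The chain of equations (2)-(4) amounts to converting a fitted decay constant into a hydrodynamic diameter via the Stokes-Einstein relation. The sketch below (not the authors' analysis code) implements that conversion; the refractive index, viscosity, detection angle, and decay constant are hypothetical placeholders, and the magnitude of Γ is used.

```python
# Illustrative sketch of eqs. (2)-(4): fitted DLS decay constant ->
# translational diffusion coefficient -> hydrodynamic diameter.
import math

k_B = 1.380649e-23          # J/K, Boltzmann's constant

def scattering_vector(n_D: float, wavelength_m: float, theta_deg: float) -> float:
    """Eq. (3): q = (4*pi*n_D/lambda) * sin(theta/2)."""
    return 4.0 * math.pi * n_D / wavelength_m * math.sin(math.radians(theta_deg) / 2.0)

def hydrodynamic_diameter(gamma: float, q: float, T: float, eta: float) -> float:
    """Eqs. (2) and (4): D_t = |Gamma|/q^2, then D_h = k_B*T/(3*pi*eta*D_t)."""
    D_t = abs(gamma) / q**2                 # translational diffusion, m^2/s
    return k_B * T / (3.0 * math.pi * eta * D_t)

# Hypothetical instrument settings and fitted decay constant
q = scattering_vector(n_D=1.36, wavelength_m=633e-9, theta_deg=173.0)
D_h = hydrodynamic_diameter(gamma=2.0e4, q=q, T=303.15, eta=1.5e-3)
print(f"hydrodynamic diameter ~ {D_h * 1e9:.1f} nm")  # of order 10 nm here
```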
The aggregate size growth follows DLS measurements of a different LCST aqueous mixture, [P 4444 ][CF 3 COO], which showed a similar increase in particle size with increasing temperature leading up to the critical point 36 . That observation was also supported by additional experimental techniques and was assigned to surfactant-free micelle formation by the ionic liquid, which swelled in size with increasing temperature until macroscopic phase separation occurred. Aggregation behavior approaching the critical point for [P 4444 ][CF 3 COO] aqueous mixtures was also observed through density fluctuations from small-angle X-ray scattering (SAXS) 37 . However, the SAXS study did not support micelle formation, i.e., structures consisting of an ordered hydrophilic shell and a hydrophobic core 36 . The authors concluded that the aggregates formed non-distinct ‘fuzzy clusters’ composed of ionic liquid and water molecules 37 . What appears consistent across IL-water LCST mixtures is the change from a more homogeneous mixture (reduced density fluctuations, smaller aggregates) to a more inhomogeneous mixture upon heating up to T c and macroscopic phase separation. Thus, the system’s microstructure seems to change gradually with temperature perturbation rather than undergoing a sudden structural change at T c . This trend was compared against the same test on a chemically similar system that does not exhibit an LCST phase transition in water, tetrabutylphosphonium benzenesulfonate [P 4444 ][BnzSO 3 ], shown in Fig. 1b. In this system, the 2,4-methyl positions on the DMBS anion are replaced with hydrogens, yielding benzenesulfonate. Despite the minor chemical change, this system exhibits markedly different behavior. As noted by Ohno 24 , this mixture remains fully miscible versus temperature rather than undergoing a liquid–liquid phase separation. At lower temperatures, the system reveals scattering aggregates on the order of hundreds of nanometers. The aggregate size rapidly decreases by an order of magnitude and then shrinks more gradually to tens of nanometers upon further heating. The seemingly inert substitution of methyl groups with hydrogens results in the loss of the LCST transition and in aggregation behavior versus temperature opposite to that of the fully miscible phase of the LCST IL. One possible explanation for this distinct result is that [P 4444 ][BnzSO 3 ] in water forms extended apolar and polar networks, which is understood to occur in other ionic liquid mixtures 38,39,40 . In contrast, [P 4444 ][DMBS] cannot form extended networks (possibly due to steric disruption) and is relegated to distinct solvated ion-pair/pair clustering in water. Previously unexplored in LCST mixtures are changes in long-range order after phase separation into an aqueous-rich and an ionic liquid-rich phase. In the fully miscible state just below the critical point, the solution formed aggregates on the order of 9 nm. Above T c , each phase of the two-phase mixture shows substantial changes in long-range order. The ionic liquid-rich phase shows aggregates on the order of 500 nm at 37 °C, just above T c . The size of aggregates in the IL-rich phase subsequently decreases with increasing temperature, to 150 nm at 39 °C and then to 100 nm at 41 °C.
The initial 50-fold increase in IL aggregate size going from the miscible phase to the IL-rich phase would likely arise from the sudden decrease in water concentration and, additionally, the reduced electrostatic screening from water enabling larger aggregates to form. The aqueous-rich phase shows a nearly order-of-magnitude larger scattering diameter of 3000 nm compared to the IL-rich phase just above phase separation. While an increase in aggregate size matches the temperature-dependent exponential particle-growth trend in the miscible phase, this is also a seemingly counter-intuitive result, as the concentration of ionic liquid is greatly reduced in the aqueous phase. One would expect the aggregate structure to decrease in size with reduced ionic liquid content and increased screening between ion pairs from the higher water content. Koga et al. studied the higher-order derivatives of the Gibbs energy with regard to the excess enthalpy, and found that the LCST IL [P 4444 ][CF 3 COO] exhibited strong hydrophobic character, especially compared to other ionic liquids not exhibiting an LCST transition in water. The authors proposed that [P 4444 ][CF 3 COO], acting as an extreme hydrophobe, may not dissociate in water-rich regions 41 . Our observation of large IL aggregates in the water-rich phase supports this proposal, with the IL aggregating rather than dissociating. This decline in aggregate size upon additional heating is similar to the general trend in the fully soluble IL, [P 4444 ][BnzSO 3 ]. Molecular dynamics simulations of the ionic liquid tetraalkylphosphonium bis(oxalato)borate at very dilute concentrations in water found a ‘loose micelle-like aggregate’ structure, with the cation alkyl chains forming a hydrophobic core and a hydrophilic shell formed by polar segments of the anion and cation 40 . However, such a structure cannot be verified by the DLS measurements in this work. Molecular dynamics simulation Molecular dynamics (MD) simulations were employed to investigate the aggregation trends in the IL-H 2 O mixtures in response to temperature for both the [P 4444 ][DMBS] and [P 4444 ][BnzSO 3 ] systems. The radial distribution function (RDF), g(r), is a calculated parameter that quantifies the spatial correlation between specific atoms, and thus enables detailed understanding of structural features of microscopic ionic liquid systems. The g(r) is defined as the ratio of the local time-averaged number density of specific particles at a given distance, r, from an origin particle to the total average number density of the particles, and is expressed as $$g(r) = \frac{dn(r)}{\rho_n 4\pi r^2 dr}$$ (6) where ρ n is the total number density of the particles. The most highly charged atom of each species (Supplementary Fig. 3) represents the position of each molecule for the RDF analysis; i.e., the P, S, S, and O atoms are taken as the reference sites of the [P 4444 ] cation, the [DMBS] anion, the [BnzSO 3 ] anion, and the water molecule, respectively.
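For readers who want to reproduce the analysis in spirit, the following is a minimal sketch (not the authors' simulation code) of eq. (6): g(r) computed by histogramming minimum-image pair distances between reference sites of one species in a cubic periodic box; the coordinates are randomly generated placeholders.

```python
# Minimal single-frame RDF sketch: histogram pair distances, then
# normalize each shell by the ideal-gas expectation (eq. 6).
import numpy as np

def rdf(positions, box_length, n_bins=100, r_max=None):
    """g(r) between all pairs of one site type (minimum-image convention)."""
    n = len(positions)
    r_max = r_max or box_length / 2.0
    # Pairwise separation vectors with periodic minimum-image wrapping
    diff = positions[:, None, :] - positions[None, :, :]
    diff -= box_length * np.round(diff / box_length)
    dist = np.sqrt((diff**2).sum(axis=-1))[np.triu_indices(n, k=1)]
    counts, edges = np.histogram(dist, bins=n_bins, range=(0.0, r_max))
    r = 0.5 * (edges[1:] + edges[:-1])
    shell_vol = 4.0 * np.pi * r**2 * (edges[1] - edges[0])
    rho = n / box_length**3                 # total number density
    ideal = rho * shell_vol * n / 2.0       # expected unique pair counts
    return r, counts / ideal

# Hypothetical frame: 80 phosphorus reference sites in a 100-angstrom box
rng = np.random.default_rng(0)
P_sites = rng.uniform(0.0, 100.0, size=(80, 3))
r, g = rdf(P_sites, box_length=100.0)
print(f"g(r) near 7 angstrom: {g[(r > 6.5) & (r < 7.5)].mean():.2f}")
```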
Figure 3 shows the calculated RDF results for [P 4444 ][DMBS]. According to the cation–anion RDF (Fig. 3a), the ion pairs are strongly attracted to each other by means of strong electrostatic interactions. The pronounced peak observed at around 5 Å results from the geometry of the ions. The two simulations at temperatures above T c display a greater peak height compared to that of the lower-temperature cases, which is counter-intuitive given the greater thermal motion of molecules at higher temperature. This indicates that at higher temperatures ion pairs reside in closer proximity to each other. However, inspection of the two cases in the below-T c range shows a remarkable lack of difference. Thus, the apparent temperature dependence of the cation–anion g(r) suggests that the electrostatic interaction is a main driving force responsible for the aggregation in the ionic liquids. As no strong attractive potential exists among ions of the same charge, the cation–cation and anion–anion RDFs must be a consequence of the electrostatic interactions between oppositely charged ions. The cation–cation RDF (Fig. 3b) shows evidence of clustering in the above-T c range. Considering that a single cation with its butyl chains is around 6 Å in size, the multiple small peaks observed between 5 and 10 Å can all be interpreted as lying in the first coordination shell. The large number of peaks comes from different coordination orientations and indicates that the ions are loosely bound. The merging of neighboring small peaks at elevated temperature produces a major peak at 7 Å in the two cases above T c . The peak distance corresponds to the sum of the length of a butyl chain in [P 4444 ] and the O–S bond length of the sulfonate in [DMBS]. Therefore, the major peak indicates that the ions are compactly bound to each other by the strong electrostatic forces and that the ion-pair aggregation takes the dense form of multiple layers of cation–anion shells. The shift of the first small peak to larger radius as the temperature changes from 10 °C to 20 °C is also consistent with the experimental results (Fig. 1a) in the below-T c range. Nevertheless, since the MD simulations were carried out with 80 pairs of ions equilibrated in a simulation box with box lengths of less than 10 nm in all dimensions, some experimental results, such as aggregation on length scales of several thousands of nm, are difficult to capture at the typical size scale of an MD simulation. For the anion–anion RDF of Fig. 3c, since the [DMBS] anion is a relatively small, flat molecule that can be located between cations, and the aggregation is mainly driven by cation–anion interactions, no clear conclusion can be drawn. However, the aforementioned temperature-dependent trend is not observed in the [P 4444 ][BnzSO 3 ]|H 2 O mixture without LCST behavior, as shown in Supplementary Fig. 4a–c. The cation–anion RDF of the 50 wt.% [P 4444 ][BnzSO 3 ] mixture system shows the opposite trend of a smaller peak height with increasing temperature, which is also consistent with the experimental results of Fig. 1b. Fig. 3 RDFs from MD simulations. RDFs obtained for the 50 wt.% [P 4444 ][DMBS]|H 2 O system. (a) cation–anion, (b) cation–cation, (c) anion–anion, (d) water-water, (e) water-cation, and (f) water-anion pairs. Figure 3d–f presents the RDFs involving water molecules, which also support the experimental findings of the LCST characteristic. The overall g(r) of all RDFs with water molecules for the two cases above T c is lower than that of the two below-T c cases, which indicates that fewer water molecules occupy the space around the ions. The trend is more clearly observed in the ion-water RDFs and demonstrates that the ion-water interactions decline as temperature increases. Hydrogen bonding exists in this IL|H 2 O mixture system. The gain or loss of this directional bonding has been used as the most reliable approach to explain the LCST behavior of small molecules 42 .
In order to gain insight into hydrogen bonding in the IL|H 2 O mixtures, representative oxygen and hydrogen pair distributions in the [P 4444 ][DMBS] mixture system are presented in Fig. 4. The O–H separation based on the RDFs suggests a simple definition of hydrogen bonding as the radius of the first hydrogen-atom coordination shell around each oxygen atom, without any more complicated criteria. Each occurrence of an O–H approach within 2.45 Å for both pairs in Fig. 4 can therefore be counted as a hydrogen bond. Applying this O–H separation-distance definition, the number of hydrogen bonds per oxygen atom at each temperature was counted and is summarized in Fig. 4c. For the [P 4444 ][DMBS] mixture system with LCST behavior, the degree of hydrogen bonding decreases with increasing temperature. It is known that solutions of 50 wt.% or more contain essentially no free water 42 . In other words, all water molecules either participate in hydrogen bonding or in the formation of hydration shells around the ILs. A lesser degree of hydrogen bonding above T c means more water molecules contribute to the hydration shell of the ILs. However, for the [P 4444 ][BnzSO 3 ] mixture system, each oxygen atom is involved in more hydrogen bonds compared to the IL with LCST behavior. As a result, the shortage of water contributing to stable hydration interactions disturbs the mechanism leading to the large IL clusters. The number of hydrogen bonds per oxygen atom in the [BnzSO 3 ] anion increases as a function of temperature. The net number of hydrogen bonds (O w –H w and O A –H w ) for the mixture displays a notable lack of temperature dependence over a wide temperature range. An additional analysis of the angle distribution of water molecules was performed to verify the contribution of dipoles relative to the ions, but no dependence of the average angle on temperature was observed, as shown in Supplementary Fig. 5. Fig. 4 Calculated number of hydrogen bonds for the 50 wt.% ILs|H 2 O system. a, b RDFs between oxygen and hydrogen for the 50.8 wt.% [P 4444 ][DMBS] IL|H 2 O system. a Oxygen of the sulfonate in the anion-hydrogen in water, b oxygen in water-hydrogen in water pairs. Vertical dashed lines denote the O–H separation used to define hydrogen bonding. c Number of hydrogen bonds per oxygen atom as a function of temperature for both the 50 wt.% [P 4444 ][DMBS] and [P 4444 ][BnzSO 3 ] aqueous solutions. Error bars present the range of data recorded from MD per 1 ps.
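The hydrogen-bond count defined above is a simple distance criterion, sketched below (not the authors' analysis code) for one hypothetical frame; in practice, covalently bonded intramolecular O–H pairs would also need to be excluded from the count.

```python
# Minimal sketch of the distance-cutoff hydrogen-bond count: every O-H
# approach within the 2.45-angstrom first-shell radius scores as one
# bond, normalized per oxygen atom.
import numpy as np

HB_CUTOFF = 2.45  # angstrom, radius of the first H coordination shell

def hbonds_per_oxygen(o_pos, h_pos, box_length):
    """Count O-H contacts within the cutoff (minimum-image convention)."""
    diff = o_pos[:, None, :] - h_pos[None, :, :]
    diff -= box_length * np.round(diff / box_length)
    dist = np.sqrt((diff**2).sum(axis=-1))
    # (covalently bonded intramolecular O-H pairs should be excluded
    # here; omitted for brevity in this placeholder example)
    return (dist < HB_CUTOFF).sum() / len(o_pos)

# Hypothetical frame: oxygen and hydrogen coordinates in a 100-angstrom box
rng = np.random.default_rng(1)
oxygens = rng.uniform(0.0, 100.0, size=(500, 3))
hydrogens = rng.uniform(0.0, 100.0, size=(1000, 3))
print(f"H bonds per O: {hbonds_per_oxygen(oxygens, hydrogens, 100.0):.3f}")
```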
Temperature-dependent NMR and IR Spectroscopy of RTIL 1 H NMR was used to gain deeper molecular insight into the mechanism of the LCST phase transition. Changes in spectral shifts versus temperature were used to monitor and assess intra- and intermolecular interactions in the low-temperature fully miscible water–RTIL phase, the high-temperature water-rich phase, the high-temperature IL-rich phase, and the pure RTIL as a reference sample. Figure 5a shows a 50 wt.% solution of [P 4444 ][DMBS] below and above T c with Nile Red added as a dye to visually identify the two immiscible phases above T c . Nile Red is predominantly dissolved by the ionic liquid and therefore remains in the ionic-liquid-rich phase, revealing that this phase is less dense than the aqueous phase and resides on top. Control over which phase is probed is demonstrated in Fig. 5b . The sampling scheme for in situ temperature-dependent NMR assumes that any initial starting solution within the LCST miscibility gap will yield the same final concentrations of the IL-rich phase and the H 2 O-rich phase at a given temperature above T c . This assumption is validated by the IL content of the two immiscible phases versus initial concentration measured by Cai et al. 14 . Fig. 5 Setup and results of the temperature-dependent NMR of RTIL. a 50 wt.% solution of [P 4444 ][DMBS] below and above T c with Nile Red, which is predominantly dissolved by the ionic liquid. b Illustrations of controlling which phase is primarily probed above T c . Green lines indicate the ionic-liquid-rich phase and blue lines the aqueous-rich phase; the yellow dotted line indicates the visible separation of the two phases, and the black rectangle is the area primarily probed by the NMR. c 1 H NMR spectra of pure and H 2 O mixtures of [P 4444 ][DMBS]. Temperature and solution content are indicated with the spectra. Note: for the pure ionic liquid, protons E and F overlap. d Temperature-dependent proton shifts of a 50 wt.% solution referenced against their position at 25 °C. After phase separation, the IL-rich phase is probed. e Similar procedure to d except that a 37 wt.% solution is used to examine the aqueous phase above T c Full size image Pure [P 4444 ][DMBS] shows negligible ppm shifts versus temperature. The peaks narrow and finer structure resolves as motional mobility increases with temperature (Supplementary Fig. 6 ). Figure 5c compares the spectra of pure [P 4444 ][DMBS] and a 50 wt.% aqueous solution at 25 °C. Moving from pure IL to an aqueous mixture, the majority of the peak positions remain largely unchanged except for two hydrogen groups. The methylene hydrogen atoms next to the phosphorus cation core (peak F) show a substantial upfield shift of 0.24 ppm. These hydrogens would be more electropositive owing to their proximity to the cation core, and thus able to act as hydrogen-bond acceptors from the water oxygen's lone pair, resulting in the upfield shift. Hydrogen bonding between water and the methylene group adjacent to the phosphorus core was calculated to exist in [P 666(14) ][BOB] aqueous mixtures 40 . In contrast, the anion aromatic peaks B and C shift downfield by 0.08 ppm in the presence of water, likely as a result of acting as hydrogen-bond acceptors and having electron density pulled away. The aromatic peak A in closest proximity to the sulfonate group does not show any measurable shift in the presence of water. No meaningful shifts of the ionic liquid protons are observed over the miscible phase region at 25 < T < 35 °C (Fig. 5d, e ). Over the same region, the water protons show a linear shift of −0.012 ppm °C −1 , a slightly faster rate than the pure-water shift of −0.01 ppm °C −1 43 . This linear shift arises from the weakening of hydrogen bonds as temperature increases, resulting in increased electron density and shielding 44 . Above T c , water behaves differently in the two immiscible phases (Fig. 5d, e ) than in the miscible phase. The water remaining in the ionic-liquid-rich phase (right of Fig. 5d ), on the order of 10–15 wt.% 14 , moves upfield at a nearly threefold faster rate of −0.038 ppm °C −1 , as water's hydrogen bonds are further weakened. Water in the aqueous phase (Fig. 5e ) actually shows an initial downfield shift, likely as a result of water restructuring its extended hydrogen-bonded network as the RTIL solute phase separates. Upon further heating, water moves upfield at a rate of −0.01 ppm °C −1 , similar to that of bulk water, further indicating that the aqueous phase is restructuring to essentially resemble bulk water 45 .
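The temperature coefficients quoted above (−0.012 ppm °C −1 for water in the miscible region, −0.038 ppm °C −1 in the IL-rich phase) are the slopes of linear fits of chemical shift against temperature. A sketch of such a fit with made-up data points, not the measured shifts:

```python
import numpy as np

# Hypothetical water-peak positions (ppm) across the miscible region, 25-35 degC
temperature = np.array([25.0, 27.0, 29.0, 31.0, 33.0, 35.0])
shift_ppm = np.array([4.700, 4.676, 4.652, 4.628, 4.604, 4.580])

slope, intercept = np.polyfit(temperature, shift_ppm, 1)
print(f"temperature coefficient: {slope:+.4f} ppm/degC")  # -0.0120 here
```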
The chemical interpretation of the NMR shifts of the water peak is supported by the behavior of the –OH stretch mode in the Raman spectra shown in Fig. 6 . Moving from the fully miscible phase to the ionic-liquid-rich phase, there is a clear intensity loss in the low-frequency region of the –OH stretch, in which water is more strongly hydrogen bonded. Similarly, moving to the aqueous phase, an intensity increase in the same region is observed, and the –OH region more closely resembles that of pure water. Fig. 6 Confocal Raman spectra of a 40 wt.% solution of [P 4444 ][DMBS]. The inset shows the original normalized intensity before the main comparison with pure-water data. Spectra are baseline-subtracted using a polynomial function and normalized to the maximum intensity over the OH stretch region Full size image The ionic liquid protons in the fully miscible phase (T < T c ) showed negligible shifts. In the IL-rich phase (T > T c ), only a few ion protons show minor changes (Fig. 5d ). Cation peak F shifts downfield and the anion aromatic peaks, B and C, shift upfield. These are the same peaks that showed the greatest changes when moving from the pure IL to the aqueous mixture, but in the opposite direction. The IL protons in the H 2 O-rich phase at T > T c show very different behavior from the ions in the IL-rich phase (Fig. 5e ). Nearly all the IL protons shift downfield with increasing temperature, with aromatic peaks B and C showing the greatest shifts, followed by the Ph–CH 3 group E and the terminal cation alkyl group I. These downfield shifts can be attributed to increased hydrogen bonding associated with increased water content. Interestingly, the anion's Ph–H and Ph–CH 3 protons A and D nearest the sulfonate group show negligible shifts across the entire temperature range and in all three phases, whereas protons at the B, C, and E locations show much greater sensitivity to temperature and phase change. Such radically different behavior for protons on the anion may indicate that the protons nearest the sulfonate group, A and D, are sensitive to the intramolecular environment, whereas the protons furthest from the sulfonate group, B, C, and E, are sensitive to changes in the intermolecular environment. Temperature-dependent NMR was previously used to study the LCST behavior of two ionic liquids, ([P 4444 ][SS]) and ([P 4446 ][MC3S]), in aqueous solutions 31 , 46 . We note two important distinctions between our work and Wu's studies: the use of an external lock and reference in our measurements, whereas there the D 2 O signal in the sample was used, and the steps taken to control which phase is probed by the instrument. By referencing the spectrum to the chemical shift of tetramethylsilane, known to be largely temperature independent, we are able to track spectral changes with increasing temperature more accurately. This cannot be done using D 2 O as a reference because of its established temperature-dependent shift. Secondly, controlling which phase is probed above the critical temperature when performing in situ measurements on separated phases has important implications for data interpretation. In the earlier studies, integral-area changes versus temperature, referenced to D 2 O, were used to assign the formation of ionic liquid globules after a sharp decrease in integral area above T c was observed. However, we note that this observation could arise from probing the aqueous phase above T c , in which the HOD reference signal would greatly increase relative to the IL signal. Thus, we note two important advantages of using an external lock and reference and of actively controlling which phase is probed above T c when studying liquid–liquid phase separations by in situ NMR.
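Returning to the Raman spectra of Fig. 6: the preprocessing stated in its caption (polynomial baseline subtraction, then normalization to the maximum over the –OH stretch region) can be sketched as below. The window limits and polynomial order are illustrative assumptions; in practice the baseline polynomial would be fit to peak-free regions of the spectrum rather than to the whole trace.

```python
import numpy as np

def preprocess_oh_region(wavenumber, intensity, window=(2800.0, 3800.0), order=3):
    """Subtract a polynomial baseline, then normalize to the maximum
    intensity inside the -OH stretch window (cm^-1 limits are assumed)."""
    coeffs = np.polyfit(wavenumber, intensity, order)   # crude whole-spectrum fit
    corrected = intensity - np.polyval(coeffs, wavenumber)
    mask = (wavenumber >= window[0]) & (wavenumber <= window[1])
    return corrected / corrected[mask].max()
```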
Temperature-dependent and spin-recovery NMR study of LCST RTIL Temperature-dependent spin-lattice relaxation times (t 1 ) were examined to probe rotational mobility in response to temperature and phase separation. As in the VT- 1 H NMR experiments, t 1 relaxation changes were measured in the low-temperature fully miscible water–RTIL phase (T < T c ) and in the phase-separated water-rich and IL-rich phases (T > T c ). All protons showed strictly monoexponential decay. As with the temperature-dependent chemical shifts, the t 1 times of the IL protons show no significant changes with temperature in the fully miscible region, whereas the water protons show continually increasing rotational mobility (increasing t 1 ) as temperature increases (Fig. 7a, b ). After phase separation, water shows markedly different behavior in the IL and aqueous phases. Above T c , the water t 1 of the H 2 O-rich phase greatly increases towards that of pure water (Fig. 7b ). This signal arises from bulk water and thus is not informative about how water interacts within, or in close proximity to, the IL nanoparticles observed through dynamic light scattering. Fig. 7 Temperature-dependent 1 H spin-lattice relaxation of a [P 4444 ][DMBS]|H 2 O solution. a 50 wt.% solution, miscible to IL-rich phase above T c . b 37 wt.% solution, miscible to aqueous phase above T c Full size image The t 1 of residual water in the IL-rich phase (Fig. 7a ) shows a continuous decrease with increasing temperature above T c , indicating reduced rotational mobility. As temperature increases, the water content of the IL-rich phase decreases, resulting in a reduction of IL–water cluster size and/or shortened water chains 40 . Effectively, water becomes more nanoconfined by the IL, resulting in decreased rotational mobility. A decreased t 1 lifetime and the corresponding reduced rotational mobility were previously observed for nanoconfined water in microfluidic devices as a function of volume 47 .
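The t 1 values above come from inversion-recovery data (the 180-τ-90 sequence detailed in Methods) fit to a monoexponential recovery, consistent with the strictly monoexponential decays observed. A sketch with synthetic intensities standing in for measured peak areas:

```python
import numpy as np
from scipy.optimize import curve_fit

def inversion_recovery(tau, m0, t1):
    """Longitudinal magnetization after a 180-tau-90 sequence,
    assuming a single (monoexponential) relaxation component."""
    return m0 * (1.0 - 2.0 * np.exp(-tau / t1))

# Synthetic data: true t1 = 2.0 s, geometrically spaced delays, small noise
tau = np.geomspace(0.05, 12.0, 10)
rng = np.random.default_rng(2)
signal = inversion_recovery(tau, 1.0, 2.0) + rng.normal(0.0, 0.01, tau.size)

(m0_fit, t1_fit), _ = curve_fit(inversion_recovery, tau, signal, p0=(1.0, 1.0))
print(f"fitted t1 = {t1_fit:.2f} s")
```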
2D ROESY NMR measurement A temperature-dependent 2D ROESY NMR method was used to obtain further information about the structure of the individual IL–H 2 O phases and the mechanism of the relevant phase transitions. Cross peaks in 2D ROESY arise from the nuclear Overhauser effect, in which nuclear spin polarization couples to a different nuclear spin through space. Thus, cross peaks represent nuclei correlated through space, up to approximately 0.5 nm, and their intensities are proportional to the magnitude of magnetization transfer between protons. Figure 8a shows the 2D ROESY spectrum of a 50 wt.% aqueous mixture of [P 4444 ][DMBS]. Immediately clear from the spectrum is the large number of proton correlations between the anion and cation protons, making determination of structural ordering difficult. Fig. 8 2D ROESY NMR measurement and integrated values. a 2D ROESY spectrum of a 50 wt.% solution at 25 °C, b–d NOE cross-peak integrals, normalized to the DMBS integral and to the number of hydrogens contributing to each cross peak, noted in parentheses. b anion–anion, c cation–cation, d cation–anion Full size image Changes in cross-peak intensities were calculated versus temperature. The cross-peak integrals were obtained by elliptical integration, with ranges taken from the 1D peak integral limits in the f1 and f2 dimensions for each proton, normalizing to the external DMBS diagonal integral as a reference for comparison across temperatures, and then normalizing to the total number of hydrogens contributing to each cross peak to allow comparison between cross peaks (e.g., cross-peak D–I, n H = 15). Unfortunately, the NOE cross peaks versus temperature reveal no meaningful trends, likely because the observed cross-peak intensities are sensitive to the experimental parameters used. This is further exacerbated by the likely change in coupling constants with temperature. The results of the analysis are shown in Fig. 8b . Comparing the various cross-peak intensities reveals a few important trends: (i) the anion Ph- meta -H peaks B and C show strong interactions with the terminal cation alkyl chain I; (ii) the Ph- ortho -H peak A, the aromatic hydrogen nearest the anionic sulfonate group, shows an interaction with the alpha -methylene group F, nearest the cationic phosphorus core; (iii) the Ph- ortho -CH 3 group D closest to the anionic sulfonate group shows a correlation with the cation's alpha -methylene group F, whereas a cross peak between the anion's methyl group furthest from the sulfonate group, Ph- para -CH 3 E, and the cation's alpha -methylene group F is not observed; and (iv) cation–cation and anion–anion cross peaks, especially for protons 2–3 carbon neighbors away, show much greater intensity than anion–cation cross peaks. Trends i through iii suggest that the cation is in intimate, overlapping contact with the anion. The functional groups ortho to the anion's charged group correlate with the cation's core methylene group alpha to the cation's charge center. Functional groups more distant from the charge centers move out of the way so that the charge centers can make a close approach. This orientation is supported by the terminal methyl of the cation's butyl group interacting with the anion's meta -hydrogens, which are more distant from the anion's charge center. The greater intensity of the anion–anion and cation–cation cross peaks noted in trend iv most likely arises from TOCSY (total correlation spectroscopy) artifacts, despite the use of pulse sequences to suppress such signals. As such, no conclusions can be drawn from these cross peaks. The observed anion–cation cross peaks support a structural organization in which the anion's sulfonate group interacts in close proximity to the cation's core, with the aromatic ring facing outwards from the core. These results, in conjunction with the large chemical shift of methylene group F upon solvation in water, are in good agreement with modelled structures of [P 666(14) ]–H 2 O–[BOB] complexes 40 . In MD simulations, the pure IL exhibits extended apolar regions formed by the cation's alkyl chains and polar regions formed by the cation core and the anion. However, upon the addition of water, the cation and anion showed enhanced spatial correlation, forming contact ion pairs with water filling the cavities between the ions to form cation–water–anion complexes. In these pairings the negative and polar segments of the anion coordinate with the cation core. These theoretical results align with our ROESY data, which indicate that the charged portions of the ions associate intimately.
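The two-step normalization of the cross-peak integrals described above amounts to two divisions; a sketch with hypothetical integral values (the real integrals come from the elliptical integration of the 2D spectrum):

```python
# Normalize a ROESY cross-peak integral for comparison across temperatures
# and between cross peaks. All numbers below are hypothetical placeholders.
raw_integral = 4.2e6      # elliptical integral of cross-peak D-I
reference = 1.4e7         # external DMBS diagonal integral at same temperature
n_hydrogens = 15          # protons contributing to cross-peak D-I

normalized = raw_integral / reference / n_hydrogens
print(f"normalized cross-peak intensity: {normalized:.3e}")
```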
Discussion In summary, we observed a common trend among IL–water LCST mixtures in which the IL forms small aggregates below T c that subsequently increase in size upon heating towards the critical point. Furthermore, this observation was specific to LCST mixtures; i.e., after a minor chemical modification of the anion that produced a fully miscible mixture, the system exhibited long-range order of hundreds of nm, which instead decreased in size upon heating. NMR studies revealed negligible changes in the chemical shifts or rotational mobility of the IL protons over the temperature range encompassing the miscible region. The DLS and NMR data reveal that the ionic liquids formed aggregates in the aqueous phase rather than undergoing major restructuring into fully solvated ions. We further identified order-of-magnitude changes in the concentrations of the residual species in both phases after phase separation (IL in the water-rich phase and water in the IL-rich phase). More significant and unexpected changes were observed in the aqueous phase, which revealed scatterers on the micron scale. Current experimental techniques are unable to elucidate the organizational structure of the aqueous phase. Methods IL synthesis [P 4444 ][DMBS] was prepared by dissolving commercially available tetrabutylphosphonium hydroxide solution (40%) and sodium 2,4-dimethylbenzenesulfonate (slight molar excess) in deionized water and stirring overnight. Note: commercially available tetrabutylphosphonium bromide was initially used, but resulted in a cation molar excess after purification. The dissolved mixture was then rinsed three times with dichloromethane to extract the ionic liquid, which was further purified with three washings of deionized water. The solvent was removed by rotary evaporation at 70 °C and the ionic liquid was further dried in a vacuum oven for at least 48 h at 100 °C. 1 H NMR (500 MHz, Deuterium Oxide) δ 7.78 (d, 1 H), 6.99 (s, 1 H), 6.96 (d, 1 H), 2.60 (s, 3 H), 2.26 (s, 3 H), 2.07–1.98 (m, 8 H), 1.43–1.35 (m, 16 H), 0.89 (t, 12 H). Dynamic light scattering Dynamic light scattering experiments were carried out using an ALV-6010/200 Multiple Tau Digital Correlator with a 632 nm HeNe laser and a 1 cm cuvette containing a U-shaped channel for external liquid flow, enabling temperature-controlled measurements with a water heater–chiller to within 0.05 °C. The flow channel prevented 90° scattering detection, so scattered light was collected in a backscatter geometry with a collection angle of 165°. Scans were collected for 90 s and averaged over multiple data acquisitions. The temperature-dependent viscosity of the solutions was measured with a TA rheometer equipped with a Peltier temperature controller at a 200 rev s −1 shear rate. The temperature-dependent refractive index was measured with an Abbe DR-M2 refractometer at 589 and 680 nm, and extrapolated to 632 nm. (The methods for the viscosity and refractive index measurements are summarized in Supplementary Note 1 ; the results are presented in Supplementary Figs. 1 and 2 , respectively.) Water activity Water activity, a w , of solutions and polymeric matrices was measured via non-contact resistive electrolytic sensor technology on a Novasina LabMaster Standard Water Activity Instrument (range −0.003 to 1.00 a w , with an accuracy of ±0.003 a w ) with Full Temperature Control (0–50 °C, with an accuracy of ±0.3 °C). Molecular dynamics simulation All-atom AMBER force fields for the potential energy U were used in the MD simulation of this system.
$$U_{\mathrm{potential}} = \sum_{i > j} \left[ 4\varepsilon_{ij}\left\{ \left( \frac{\sigma_{ij}}{r_{ij}} \right)^{12} - \left( \frac{\sigma_{ij}}{r_{ij}} \right)^{6} \right\} + \frac{q_i q_j}{4\pi \varepsilon_0 \varepsilon_r r_{ij}} \right] + \sum_{\mathrm{bonds} = \mathrm{water\ OH}} D_r \left[ 1 - e^{-\beta (r - r_0)} \right]^2 + \sum_{\mathrm{bonds} \neq \mathrm{water\ OH}} K_r (r - r_0)^2 + \sum_{\mathrm{angles}} K_\theta (\theta - \theta_0)^2 + \sum_{\mathrm{torsions}} \frac{K_\varphi}{2} \left\{ 1 + \cos(n\phi - \gamma) \right\}$$ (7) The first term describes the non-bonded interactions, including van der Waals interactions in the Lennard-Jones 12-6 form and Coulombic forces from atom-centered partial charges. The remaining terms in the potential energy equation represent, respectively, bond, angle and torsional interactions. A hybrid bond potential was applied: the Morse potential for the OH bond in water and a harmonic potential for all other bonds. The force-field parameters of the atomistic [P 4444 ] cation and [DMBS] anion were developed in previous works 48 , 49 and are summarized in Supplementary Tables 1 – 5 . A flexible water model 50 based on the four-site TIP4P/2005 water model was employed for the water molecules; it reproduces well the dynamics and bulk properties of condensed water. In particular, the flexibility of the OH bond distance and HOH angle enables observation of the structural behavior that carries information about directional bonding, e.g., hydrogen bonding. The VdW interaction parameters between unlike atoms were obtained by the Lorentz–Berthelot combining rule. Non-bonded interactions between atoms separated by exactly three consecutive bonds (1–4 interactions) were reduced by scaling factors 51 , 52 optimized as 0.50 for VdW interactions and 0.83 for electrostatic interactions, respectively. Atomic charges were calculated using a web-based calculator, AtomicChargeCalculator, via the Electronegativity Equalization Method (EEM) 53 . The schematic molecular structures and partial charges of the [P 4444 ] cation, the [DMBS] anion and the water molecule of the flexible TIP4P/2005 water model are presented in Supplementary Fig. 3 . The MD simulation was performed using the LAMMPS package with standard three-dimensional periodic boundary conditions. Non-bonded interactions were cut off at 15 Å, while the Ewald summation method was applied to treat the long-range electrostatic interactions. All simulations were carried out under isothermal–isobaric conditions in the Nose–Hoover NPT ensemble, with thermostat and barostat time coupling constants of 25 and 250 fs, respectively. After an initial relaxation with short time steps and an equilibration with long time steps, additional 12 ns production runs were performed at each temperature with a fixed time step of 0.25 fs. The atomic trajectories were recorded at 1 ps intervals for post-analysis. For Fig. 3 , 80 pairs of [P 4444 ][DMBS] and 1920 water molecules were initially placed without overlap, corresponding to 50.8 wt.% of IL in the IL|water mixture. A total of four temperature cases were run at atmospheric pressure: two below T c (10 and 20 °C) and two above T c (50 and 60 °C). Temperature conditions far from the measured T c were selected because the temperature of the liquid state in an MD simulation fluctuates by up to 10 °C in either direction.
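To make equation (7) concrete, the sketch below evaluates its first three terms for single pairs and bonds. The constants and parameter values are illustrative placeholders, not the force-field values of Supplementary Tables 1 – 5 :

```python
import numpy as np

COULOMB_CONST = 332.0636  # kcal*Å/(mol*e^2); 1/(4*pi*eps0) in common MD units

def lj_coulomb(r, eps, sigma, qi, qj, eps_r=1.0):
    """Non-bonded pair energy: Lennard-Jones 12-6 plus Coulomb (eq. 7, 1st term)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6**2 - sr6) + COULOMB_CONST * qi * qj / (eps_r * r)

def morse_bond(r, d_r, beta, r0):
    """Water O-H bond energy in the flexible water model (eq. 7, 2nd term)."""
    return d_r * (1.0 - np.exp(-beta * (r - r0))) ** 2

def harmonic_bond(r, k_r, r0):
    """Harmonic bond energy for all non-water-OH bonds (eq. 7, 3rd term)."""
    return k_r * (r - r0) ** 2

# Illustrative evaluation with placeholder parameters
print(lj_coulomb(r=5.0, eps=0.2, sigma=3.5, qi=0.8, qj=-0.8))  # kcal/mol
print(morse_bond(r=0.98, d_r=110.0, beta=2.3, r0=0.9572))      # kcal/mol
```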
The MD simulation setup for [P 4444 ][BnzSO 3 ] is explained in Supplementary Note 2 . Nuclear magnetic resonance Variable-temperature 1 H nuclear magnetic resonance (VT-NMR) spectra were obtained with a Bruker Avance 500 MHz magnet equipped with a 5 mm PABBO BB/19F-1H/D Z-GRD Z119470/0057 probe head and airflow temperature control to within 0.1 °C. All samples were given ample time to equilibrate between temperature changes. Experiments were performed in a coaxial tube, with the inner tube containing DMBS-d6 and tetramethylsilane as the lock and reference, and the outer tube containing the sample 54 , 55 . Aqueous mixtures for the NMR experiments used D 2 O with 5–10 wt.% of H 2 O to give an HOD intensity comparable to the IL signal. Spin-lattice relaxation times (t 1 ) were collected using the inversion recovery method (180-τ-90-acq) with 10–12 geometrically spaced delay times, DS = 2, NS = 8, and 4–5 × t 1 between scans. Nuclear Overhauser effects (NOEs) were measured using 2D-T-ROESY (rotating-frame Overhauser spectroscopy) and the roesyphpp.2 pulse sequence with a spectral width of 8 ppm, 300 ms mixing time, 10 s delay time, 1024 × 256 data points, NS = 8, and DS = 16. Spectra were processed with a sine-square 90° apodization and then phase corrected and baseline subtracted in both dimensions. Raman spectroscopy Raman spectra were collected using a Horiba confocal Raman microscope with an Ar-ion laser centered at 488 nm. Laser power levels were verified to be in the linear response regime and kept low to prevent sample degradation. Samples were sandwiched between glass slides to prevent water evaporation, and the temperature was controlled with a stage connected to a temperature-controlled water bath with 0.1 °C sensitivity. Data availability The data supporting the findings of this study are available within the article and its Supplementary Information file or from the corresponding author upon reasonable request.
As populations boom and chronic droughts persist, coastal cities like Carlsbad in Southern California have increasingly turned to ocean desalination to supplement a dwindling fresh water supply. Now scientists at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) investigating how to make desalination less expensive have hit on promising design rules for making so-called "thermally responsive" ionic liquids to separate water from salt. Ionic liquids are liquid salts that bind to water, making them useful in forward osmosis to separate contaminants from water. (See Berkeley Lab Q&A, "Moving Forward on Desalination") Even better are thermally responsive ionic liquids, as they use thermal energy rather than the electricity required by conventional reverse osmosis (RO) desalination for the separation. The new Berkeley Lab study, published recently in the journal Communications Chemistry, examined the chemical structures of several types of ionic liquid/water mixtures to determine what "recipe" would work best. "The current state-of-the-art in RO desalination works very well, but the cost of RO desalination driven by electricity is prohibitive," said Robert Kostecki, co-corresponding author of the study. "Our study shows that the use of low-cost 'free' heat—such as geothermal or solar heat or industrial waste heat generated by machines—combined with thermally responsive ionic liquids could offset a large fraction of costs that goes into current RO desalination technologies that solely rely on electricity." Kostecki, deputy director of the Energy Storage and Distributed Resources (ESDR) Division in Berkeley Lab's Energy Technologies Area, partnered with co-corresponding author Jeff Urban, a staff scientist in Berkeley Lab's Molecular Foundry, to investigate the behavior of ionic liquids in water at the molecular level. Using nuclear magnetic resonance spectroscopy and dynamic light scattering provided by researchers in the ESDR Division, as well as molecular dynamics simulation techniques at the Molecular Foundry, the team made an unexpected finding. It was long thought that an effective ionic liquid separation relied on the overall ratio of an ionic liquid's organic components (the parts that are neither positively nor negatively charged) to its positively charged ions, explained Urban. But the Berkeley Lab team learned that the number of water molecules an ionic liquid can separate from seawater depends on the proximity of its organic components to its positively charged ions. "This result was completely unexpected," Urban said. "With it, we now have rules of design for which atoms in ionic liquids are doing the hard work in desalination." Membrane-based reverse osmosis, a decades-old technology originally developed at UCLA in the 1950s, is experiencing a resurgence—currently there are 11 desalination plants in California, and more have been proposed. Berkeley Lab scientists, through the Water-Energy Resilience Research Institute, are pursuing a range of technologies for improving the reliability of the U.S. water system, including advanced water treatment technologies such as desalination.
Because forward osmosis uses heat instead of electricity, the thermal energy can be provided by renewable sources such as geothermal and solar, or by industrial low-grade heat. "Our study is an important step toward lowering the cost of desalination," added Kostecki. "It's also a great example of what's possible in the national lab system, where interdisciplinary collaborations between the basic sciences and applied sciences can lead to creative solutions to hard problems benefiting generations to come." Also contributing to the study were researchers from UC Berkeley and Idaho National Laboratory. The Molecular Foundry is a DOE Office of Science User Facility that specializes in nanoscale science. This work was supported by the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy.
10.1038/s42004-019-0151-2
Medicine
Memory loss from West Nile virus may be preventable
Charise Garber et al. Astrocytes decrease adult neurogenesis during virus-induced memory dysfunction via IL-1, Nature Immunology (2017). DOI: 10.1038/s41590-017-0021-y Journal information: Nature Immunology
http://dx.doi.org/10.1038/s41590-017-0021-y
https://medicalxpress.com/news/2018-01-memory-loss-west-nile-virus.html
Abstract Memory impairment following West Nile virus neuroinvasive disease (WNND) is associated with loss of hippocampal synapses with lack of recovery. Adult neurogenesis and synaptogenesis are fundamental features of hippocampal repair, which suggests that viruses affect these processes. Here, in an established model of WNND-induced cognitive dysfunction, transcriptional profiling revealed alterations in the expression of genes encoding molecules that limit adult neurogenesis, including interleukin 1 (IL-1). Mice that had recovered from WNND exhibited fewer neuroblasts and increased astrogenesis without recovery of hippocampal neurogenesis at 30 d. Analysis of cytokine production in microglia and astrocytes isolated ex vivo revealed that the latter were the predominant source of IL-1. Mice deficient in the IL-1 receptor IL-1R1 and that had recovered from WNND exhibited normal neurogenesis, recovery of presynaptic termini and resistance to spatial learning defects, the last of which likewise occurred after treatment with an IL-1R1 antagonist. Thus, ‘preferential’ generation of proinflammatory astrocytes impaired the homeostasis of neuronal progenitor cells via expression of IL-1; this might underlie the long-term cognitive consequences of WNND but also provides a therapeutic target. Main Members of the Flavivirus genus, which include West Nile virus (WNV), Japanese encephalitis virus and Zika virus, are the most important arthropod-borne viruses that cause encephalitis in humans 1 . Acutely, patients suffering from WNV neuroinvasive disease (WNND) can experience confusion, fatigue, loss of motor control, memory loss and coma, and acute WNND has a mortality rate of 5–10% (ref. 1 ). WNV is a (+)-sense single-stranded RNA virus that targets fully differentiated neurons but can be cleared by immune-system-mediated processes, even after infection of the central nervous system (CNS) 2 . However, approximately half of the survivors of WNND experience debilitating, long-term cognitive sequelae, including defects in verbal and visuospatial learning, for months to years beyond the acute infectious event 3 , 4 . Animal studies have identified multiple cytokines with critical roles in cell-mediated antiviral immunity, including tumor-necrosis factor (TNF) 5 , type I, II and III interferons 6 , 7 , 8 , 9 and interleukin 1 (IL-1) 10 , 11 , that improve survival. Published studies have determined that human and mouse neurons are the target of WNV in vivo 12 , 13 , 14 . Notably, studies in which critical cytokines have been deleted via genetic approaches have not led to expanded tropism of WNV to non-neuronal cells within the CNS 10 , 15 . While neuronal death is associated with high mortality of WNV encephalitis in humans and mice 16 , survivors may exhibit limited neuronal loss 14 , 17 , which suggests that inflammatory processes triggered acutely contribute to long-term memory dysfunction. The fact that many patients recovering from WNND experience memory impairments for months to years beyond viral clearance indeed suggests a chronic condition with either sustained damage or limited repair. 
In a mouse model of recovery from WNND in which intracranial inoculation of an attenuated mutant WNV (WNV-NS5-E218A) leads to high survival rates with visuospatial learning defects, hippocampi exhibit upregulation of genes encoding molecules involved in microglia-mediated synaptic remodeling, including drivers of phagocytosis and the classical complement pathway, and decreased expression of genes encoding synaptic scaffolding proteins and glutamate receptors 14 . Complement-mediated elimination of synapses has been reported to occur in numerous neuroinflammatory diseases, including multiple sclerosis 18 , Alzheimer’s disease 19 and schizophrenia 20 , which suggests that this might be a general mechanism underlying inflammation-associated disruption of neural circuitry. The hippocampus, which is essential for spatial and contextual memory formation, receives input from the entorhinal cortex, which relays through the dentate gyrus (DG) and regions CA3 and CA1 21 . Mice that have recovered from WNV-NS5-E218A infection with poor spatial learning show persistence of phagocytic microglia engulfing presynaptic terminals within hippocampal region CA3 both acutely and during recovery 14 . While this provides a molecular explanation for the poor spatial learning in mice that have recovered from WNV infection, it does not explain why other hippocampal correlates of learning, such as adult neurogenesis, are not able to restore spatial learning. Adult neurogenesis occurs within the hippocampal DG and the subventricular zone (SVZ) 22 . Within the DG, adult neural stem cells give rise to astrocytes and intermediate neuronal progenitor cells, the latter of which proliferate and differentiate into neuroblasts that mature into granule-cell neurons and integrate into the hippocampal circuit over the course of a few weeks 23 . This process is regulated by intrinsic and extrinsic factors, including local signaling molecules, exercise, aging and inflammation 24 . A variety of endogenous factors have critical roles in the generation and integration of newly generated neurons in the adult hippocampus. These include morphogens, such as neurogenic Notch proteins, Shh, Wnts and BMPs, and neurotrophic factors, such as BDNF, CNTF, IGF-1 and VEGF 25 . Proinflammatory pathways, including those triggered by systemic accumulation of TNF, IL-1β and IL-6, and microglial activation have been linked to the regulation of neural correlates of memory, including adult neurogenesis, synaptic plasticity and modulation of long-term potentiation 14 , 26 , 27 , 28 , 29 . IL-1, in particular, has gained attention for its effect on cognitive function in the context of neuroinflammation. IL-1 signaling is mediated by a family of proteins comprising IL-1α, the anti-neurogenic cytokine IL-1β and IL-1 receptor antagonist, mainly through the type I IL-1 receptor (IL-1R1). IL-1β is generated via proteolytic cleavage of pro-IL-1β by caspase-1 during inflammasome activation 30 . IL-1 has high expression in vivo by infiltrating myeloid cells during WNV encephalitis, during which it critically regulates antiviral effector T cell responses 10 , 11 . While IL-1 is a key player in the orchestration of CNS immune responses, including the onset of fever 27 , it also has a role in spatial learning and memory-related behavior 28 . Indeed, injection of IL-1β into the brain impairs spatial learning, contextual fear memory and adult neurogenesis 29 , 30 , 31 . 
Although several studies have investigated the effects of IL-1 on hippocampus-based learning and behavior in neurologic diseases 31 , 32 , 33 , 34 , 35 , none have done so in the setting of IL-1 induction during viral infection of the CNS. Here, using an established model of post-infectious cognitive dysfunction from WNND in which mice display defects in spatial learning, we investigated the regulation of neurogenesis during repair and recovery. We identified a novel feed-forward mechanism in which IL-1 contributed to spatial learning defects via derailment of hippocampal neurogenesis to generate proinflammatory astrocytes, which revealed a previously unidentified role in IL-1-mediated cognitive dysfunction. Our results indicate that this pathway might be successfully targeted for the prevention of learning defects during recovery from viral encephalitis. Results Mice exhibit genetic signatures of derailed neurogenesis after recovery from WNND A published microarray study of hippocampal gene expression in mice that had recovered from infection with WNV-NS5-E218A identified several pathways significantly altered in mice with impaired spatial memory, including the pathways of axon guidance, Wnt signaling and p53 signaling 14 , indicative of potential effects on adult neurogenesis. To further investigate these pathways, we assessed the genes with altered expression in mock-infected mice relative to their expression in mice that had recovered from infection with WNV-NS5-E218A and grouped the genes into those encoding molecules that promote neurogenesis or inhibit neurogenesis (Fig. 1a,b ). The expression of genes encoding molecules associated with inflammation and/or prohibition of neurogenesis ( Casp1 , Il1a , Tnf and Tnfrsf1a ) was higher in mice that had recovered from infection with WNV-NS5-E218A than in mock-infected mice (Fig. 1a ), which exhibited higher expression of genes encoding molecules that promote the proliferation and differentiation of neuroblasts ( Epha5 , Wnt2 and Nrg3 ) and axon guidance ( Robo2 and Sema3 ) (Fig. 1b ). The expression of genes encoding markers for the A2 subgroup and A1 subgroup of reactive astrocytes 36 , the latter of which are proposed to be induced by activated microglia and lose the ability to promote neuronal survival and outgrowth, was also significantly higher in mice that had recovered from infection with WNV-NS5-E218A than in mock-infected mice (Fig. 1c,d ). Alterations in the expression of genes encoding molecules that affect neurogenesis after infection with WNV-NS5-E218A and markers of reactive astrocytes, including additional pan-reactive astrocyte markers 36 , were validated by qPCR analysis of an independent set of hippocampal samples (Fig. 1e–h ). These data suggested that WNND might limit adult neurogenesis in favor of astrogenesis. Fig. 1: Transcripts of genes encoding molecules that affect neurogenesis and markers of proinflammatory astrocytes are altered in mice that had recovered from WNV infection. a – d , Microarray analysis of hippocampal RNA collected at 25 d.p.i. from mock-infected mice and mice recovering from infection with WNV-NS5-E218A (WNV-E218A) (above plots; one mouse per column), showing genes with expression altered by infection (left margin), presented as z -score-normalized relative expression: black font, significant change in expression ( P < 0.05); green font, change in expression trending toward significance (0.05 < P < 0.1) (both two-tailed Student's t -test).
e – h , qPCR analysis of hippocampal RNA collected at 25 d.p.i. from mock-infected mice and WNV-NS5-E218A-infected, recovering mice (independent of the samples in a – d ), assessing panels of gene-expression markers that include genes encoding molecules that are pro- and anti-neurogenic ( e ), A1 astrocyte markers ( f ), A2 astrocyte markers ( g ) and pan-reactive astrocyte markers ( h ), to validate the microarray results in a – d . Each symbol ( e – h ) represents an individual mouse; small horizontal lines indicate the mean (± s.e.m.). * P < 0.05, ** P < 0.01 and *** P < 0.001 (two-tailed Student’s t -test). Data are pooled from two independent experiments with four to ten mice per condition in each. Full size image WNND induces acute loss of adult neurogenesis in the hippocampus Given that the genetic signatures in the hippocampi of mice that had recovered from WNV infection were consistent with alterations to pathways that negatively affect adult neurogenesis, we evaluated the generation of new neurons during WNV encephalitis. We administered the thymidine analog BrdU to mice during the peak of WNV encephalitis for a period of 4 d (days 3–6), allowed mice to recover for 45 d and then evaluated the number of newly generated neurons (Fig. 2a–d ). Mice that had recovered from WNV infection exhibited fewer BrdU-labeled neurons within the DG granule cell layer than did mock-infected mice (Fig. 2a,c,d ). To determine whether the reduction in newly generated neurons in mice that had recovered from WNV infection was due to an alteration in the rate at which neuronal progenitor cells proliferated, we administered BrdU to mock-infected mice or mice infected with wild-type WNV (strain NY-99) or WNV-NS5-E218A during the peak of encephalitis, followed by evaluation of neuronal progenitor cells, which express doublecortin (DCX + ), in the SVZ or hippocampus, as described previously 37 , either by immunohistochemistry (Fig. 3a,b ) or flow cytometry (Supplementary Fig. 1 ). Infection with wild-type WNV (strain NY-99) via either the peripheral footpad or an intracranial route, or infection with the attenuated mutant WNV-NS5-E218A, led to fewer BrdU-labeled neuroblasts at 6–8 d post infection (d.p.i.) (Fig. 3c ). That reduction in the generation of neuroblasts was assessed over the course of recovery from WNND, at 6, 15 and 30 d.p.i., relative to that in age-matched mock-infected mice. This analysis revealed that significant deficits persisted until day 30 in the hippocampus, with a trend toward recovery observed in the SVZ (Fig. 3d ). Together these data indicated that hippocampal neuronal repair was defective after WNND. Fig. 2: Fewer new neurons are born within the DG during recovery from infection with WNV-NS5-E218A. a , Microscopy of the DG of mock- and WNV-NS5-E218A-infected mice (above images) at 45 d.p.i., assessed after in vivo BrdU labeling during acute infection, showing staining of BrdU (red) and the neuron-differentiation marker NeuN (green) and the DNA-binding dye DAPI (blue). Scale bars, 50 μm. b , Experimental design: mice were given BrdU (50 mg per kg body weight) by intraperitoneal injection every 12 h for 3.5 d (seven injections) beginning at 3 d.p.i. (red lines above bar), followed by immunohistochemical analysis at 45 d.p.i. (time, below diagram). c , d , Quantification of BrdU + NeuN + cells per section of DG as in a ( c ), and percent BrdU + NeuN + cells, normalized to the total number of BrdU + cells. 
Each symbol ( c , d ) represents an individual mouse; small horizontal lines indicate the mean (± s.e.m.). * P < 0.05 and *** P < 0.001 (two-tailed Student's t -test). Data are representative of one experiment. Full size image Fig. 3: Deficits in adult neurogenesis during WNV infection. a , Experimental design: mock-, WNV NY-99– or WNV-NS5-E218A-infected mice were given intraperitoneal injection of BrdU (100 mg per kg body weight) at 24 and 48 h (red lines above) before harvest at 6, 15 or 30 d.p.i. (top left corners). b , Proliferation of neuroblasts isolated from the SVZ or hippocampus (HP) (left margin) of mock-infected mice or mice infected intracranially with WNV NY-99 or WNV-NS5-E218A (above plots), measured by flow cytometry as incorporation of BrdU, after gating on DCX + cells (Supplementary Fig. 1 ). Numbers above bracketed lines indicate percent BrdU + (proliferating) cells. c , Quantification of proliferating neuroblasts in the hippocampus (left) and SVZ (right) of mice at 6, 15 and 30 d.p.i., after in vivo BrdU labeling as in a , presented as the frequency of DCX + cells labeled with BrdU, normalized to that of age-matched mock-infected mice (to compensate for age-related alterations in neurogenesis). d , Microscopy (left) and quantification (right) of immunostaining for proliferating neuroblasts in the SVZ and hippocampus of mice left uninfected (UI) or after footpad infection with WNV NY-99; quantification (right) presented as in c . Scale bars (left), 10 μm. Each symbol ( c , d ) represents an individual mouse; small horizontal lines indicate the mean (± s.e.m. in c ). * P < 0.05, ** P < 0.005 and *** P < 0.001 (two-tailed Student's t -test). Data are representative of two independent experiments ( b ), are pooled from two independent experiments with three to five mice per condition per time point ( c ) or are representative of one experiment with five mice ( d ). Full size image WNV does not target neural stem cells or neuronal progenitor cells One possible explanation for the finding of fewer new neurons after the recovery period of WNV encephalitis would be that the neuroblasts were dying before they reached maturity. In agreement with published studies 36 , we confirmed that neural stem cells and intermediate neuronal progenitor cells were not permissive to infection with WNV, and we observed that less than 1% of DCX + neuroblasts were infected in vivo in both the SVZ and DG (Supplementary Fig. 2a ). Furthermore, the rate of neuroblast apoptosis during acute WNND was equivalent to that in mock-infected mice (data not shown). The few infected neuroblasts that we observed within the DG were located within the granule cell layer, which suggested that these might have been late-stage DCX + cells during their transition into immature neurons. Late-stage neuroblasts exhibit many of the same properties as neurons, including receptors, filament proteins and cellular processes (for example, axon and dendrite formation), which would potentially explain the ability of WNV to infect them. Alterations in the proliferation rate of neural progenitor cells could result in changes to the overall pool of stem cells over time. Using mice expressing the gene encoding green fluorescent protein (GFP) under control of the Nestin promoter, we performed immunohistochemistry for GFP together with the neural stem cell and astrocyte marker GFAP and counted neural stem cells (double positive for Nestin-GFP and GFAP) that remained within the hippocampal DG at 45 d.p.i.
but found no difference in the number of these cells in mock-infected mice versus that in mice that had recovered from infection with WNV-NS5-E218A (Supplementary Fig. 2b ). Thus, WNV did not target or alter the number of neural stem cells, nor did it infect neuroblasts. Astrocytes are the main source of IL-1β in the recovering CNS To further investigate the potential for an alteration in the fate of early-stage progenitor cells, we determined whether fewer neuronal progenitor cells were produced in favor of more glial progenitor cells within the hippocampus of WNV-infected mice. To test this, we administered BrdU to mock-, WNV NY-99– or WNV-NS5-E218A-infected mice during the peak of encephalitis, followed by the evaluation of hippocampal neuronal progenitor cells and astrocytes 48 h later by flow cytometry (Fig. 4a,b ). Infection with either wild-type WNV (NY-99) or WNV-NS5-E218A led to more BrdU-labeled GFAP-expressing astrocytes in the hippocampus at 7 d.p.i. than in that of mock-infected mice (Fig. 4b ). Although GFAP expression varies among astrocyte subpopulations, published work has demonstrated that GFAP is the transcript and protein with the highest expression in astrocytes isolated from the hippocampus 38 . Next we determined whether an increase in astrocyte genesis during acute infection significantly contributed to alterations in the immunological profile of the CNS after viral clearance. Analysis of astrocytes isolated ex vivo at 25 d.p.i. demonstrated an increase in the expression of genes encoding markers of A1 reactive astrocytes but not in the expression of those encoding markers of A2 reactive astrocytes (Fig. 4c ). Thus, the slight increase in the expression of genes encoding neuroprotective markers previously detected in whole hippocampal samples at the same time point (Fig. 1c–e ) was probably a contribution of other cell types. Analysis of the expression of genes encoding pan-reactive markers 36 also detected increased expression of the gene encoding GFAP (Fig. 4c , bottom). Identification of the cellular sources of anti-neurogenic and proinflammatory cytokines in the CNS via transcriptional analysis of astrocytes and microglia isolated ex vivo at 25 d.p.i. revealed that astrocytes were the main source of IL-1β, caspase-1 and TNF (Fig. 4d ). The purity of those cellular sources was confirmed by transcriptional analysis of cell-type-specific markers, as previously reported 39 . Isolated astrocytes demonstrated increased expression of Gfap , while isolated microglia showed increased expression of Cx3cr1 and Trem2 . Both ACSA-2 + astrocytes and CD11b + microglia had negligible expression of Rbfox3 , which has high expression in neurons (Supplementary Fig. 3 ). Kinetic analysis of Il1b expression within the hippocampi of WNV-NS5-E218A-infected mice showed persistent elevation of its expression at 25 d.p.i. (Fig. 4e ). Immunohistochemical analysis of mock- and WNV-NS5-E218A-infected tissue at 25 d.p.i. (Fig. 4f ) confirmed an increase in the number of activated GFAP + astrocytes in WNV-NS5-E218A-infected tissue (Fig. 4g ) that were the main cellular source of IL-1β protein expression in the recovering hippocampus (Fig. 4h,i ). Together these data indicate that WNND promoted the generation of A1 reactive astrocytes that expressed anti-neurogenic cytokines. Fig. 4: More astrocytes are born within the hippocampus during acute WNV encephalitis, and they adopt a proinflammatory phenotype and express IL-1β.
a , b , Quantification of BrdU + DCX + cells (indicative of neurogenesis; a ) and BrdU + GFAP + cells (indicative of astrogenesis; b ) isolated from the dissected hippocampi of mock-, WNV NY-99– or WNV-NS5-E218A-infected mice (horizontal axis) given intraperitoneal injection of BrdU at 24 and 48 h before harvest at 6 d.p.i., followed by staining of cells for flow cytometry; results are normalized to those of mock-infected mice, set as 100%. c , qPCR analysis of genes encoding A1 astrocyte markers (top) or A2 and pan-reactive astrocyte markers (bottom) in ex vivo–isolated astrocytes from whole brains of mock- or WNV-NS5-E218A-infected mice (key). d , qPCR analysis of cytokine-encoding genes in ex vivo–isolated microglia and astrocytes (key) from the whole brain of WNV-NS5-E218A-infected mice; results are normalized to those of their mock-infected counterparts. e , qPCR analysis of Il1b among RNA isolated from hippocampal tissue collected from mock- or WNV-NS5-E218A-infected mice (key) at 7, 25 and 52 d.p.i. (horizontal axis). f – i , Immunostaining for IL-1β and GFAP in the hippocampus of mock- or WNV-NS5-E218A-infected mice at 25 d.p.i., presented as microscopy ( f ) and percent GFAP + area ( g ), IL-1β + area ( h ) or IL-1β + GFAP + area, normalized to the total IL-1β + area, indicative of colocalization ( i ). Scale bars ( f ), 50 μm. Each symbol ( a – e , g – i ) represents an individual mouse; small horizontal lines ( c – e ) indicate the mean (± s.e.m.). * P < 0.05, ** P < 0.005 and *** P < 0.001 (two-tailed Student's t -test). Data are representative of one experiment ( a , b , e – i ; mean ± s.e.m. in a , b , g – i ) or are pooled from three independent experiments with at least three mice per group ( c , d ). Full size image Il1r1 −/− mice resist alterations to the fate of early progenitor cells To determine whether the WNV-mediated reduction in neuroblast proliferation requires IL-1R1 signaling, we administered BrdU to mock- and WNV-NS5-E218A-infected Il1r1 −/− mice during the peak of encephalitis, followed by evaluation of hippocampal neuronal progenitor cells by flow cytometry. In contrast to the reduction in neuroblast proliferation observed in neurogenic zones of wild-type mice, Il1r1 −/− mice exhibited normal neurogenesis in both the hippocampus (Fig. 5a ) and SVZ (Fig. 5b ). In addition, Il1r1 −/− mice did not undergo the substantial increase in proliferating astrocytes observed in wild-type mice (Fig. 5c ). Immunohistochemical identification of proliferating (Ki67 + ) neural progenitor cells in conjunction with staining for Mash1, which identifies early neural progenitor cells with neurogenic potential 40 , within the DG revealed acute WNV-mediated decreases in these parameters in wild-type mice but not in mice deficient in IL-1R1 ( Il1r1 −/− ) or caspase-1 ( Casp1 −/− ), which cleaves pro-IL-1β to produce mature IL-1β (Fig. 5d ). Acute viral loads in various brain regions of wild-type mice did not differ from those in Il1r1 −/− mice at 6 d.p.i. (Supplementary Fig. 4a-c ), and there was no difference in persistent viral RNA at 25 d.p.i. in the hippocampus (Supplementary Fig. 4d ), which confirmed that control of the virus WNV-NS5-E218A in Il1r1 −/− mice was similar to that in wild-type mice. In addition, flow cytometry of cells isolated from the hippocampus demonstrated similar numbers of various subpopulations of resident and infiltrating CD45 + cells in WNV-NS5-E218A-infected wild-type mice and their Il1r1 −/− counterparts (Supplementary Fig. 4e-l ).
These data supported the notion that IL-1R1 signaling underlies derailment of neurogenesis during WNND. Fig. 5: Il1r1 −/− mice resist WNV-mediated alterations in neuroblast proliferation and recover synapses earlier than do wild-type mice. a , b , Quantification (by flow cytometry) of BrdU + DCX + cells in the hippocampus ( a ) and SVZ ( b ) of mock- or WNV-NS5-E218A-infected wild-type (WT) and Il1r1 −/− mice (horizontal axis) at 6 d.p.i., following in vivo BrdU labeling at 24 and 48 h before harvest, assessing alterations in neurogenesis; results are normalized to those of age-matched mock-infected mice, set as 100%. c , Quantification (by flow cytometry) of BrdU + GFAP + cells in the hippocampus of mock- or WNV-NS5-E218A-infected Il1r1 −/− mice (horizontal axis) at 6 d.p.i., following BrdU labeling as in a , b , assessing alterations in astrogenesis (normalized as in a , b ). d , Quantification (by immunohistochemistry) of Mash1 + Ki67 + cells (per mm DG) in mock- or WNV-NS5-E218A-infected (key) wild-type, Il1r1 −/− and Casp1 −/− mice (horizontal axis) at 6 d.p.i., assessing hippocampal neurogenesis. e , f , Microscopy (left) and quantification (right) of the immunostaining of synapses in mock- or WNV-NS5-E218A-infected wild-type and Il1r1 −/− mice at 7 d.p.i. ( e ) and 25 d.p.i. ( f ), showing staining for synaptophysin (red) and DAPI (blue) (left) or synaptophysin-positive area in the CA3 of the hippocampus (right), normalized as in a , b . Scale bars (left), 10 μm. Each symbol represents an individual mouse. * P < 0.05, ** P < 0.005 and *** P < 0.001 (two-tailed Student's t -test). Data are pooled from two independent experiments ( a , b ) or one experiment with three to five mice per group ( c – f ) (mean ± s.e.m. throughout). Full size image Il1r1 −/− mice exhibit synapse recovery and normal spatial learning Given that Il1r1 −/− mice were protected from derailed neurogenesis following infection with WNV, we hypothesized that IL-1R1-deficient mice would exhibit improved synapse recovery following infection with WNV-NS5-E218A. Both WNV-NS5-E218A-infected wild-type mice and their Il1r1 −/− counterparts exhibited acute loss of presynaptic terminals at 7 d.p.i. (Fig. 5e ), and wild-type mice continued to show decreased numbers of presynaptic terminals at 25 d.p.i. (Fig. 5f ). In contrast, Il1r1 −/− mice displayed recovery of synapses at this time point (Fig. 5f ). Given that adult neurogenesis and synaptic plasticity are critical for spatial learning, we allowed WNV-NS5-E218A-infected Il1r1 −/− mice to recover for a month beyond viral clearance (46 d.p.i.) and assessed their ability to spatially locate and remember the location of a target hole in a Barnes maze over the course of ten trials held twice daily for 5 d (Fig. 6a ). While wild-type mice that had recovered from WNV infection displayed significant deficits in spatial learning 14 (Fig. 6b ), the performance of Il1r1 −/− mice that had recovered from WNV infection was indistinguishable from that of mock-infected mice (Fig. 6c ). Analysis of the area under the curve for each mouse, which provides a comprehensive view of how individual mice within each group performed across all 5 d of testing on the Barnes maze, demonstrated that wild-type mice recovering from WNND had more severe memory impairments than mock-infected wild-type mice or Il1r1 −/− mice that had recovered from WNND (Fig. 6d ).
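The per-mouse area-under-the-curve summary used in Fig. 6d can be reproduced with a trapezoidal integral of average errors per trial across the 5 testing days; a sketch with made-up values, not data from the study:

```python
import numpy as np

# Hypothetical average errors per trial for one mouse on Barnes maze days 1-5
days = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
errors = np.array([14.0, 11.5, 9.0, 8.2, 7.5])

auc = np.trapz(errors, days)  # larger AUC = more errors = worse spatial learning
print(f"Barnes maze AUC: {auc:.1f}")
```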
To assess differences in exploratory behavior that might affect performance in the Barnes maze spatial learning task, we performed open field testing (OFT) at 45 d.p.i., 1 d before the Barnes maze test (Fig. 6a ). While the number of lines crossed during OFT was similar for mock-infected mice and WNV-infected mice of both genotypes, mock-infected Il1r1 −/− mice crossed fewer lines than did mock-infected wild-type mice (Fig. 6e ). No differences among the groups were observed in the number of center crosses (Fig. 6f ). These studies demonstrated a critical role for IL-1R1 in limiting synapse and cognitive recovery. Fig. 6: WNV-NS5-E218A-infected Il1r1 –/– mice are protected from virus-induced spatial learning deficits in the Barnes maze behavior task. a , Experimental design: mock- or WNV-NS5-E218A-infected wild-type and Il1r1 −/− mice underwent behavioral testing beginning with OFT at 45 d.p.i., followed by 5 consecutive days of Barnes maze testing. b , c , Barnes maze performance of mock- or WNV-NS5-E218A-infected (key) wild-type mice ( b ) and Il1r1 −/− mice ( c ), presented as average errors per trial of each group on each day of testing (horizontal axis). d , Barnes maze performance of mock- or WNV-NS5-E218A-infected wild-type and Il1r1 −/− mice (horizontal axis), quantified as area under the curve for each individual mouse in each group. e , f , OFT of mice as in d , presented as the number of lines crossed in 5 min of testing for each mouse ( e ) or the number of times a mouse crossed the center of the arena in 5 min of testing, for each mouse ( f ). Each symbol ( d – f ) represents an individual mouse; small horizontal lines ( d ) represent the mean (± s.e.m.). NS, not significant ( P > 0.05); * P < 0.05 and ** P < 0.005 (two-way ANOVA (for effect of WNV in b – d ) or one-way ANOVA with Bonferroni's multiple comparisons test (to compare groups in d – f )). Data are pooled composites of three independent experiments with at least four mice per group (mean ± s.e.m. in b , c , e , f ). Full size image IL-1R antagonism prevents WNV-induced spatial learning deficits To explore the potential for therapeutic intervention, we assessed cognitive recovery in wild-type mice treated, during acute infection, with either vehicle or the IL-1R antagonist anakinra. As synapse elimination 14 and loss of neurogenesis occur during acute infection, which is also when IL-1 expression is highest, we began treatment at 10 d.p.i., a time point during viral clearance 14 when the blood–brain barrier was still permeable (Supplementary Fig. 5 ). Mock- or WNV-NS5-E218A-infected mice received five daily intraperitoneal doses of vehicle or anakinra, and then recovered for 30 d before Barnes maze testing at 46 d.p.i. (Fig. 7a ). As expected, vehicle-treated mice that had recovered from WNND showed significant impairment in their ability to identify the location of the target hole during Barnes maze testing (Fig. 7b ). In contrast, mice treated with anakinra were protected from WNV-induced impairment in spatial learning (Fig. 7c ). Analysis of the area under the curve confirmed that mice that had recovered from WNND and were treated with anakinra had better performance on the Barnes maze than vehicle-treated mice (Fig. 7d ). Finally, OFT at 45 d.p.i. demonstrated that there were no differences between any of the groups in exploratory behavior, locomotion or number of center crosses (Fig. 7f ).
These studies identified a potential therapeutic target for the prevention of the spatial learning defects that occur during recovery from WNND. Fig. 7: WNV-NS5-E218A-infected mice treated with anakinra are protected from virus-induced spatial learning deficits on the Barnes maze behavior task. a, Experimental design: mock- or WNV-NS5-E218A-infected wild-type mice were treated with five consecutive doses of anakinra (100 mg per kg body weight per day) or vehicle by intraperitoneal injection beginning at 10 d.p.i. (top row), then were allowed to recover for 30 d after treatment before behavioral testing, beginning with OFT at 45 d.p.i. and followed by 5 consecutive days of Barnes maze testing (bottom row). b, c, Barnes maze performance of mock- or WNV-NS5-E218A-infected wild-type mice (key) treated with vehicle (b) or anakinra (c), presented as average errors per trial of each group on each day of testing (horizontal axis). d, Barnes maze performance of mock- or WNV-NS5-E218A-infected wild-type mice treated with vehicle or anakinra (horizontal axis), quantified as the area under the curve for each individual mouse in each group. e, f, OFT of mice as in d, presented as the number of lines crossed in 5 min of testing for each mouse (e) or the number of times a mouse crossed the center of the arena in 5 min of testing, for each mouse (f). Each symbol (d–f) represents an individual mouse; small horizontal lines (d) represent the mean (± s.e.m.). NS, not significant (P > 0.05); *P < 0.05 and **P < 0.005 (two-way ANOVA (for effect of WNV in b–d) or one-way ANOVA with Bonferroni’s multiple comparisons test (to compare groups in d–f)). Data are pooled composites of three independent experiments with at least three mice per group (mean ± s.e.m. in b, c, e, f). Full size image Discussion The original designation of cytokines as immunological modulators has expanded to include a variety of functions in non-lymphoid tissues, especially the CNS, where they have critical roles in neurodevelopment. Regulation of signaling by the gp130 family of cytokines through Jak–STAT pathways maintains the pool of neural stem and progenitor cells (NPCs) and thereby promotes neurogenesis 41. During development, IL-1R1 is also expressed on proliferating NPCs, and IL-1β exerts an anti-proliferative, anti-neurogenic and pro-gliogenic effect on embryonic hippocampal NPCs in vitro 42, 43. Studies of viral encephalitis have demonstrated critical roles for innate cytokines expressed by infiltrating immune cells in T cell–mediated clearance of the pathogen. However, their effect on recovery and repair of the CNS after viral clearance, via actions on NPCs, has not been elucidated. Our findings suggest that during the acute phase of viral infection, myeloid cell–derived IL-1 10 alters the proliferation and differentiation fates of neural progenitor cells, which leads to a shift from neurogenesis to astrogenesis. Proinflammatory astrocytes then become the predominant source of the cytokine after myeloid cells retreat from the CNS 14, and it continues to inhibit neurogenesis. Accordingly, mice deficient in IL-1R signaling were resistant to both the derailment of neurogenesis and the spatial memory impairments observed in wild-type mice that had recovered from WNV infection.
Because loss of neurogenesis was detected early in the course of infection, administration of the IL-1R antagonist at 10 d.p.i., a time point at which the blood–brain barrier was still permeable, was able to reverse the effects of IL-1 and improve spatial learning. Our data indicate that the combined effect of synapse loss 14 and reduced neurogenesis can negatively affect hippocampal spatial learning and memory long beyond the initial episode of infection, via a shift in the sources of cytokines to neural cells. Other neurotropic flaviviruses, including Japanese encephalitis virus and Zika virus, have been shown to directly infect neural progenitor cells and cause apoptosis of progenitor cells and their progeny 44, 45. In agreement with published studies 46, we found very few neural progenitor cells infected with WNV. However, transcriptional profiling detected altered expression of genes encoding molecules linked to hippocampal neurogenesis during recovery from WNND. These included increased expression of the genes encoding CDKN1a, CDCA4 and CCND1, which are cell-cycle-progression inhibitors, and decreased expression of the genes that encode Epha5 and Sema6B, which regulate hippocampal axon pathfinding during neural development 47, 48. We also detected alterations to the Wnt signaling pathway, which can affect the proliferation and motility of neural progenitor cells but also regulates synapse formation 49 and plasticity 50; this might contribute to the failure of mice to recover from both the derailment of neurogenesis and WNV-mediated synapse loss 14. The genetic signatures present during recovery from WNV infection indicate an environment that impedes both the proliferation of neural progenitor cells and the ability of immature and mature neurons to form new synapses. While published studies have detected IL-1R1 within the hippocampus 51, subsequent research highlighting astrocyte heterogeneity has not identified astrocytes as a primary target of IL-1 52. In contrast, numerous studies have indicated that neural progenitor cells are a significant target of IL-1. IL-1α and IL-1β have both been shown to induce neural stem cells and progenitor cells to favor the astrocyte lineage rather than the neuronal lineage in vitro 42. TNF, which is highly expressed in the WNV-infected brain 5 in an IL-1R-dependent manner 10, has been shown in other contexts to direct neuronal progenitor cells toward an astrocyte fate via downstream induction of the transcription factor STAT3 53, which suggests that these might be part of the same anti-neurogenic pathway. In vivo studies using overexpression and stress models have shown that IL-1β decreases neurogenesis 43, 54, 55 and influences synaptic plasticity 56, 57, processes that are vital for the development and retention of spatial memory. We found that the abundance of IL-1β during acute infection altered the lineage fate of neural stem cells within neurogenic zones of the CNS in favor of astrocyte genesis. This led to a feed-forward cycle of inflammation, as astrocytes became the main source of IL-1β in the recovering hippocampus. IL-1R1-deficient mice were protected from decreased neurogenesis following infection with WNV and retained the ability to learn a spatial memory task. Notably, we demonstrated that the effect was specific to inflammasome-activated IL-1β by confirming that Casp1−/− mice were also protected from decreased neurogenesis.
In studies of cultured NPCs, IL-1β and TNF were both able to alter lineage fate to favor greater numbers of astrocytes, and both of these pathways were dependent on downstream activation of STAT3 58. While we focused on the role and necessity of IL-1R signaling, TNF, which was also upregulated by astrocytes at 25 d.p.i., might also contribute to the lineage-fate-altering phenotype observed. While multiple studies have described the effects of cytokines released from astrocytes 59 and microglia 60 on memory and hippocampal neurogenesis, the precise contributions of each cell type in vivo remain ill-defined. In the current study, we found that astrocytes within the CNS of mice that had recovered from WNV infection had increased expression of Il1b, Casp1 and Tnf, whereas their microglia had increased expression of Ccl2. Astrocytes display regional heterogeneity 38, 52, which can reflect differences in homeostatic function, such as synaptogenesis 52, and in the response to infection, during which cerebellar astrocytes are poised to quickly mount antiviral programs 61. In addition to developmentally determined sub-populations 62, 63, astrocytes display remarkable plasticity, with a molecular identity determined in part by neuronal cues 64. In response to different types of injury, reactive astrocytes can develop the polarized A1 phenotype (infection-induced, proinflammatory) or the A2 phenotype (ischemia-induced, promoting tissue repair), which are distinguished by distinct genetic signatures 36. In that study, A1 astrocytes were induced in vivo by injection of lipopolysaccharide, and the combination of IL-1α, TNF and the complement component C1q was observed to promote A1 polarization in vitro 36. Our model of viral infection also exhibits elevated production of IL-1, TNF and C1q 14. Consistent with this, we also observed the in vivo development and persistence of astrocytes with pan-reactive and A1 markers. Proinflammatory astrocytes lose the ability to phagocytose, owing to a decrease in the expression of mRNA encoding the phagocytic receptors Mertk and Megf10, and fail to support synaptogenesis in vitro, owing to decreases in the expression of mRNA encoding Gpc6 and Sparcl1, factors that normally promote excitatory synapse formation 36. Thus, the lack of recovery of synapses in our model could be explained by the generation of this subset of reactive astrocytes. Alternatively, the lack of recovery could also be due to the loss of a specific astrocyte population that normally promotes synaptogenesis 52. Although the expansion of a synapse-promoting astrocyte population leads to seizures in the context of a reactive glioma 52, this population might be crucial for CNS repair in the diverse array of diseases associated with loss of synapses beyond WNV encephalitis, including Alzheimer’s disease 19, schizophrenia 20 and lupus 65. Further studies are needed to more fully elucidate how homeostatic astrocyte populations respond to CNS injury and whether this correlates with regional or functional identity. The discovery that newly derived reactive astrocytes of the proinflammatory A1 subset 36 were responsible for the persistently diminished adult neurogenesis during recovery from WNV infection also suggests that cell-type-specific studies will be needed to elucidate these pathways in disease models. There is a growing body of experimental evidence demonstrating the importance of homeostatic neuro–immune interactions for normal cognitive function 66, 67.
However, the effects of immunological mediators on brain function depend on their levels and locations; this phenomenon has classically been demonstrated as a U-shaped curve relating IL-1 signaling to cognitive function, in which either overexpression or complete blockade of IL-1 signaling negatively affects spatial learning 68. Thus, in the context of neuroinflammation, increased expression of immunological molecules might similarly lead to altered cognitive performance. Of note, we observed a trend toward recovery of neurogenesis in the SVZ that was absent in the hippocampus, an effect that has been reported in response to ionizing radiation 69, suggestive of differential responses to injury in these neurogenic niches. Interestingly, in a study of hypoxia-driven neuroinflammation, hippocampal neurogenesis decreased, while SVZ neurogenesis increased, an effect that was linked to IL-6 signaling 70. Further work addressing the mechanism underlying differential responsiveness to cytokine signaling in neurogenic niches could shed light on basic aspects of the biology of neural precursor cells. Methods Animals 5- to 8-week-old male mice were used at the outset of all experiments. C57BL/6J mice were obtained from Jackson Laboratories. Il1r1−/− mice (backcrossed >10 generations to C57BL/6) were obtained from Jackson Laboratories. Nestin-GFP mice (>10 generations backcrossed to C57BL/6) were obtained from G. Enikolopov (Cold Spring Harbor Laboratories). All experimental protocols were performed in compliance with the Washington University School of Medicine Animal Safety Committee (protocol 20140122). Mouse models of WNV infection For footpad infection, WNV NY-99 (strain 3000.0259) was passaged once in C6/36 Aedes albopictus cells to generate an insect cell–derived stock. 100 plaque-forming units (pfu) of WNV NY-99 were delivered in 50 μl to the footpad of anesthetized mice. For intracranial infection, WNV-NS5-E218A, which harbors a single point mutation in the gene encoding 2′O-methyl-transferase, was obtained from M. Diamond (Washington University) and was passaged in Vero cells as described previously 41. Deeply anesthetized mice were administered 1 × 10⁴ pfu of WNV-NS5-E218A or 10 pfu of WNV NY-99 in 10 μl of 0.5% FBS in HBSS into the brain’s third ventricle via a guided 29-gauge needle. Mock-infected mice were deeply anesthetized and administered 10 μl of 0.5% FBS in HBSS into the brain’s third ventricle via a guided 29-gauge needle. Stock titers of all viruses were determined by viral plaque assay on BHK21 cells as previously described 42. Microarray analysis Further analysis was performed on previously published hippocampal microarray gene-expression data 14 (Gene Expression Omnibus accession code GSE72139) of mock-infected and WNV-NS5-E218A-infected mice at 25 d.p.i. Expression data for selected genes (P < 0.05, or P < 0.1 for italicized genes; two-tailed Student’s t-test) were converted to z-scores for each individual gene and animal and are presented as colorimetric heat maps. In vivo bromodeoxyuridine (BrdU) labeling BrdU (Sigma Aldrich) in sterile PBS was injected intraperitoneally for all experiments. For acute immunohistochemical studies (harvested 7–8 d.p.i.), mice were given 150 mg/kg of BrdU at 48 h before tissue harvest. For flow cytometry, mice were given 100 mg/kg of BrdU at 48 h and again at 24 h before tissue harvest. For neuronal BrdU labeling, mice were given 75 mg/kg every 12 h beginning at 3 d.p.i.
and ending at 7 d.p.i., for a total of seven doses. Antibodies Antibodies to WNV (1:100; described previously 8), NeuN (1:100, Cell Signaling, Cat 12943S, Clone D3S3I), BrdU (1:200, Abcam, Cat ab1893, polyclonal), CD45 (Biolegend, Cat 103114, Clone 30-F11), doublecortin (1:150, Cell Signaling, Cat 4604S, polyclonal), GFAP (1:50 for flow cytometry; 1:200 for IHC, BD, Cat 561483, Clone 1B4), IL-1β (R&D, Cat AF-401, polyclonal), Mash1 (BD, Cat 556604, Clone 24B72D11.1), Ki67 (Abcam, Cat AB15580, polyclonal) and synaptophysin (1:250, Synaptic Systems, Cat 101004, polyclonal) were used. Secondary antibodies conjugated to Alexa-488 (Invitrogen, Cat A-21206, polyclonal) or Alexa-555 (Invitrogen, Cat A-21435, polyclonal) were used at a 1:400 dilution. Immunohistochemistry Following perfusion with ice-cold PBS and 4% paraformaldehyde (PFA), brains were immersion-fixed overnight in 4% PFA, followed by cryoprotection in two exchanges of 30% sucrose for 72 h, then frozen in OCT (Fisher). 9-μm-thick fixed-frozen coronal brain sections were washed with PBS and permeabilized with 0.1% Triton X-100 (Sigma-Aldrich), and nonspecific antibody binding was blocked with 5–10% normal goat serum (Santa Cruz Biotechnology) for 1 h at 23 °C. A Mouse on Mouse kit (MOM basic kit, Vector) was used per the manufacturer’s protocol when detecting synaptophysin, to reduce endogenous mouse antibody staining. After blocking, slides were exposed to primary antibody (identified above) or isotype-matched IgG overnight at 4 °C, washed with 0.2% FSG in PBS and incubated with secondary antibodies for 1 h at 23 °C. Nuclei were counterstained with DAPI (Invitrogen) and coverslips were applied with Vectashield (Vector). Immunofluorescence was analyzed using a Zeiss LSM 510 laser-scanning confocal microscope and accompanying software (Zeiss). For each mouse, six to eight images were taken from two to three different coronal sections spaced at least 50 μm apart. Positive immunofluorescent signals were quantified using the public-domain NIH image-analysis software ImageJ. Flow cytometry Cells were isolated from the brains of wild-type mice at 6, 15 or 30 d.p.i. and stained with fluorescence-conjugated antibodies to CD45, BrdU and doublecortin as previously described 37. In brief, mice were deeply anesthetized with a ketamine–xylazine mixture and perfused intracardially with ice-cold dPBS (Gibco). Brains were aseptically removed, minced and enzymatically digested in HBSS (Gibco) containing collagenase D (Sigma, 50 mg/ml), TLCK trypsin inhibitor (Sigma, 100 μg/ml), DNase I (Sigma, 100 U/μl) and HEPES, pH 7.2 (Gibco, 1 M), for 1 h at 23 °C while shaking. The tissue was pushed through a 70 μm strainer and spun down at 500 g for 10 min. The cell pellet was resuspended in a 37% Percoll solution and spun at 1,200 g for 30 min to remove myelin debris. Cells were washed in PBS, then resuspended in FACS buffer. Cells were blocked with TruStain FcX anti-mouse CD16/32 (Biolegend, Cat 101320, clone 93) for 5 min on ice, followed by incubation with fluorescence-conjugated antibodies (identified above) for 30 min on ice. Cells were then washed two times in PBS, fixed with 4% PFA for 10 min at 23 °C and resuspended in FACS buffer. Data were collected with an LSR-II (BD Biosciences) and analyzed with FlowJo software. Ex vivo isolation of microglia and astrocytes with microbeads Mice were deeply anesthetized with a ketamine–xylazine mixture and perfused intracardially with ice-cold dPBS (Gibco).
Brains were aseptically removed, minced and enzymatically digested in HBSS (Gibco) containing collagenase D (Sigma, 50 mg/ml), TLCK trypsin inhibitor (Sigma, 100 μg/ml), DNase I (Sigma, 100 U/μl) and HEPES, pH 7.2 (Gibco, 1 M), for 1 h at 23 °C with shaking. The tissue was pushed through a 70 μm strainer and spun down at 500 g for 10 min. The cell pellet was resuspended in a 37% Percoll solution and spun at 1,200 g for 30 min to remove myelin debris. Cells were washed in PBS, then resuspended in MACS buffer. CD11b⁺ microglia (Miltenyi Biotec, Cat 130-049-601) and ACSA-2⁺ astrocytes (Miltenyi Biotec, Cat 130-097-678) were isolated using manual MACS according to the manufacturer’s instructions. Non-specific labeling of CD11b⁺ microglia was prevented by incubation of the cells with FcR blocking reagent before labeling the cells with antibodies to ACSA-2 for magnetic isolation. The purified microglia and astrocytes were processed for real-time quantitative RT-PCR as described below. Real-time quantitative RT-PCR cDNA was synthesized using random hexamers, oligo(dT)15 and MultiScribe reverse transcriptase (Applied Biosystems). A single reverse-transcription master mix was used to reverse transcribe all samples in order to minimize differences in reverse-transcription efficiency. The following conditions were used for reverse transcription: 25 °C for 10 min, 48 °C for 30 min, and 95 °C for 5 min. Real-time quantitative RT-PCR was performed as previously described 14. Behavioral testing OFT and Barnes maze testing were performed as previously described 14. In brief, mice were tested on the Barnes maze over the course of 5 consecutive days, receiving two trials per day, spaced 30 min apart. For each trial, the mouse was given 3 min to explore the maze and find the target hole. Mice that did not enter the target hole within 3 min were gently guided into the hole. After each trial, the mouse remained in the target hole for exactly 1 min and then was returned to its home cage. The maze was decontaminated with 70% ethanol between each trial. The number of errors (nose pokes over non-target holes) was measured. 1 d before Barnes maze testing, mice were tested via OFT to monitor differences in exploratory behavior. Each animal was given 5 min to explore an open field arena before returning to its home cage. Behavior was recorded using a camera (Canon Powershot SD1100IS), and an experimenter blinded to group assignment scored the trials. Anakinra treatment Anakinra (Kineret, Sobi) was diluted in PBS to 10 mg/ml, and mice were treated with 100 mg/kg/day of anakinra or vehicle (PBS) for 5 consecutive days by intraperitoneal injection. Mice were weighed daily to monitor weight loss or adverse events, none of which were observed during the course of treatment. Statistical analysis Statistical analyses were performed using Prism 7.0 (GraphPad Software). All data were analyzed using an unpaired Student’s t-test, or one-way or two-way ANOVA with Bonferroni post-test to correct for multiple comparisons, as indicated in the corresponding figure legends. A P value of ≤ 0.05 was considered significant. Life Sciences Reporting Summary Further information on experimental design and reagents is available in the Life Sciences Reporting Summary. Data availability Data supporting the findings in this study are available upon request from the corresponding author.
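To illustrate the statistical workflow just described, here is a minimal sketch using SciPy. The group values are made-up placeholders, and the Bonferroni post-test is shown as a simple manual correction of pairwise t-test P values, which is one common way to implement it (Prism's implementation may differ in detail).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical measurements (e.g., normalized cell counts) for three groups.
mock   = rng.normal(100, 10, size=5)
wnv_wt = rng.normal(60, 10, size=5)
wnv_ko = rng.normal(95, 10, size=5)

# Two-group comparison: unpaired, two-tailed Student's t-test.
t, p = stats.ttest_ind(mock, wnv_wt)
print(f"mock vs WNV WT: t = {t:.2f}, P = {p:.4f}")

# Multi-group comparison: one-way ANOVA, then Bonferroni-corrected pairwise
# t-tests (each pairwise P value is multiplied by the number of comparisons).
f, p_anova = stats.f_oneway(mock, wnv_wt, wnv_ko)
print(f"one-way ANOVA: F = {f:.2f}, P = {p_anova:.4f}")

pairs = [("mock", mock, "WNV WT", wnv_wt),
         ("mock", mock, "WNV KO", wnv_ko),
         ("WNV WT", wnv_wt, "WNV KO", wnv_ko)]
for name_a, a, name_b, b in pairs:
    _, p_pair = stats.ttest_ind(a, b)
    p_corr = min(p_pair * len(pairs), 1.0)  # Bonferroni correction
    print(f"{name_a} vs {name_b}: corrected P = {p_corr:.4f}")
```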
More than 10,000 people in the United States are living with memory loss and other persistent neurological problems that occur after West Nile virus infects the brain. Now, a new study in mice suggests that such ongoing neurological deficits may be due to unresolved inflammation that hinders the brain's ability to repair damaged neurons and grow new ones. When the inflammation was reduced by treatment with an arthritis drug, the animals' ability to learn and remember remained sharp after West Nile disease. "These memory disturbances make it hard for people to hold down a job, to drive, to take care of all the duties of everyday life," said senior author Robyn Klein, MD, Ph.D., a professor of medicine at Washington University School of Medicine in St. Louis. "We found that targeting the inflammation with the arthritis drug could prevent some of these problems with memory." The findings are available online in Nature Immunology. Spread by the bite of a mosquito, West Nile virus can cause fever and sometimes life-threatening brain infections known as West Nile encephalitis. About half the people who survive the encephalitis are left with permanent neurological problems such as disabling fatigue, weakness, difficulty walking and memory loss. These problems not only persist but often worsen with time. Klein and colleagues previously had shown that during West Nile encephalitis, the patient's own immune system destroys parts of neurons, leading to memory problems. "We started wondering why the damage isn't repaired after the virus is cleared from the brain," said Klein, vice provost and associate dean for graduate education for the Division of Biology & Biomedical Sciences. "We know that neurons are produced in the part of the brain involved in learning and memory, so why weren't new neurons being made after West Nile infection?" To find out, Klein; co-first authors Michael Vasek, a postdoctoral researcher, and graduate research assistant Charise Garber; and colleagues injected mice with West Nile virus or saltwater. During the acute infection, the mice received several doses of a chemical compound that tags neural cells as they are formed. Forty-five days after infection, the researchers isolated the tagged cells from the mice's brains and assessed how many and what kinds of cells had been formed during the first week of infection. Mice ill with West Nile disease produced fewer neurons and more astrocytes – star-shaped neural cells – than uninfected mice. Astrocytes normally provide nutrition for neurons, but the ones formed during West Nile infection behaved like immune cells, churning out an inflammatory protein known as IL-1. IL-1 is an indispensable part of the body's immune system. It is produced by immune cells that swarm into the brain to fight invading viruses. Once the battle is won, the immune cells depart and IL-1 levels in the brain fall. But in mice recovering from West Nile infection, astrocytes continue to produce IL-1 even after the virus is gone. Since IL-1 guides precursor cells down the path toward becoming astrocytes and away from developing into neurons, a vicious cycle emerges: Astrocytes produce IL-1, which leads to more astrocytes while also preventing new neurons from arising. Hampered by an inability to grow new neurons, the brain fails to repair the neurological damage sustained during infection, the researchers said.
"It's almost like the brain gets caught in a loop that keeps IL-1 levels high and prevents it from repairing itself," said Klein, who is also a professor of neuroscience and of pathology and immunology. To see whether the cycle could be broken, Klein and colleagues infected mice with either West Nile virus or saltwater as a mock infection. Ten days later, they treated both groups of mice with a placebo or with anakinra, an FDA-approved arthritis drug that interferes with IL-1. After giving the mice a month to recover, they tested the animals' ability to learn and remember by placing them inside a maze. Mice that had been infected with West Nile virus and treated with a placebo took longer to learn the maze than mock-infected mice. Mice that were infected and treated with the IL-1 blocker learned just as quickly as mock-infected mice, indicating that blocking IL-1 protected the mice from memory problems. "When we treated the mice during the acute phase with a drug that blocks IL-1 signaling, we prevented the memory disturbance," Klein said. "The cycle gets reversed back: They stop making astrocytes, they start making new neurons, and they repair the damaged connections between neurons." But, Klein cautions, IL-1 itself may not be a good drug target for people because of the important role it plays in fighting viruses. Suppressing IL-1 while the virus is still in the brain could exacerbate encephalitis, already a potentially lethal condition. "This is a proof of concept that a drug can prevent cognitive impairments caused by viral encephalitis," Klein said. "This study sheds light on not just post-viral memory disturbances but other types of memory disorders as well. It may turn out that IL-1 is not a feasible target during viral infections, but these findings could lead to new therapeutic targets that are less problematic for clearing virus or to therapies for neurologic diseases of memory impairment that are not caused by viruses."
10.1038/s41590-017-0021-y
Earth
Skiing over Christmas holidays no longer guaranteed—even with snow guns
Maria Vorkauf et al, Snowmaking in a warmer climate: an in-depth analysis of future water demands for the ski resort Andermatt-Sedrun-Disentis (Switzerland) in the twenty-first century, International Journal of Biometeorology (2022). DOI: 10.1007/s00484-022-02394-z Journal information: International Journal of Biometeorology
https://dx.doi.org/10.1007/s00484-022-02394-z
https://phys.org/news/2022-12-christmas-holidays-longer-guaranteedeven-guns.html
Abstract Rising air temperatures threaten the snow reliability of ski resorts. Most resorts rely on technical snowmaking to compensate for a lack of natural snow. But increased water consumption for snowmaking may cause conflicts with other sectors’ water uses, such as hydropower production or the hotel industry. We assessed the future snow reliability (the likelihood of a continuous 100-day skiing season and of operable Christmas holidays) of the Swiss resort Andermatt-Sedrun-Disentis, where 65% of the area is currently equipped for snowmaking, throughout the twenty-first century. Our projections are based on the most recent climate change scenarios for Switzerland (CH2018) and the model SkiSim 2.0, which includes a snowmaking module. Unabated greenhouse gas emissions (scenario RCP8.5) will cause a lack of natural snow in areas below 1800–2000 m asl by the mid-twenty-first century. Initially, this can be fully compensated by snowmaking, but by the end of the century, the results become more nuanced. While snowmaking can provide a continuous 100-day season throughout the twenty-first century, the economically important Christmas holidays are increasingly at risk under the high-emission scenario in the late twenty-first century. The overall high snow reliability of the resort comes at the cost of an increased water demand. The total water consumption of the resort will rise by 79% by the end of the century (2070–2099 compared to 1981–2010; scenario RCP8.5), implying that new water sources will have to be exploited. Future water management plans at the catchment level, embracing the stakeholders, could help to solve future claims for water in the region. Introduction Winter tourism is an important economic sector in mountain regions. Globally, the European Alps are the number one destination for skiing, with 43% of all skier days worldwide. With 24.9 Mio registered skier days in 2018/19, Switzerland ranks as number six in the world (Vanat 2021). In the winter season 2018/19, the Swiss cable cars yielded revenues of 758 Mio CHF (transport only; SBS 2019), underscoring the sector’s substantial economic value. Rising temperatures due to ongoing and future climate change (Rebetez and Reinhard 2008; IPCC 2018) entail severe reductions in the snow cover (Marty 2008; Klein et al. 2016; NCCS 2018; Hock et al. 2019). For the Swiss Alps, winter and spring temperatures are projected to increase by 1.8 K by the end of the twenty-first century if we drastically reduce greenhouse gas emissions, or by up to 3.9 K without any abatement measures (high-emission scenario). Winter precipitation will progressively fall as rain instead of snow and may increase by 12%. However, the projections for the precipitation increase are less clear than those for air temperature (NCCS 2018). Winter runoff will increase and the peak runoff will occur earlier because of earlier snowmelt (Haeberli and Weingartner 2020). The operators of ski areas are thus confronted with major challenges for the future. The snow reliability of resorts has often been assessed by means of the 100-day rule (Witmer 1986, for instance used by Abegg et al. 2007; Scott et al. 2008; Steiger and Abegg 2013), which states that a resort requires at least 100 consecutive days with a sufficient snow cover (≥ 30 cm). However, snow reliability does not necessarily result in economic profitability. Another indicator is the Christmas rule introduced by Scott et al.
(2008), which specifies that the 2 weeks over the Christmas and New Year’s break are a crucial time period for the operators, as these holidays can yield around one quarter of the revenues (Abegg 1996). The dominant adaptation strategy of operators to cope with climate change and variability is technical snowmaking (OECD 2007; Gonseth and Vielle 2019; Spandre et al. 2019b; Steiger et al. 2019). Currently, the majority of ski slopes in the European Alps are equipped for snowmaking. According to SBS (2021), the area covered with snowmaking in Switzerland massively increased from 14% (2004) to 48% (2014). Today (2020), 53% of all slopes can be snowed-in technically. This is still markedly less than in Italy (90%) and in Austria (70%), but more than in France (37%). The costs for snowmaking, including the water consumption, are substantial. In Switzerland, these amount to 17% of the daily operating expenses (average for resorts with > 25 Mio CHF revenue; SBS 2021). Surveys among stakeholders in the skiing industry have shown that the operators of ski resorts are very aware of climate change (Abegg et al. 2008). Nevertheless, many do not perceive it as an immediate threat and emphasise the high priority of economic competition and short-term weather variability as major causes of revenue fluctuations (Saarinen and Tervo 2006; Hopkins 2015; Abegg et al. 2017). The adaptation strategy to these more short-term challenges is often also technical snowmaking (Trawöger 2014). A study in Austria highlighted a high confidence in snowmaking facilities, even in low-elevation resorts (Wolfsegger et al. 2008). However, increasing temperatures will reduce the snowmaking potential, as high temperatures and/or high relative humidity inhibit snow production (Willibald et al. 2021). From 1961 to 2020, the number of hours allowing for snowmaking decreased on average by 26% in Austria, with more pronounced reductions at elevations between 1000 and 1500 m asl (Olefs et al. 2020). Nonetheless, water demand is expected to increase markedly, by 50% to 110% across the Alps, according to Steiger et al. (2019). These higher water demands for snowmaking must be put into perspective with water uses in other sectors, such as hydropower production, agriculture, and tourism infrastructure, as well as their future demands under a warmer climate. The ski resort Andermatt-Sedrun-Disentis has recently expanded the ski area with roughly 68 ha of new slopes and with new snowmaking facilities. Such major interventions in the landscape are becoming increasingly controversial, especially in times of climate change and a declining demand for ski tickets. Moreover, the short planning horizon of operators does not account for the rising water demand for snowmaking that is very likely under future climatic conditions. Our detailed information about the snowmaking facilities and the snowmaking practices of the operators allows us to present an in-depth analysis of the ski area’s future snow reliability throughout the twenty-first century, using the SkiSim 2.0 model developed by Steiger (2010; based on the SkiSim 1.0 model by Scott et al. 2003). Based on the RCP (Representative Concentration Pathway) scenarios for Switzerland (NCCS 2018), we simulate the future snow cover in the ski area and assess the snow reliability in terms of the 100-day and the Christmas rule. SkiSim 2.0 includes a snowmaking model, enabling us to estimate the future water consumption for snowmaking.
We expect a strong decline in the natural snow reliability by the mid-twenty-first century that will likely be compensated by snowmaking. We hypothesize that maintaining the resort’s snow reliability will only be feasible at the cost of a strongly enlarged water demand. Material and methods Ski resort Andermatt-Sedrun-Disentis The ski resort Andermatt-Sedrun-Disentis in the Swiss central Alps formerly consisted of two separate skiing regions (Gemsstock/Nätschen and Sedrun/Disentis) that were connected by railway from Andermatt to Sedrun/Disentis (Fig. 1). An ambitious project launched in 2005 planned the expansion of the ski area along with the construction of luxury hotels, penthouse apartments and a golf course. From 2015 to 2018, 130 to 150 Mio CHF were invested to connect the two ski regions with 68 ha of new ski runs, the construction or replacement of 14 ski lifts and a large-scale expansion of the snowmaking facilities. The entire resort Andermatt-Sedrun-Disentis comprises around 270 ha of skiing slopes, 175 ha of which are equipped for snowmaking. With the expansion, the operators obtained an additional concession to build a new reservoir lake and to use groundwater in Andermatt whenever the water consumption exceeds the current availability (personal communication with former CEO Silvio Schmid). Fig. 1 Map with the three ski regions Gemsstock, Nätschen/Oberalp and Sedrun. The red line within each region indicates the critical access elevation, above which skiing is possible even if the lower areas are closed. The miniature map of Switzerland shows the location of Andermatt Full size image The highest point of the ski area is on the Gemsstock at 2961 m asl, the lowest point in Andermatt at 1444 m asl. Because of different snowmaking capacities and different water sources, we divided the ski area, with approximately 270 ha of ski runs, into three regions: (1) Gemsstock, (2) Nätschen/Oberalp, and (3) Sedrun (Fig. 1; see Electronic Supplementary Material [ESM 1] for the official map). The region Gemsstock is known for freeriding and has mostly northerly exposed ski runs, partly on the small Gurschen and St. Anna firns. The area that is operative for snowmaking (roughly 27 ha, Table 1) is mostly situated below 2100 m asl. The more southerly exposed region of Nätschen/Oberalp (Fig. 2) includes most of the newly built ski runs and chairlifts. Almost the entire region is now equipped with modern snowmaking facilities featuring the highest water pumping rates (Table 1), with a serviceable area of roughly 99 ha. The highest point of the region is at 2600 m asl. The region Sedrun goes up to 2350 m asl and is the lowest of the three regions. The slopes mainly face towards the north or east/west (Fig. 2), and all facilities for snowmaking in this region already existed before the investments made between 2015 and 2018, covering an area of 49 ha for snowmaking. The infrastructure dates back to the 1990s. Table 1 Snowmaking information for the three regions of Andermatt-Sedrun-Disentis, obtained from the operators Full size table Fig. 2 The operative areas for snowmaking along the elevational bands and for the aspects north (N), east or west (E/W; together), and south (S) in the three skiing regions. Currently, around 65% of the skiing slopes can be technically snowed-in Full size image For each of the three regions, we identified a critical access elevation (red line in Fig. 1) based on the skiing infrastructure.
The area above these critical access elevations can be reached via cable cars. Hence, the ski regions may remain operable even if the slopes below these elevations have to stay closed. If the slopes above the critical access elevations become unskiable, the region would no longer be operational. In the Gemsstock region, the critical access elevation is at 2000 m asl, on Nätschen/Gütsch at 1800 m asl, and in Sedrun at 1900 m asl. The model SkiSim 2.0 The model SkiSim 2.0 computes the daily snowpack (in mm water equivalents) considering natural and man-made snow using two modules: (1) a natural snow module and (2) a snowmaking module. The natural snow module is a degree-day model using daily mean temperature and precipitation as input data. In order to distinguish between snow and rain events and a snow/rain mixture, a lower and an upper daily mean temperature threshold are calibrated based on daily snowfall data (< 1 °C: snow, > 3 °C: rain, between: snow/rain mix). The daily melt is estimated based on degree days (daily mean temperature > 0 °C). The so-called degree-day factor, which describes the melt that occurs per degree day, is also fitted during the model calibration process with the number of snow days (snow depth ≥ 1 cm) per season (ESM 2). The number of snow days was slightly overestimated by the model (1.7% in Andermatt and 3.1% in Sedrun; Table 2). For further details on the natural snow module, refer to Steiger (2010). Table 2 The observed vs. modelled number of days with natural snow cover (≥ 1 cm) for the calibration (1981–1987) and for the validation period (1988–2010) at the stations Andermatt and Sedrun. R² is the coefficient of determination for the respective period Full size table The years 1981 to 1987 were used for model calibration and 1988 to 2010 for model evaluation (both periods together denote the reference period). In this application, we used separate degree-day factors for three aspect classes: − 25% for north-exposed ski slopes, + 25% for south-exposed slopes and an unchanged calibrated degree-day factor for east- and west-facing slopes. The weather stations Andermatt (1442 m asl; ESM 3) and Sedrun (1429 m asl) were used for the input data, Andermatt for the regions Gemsstock and Nätschen/Oberalp, and Sedrun for the region Sedrun. Temperature and precipitation are extrapolated from the elevation of these weather stations to the elevation range of the ski areas in 100 m bands. We used region-specific air temperature lapse rates, fitted during model calibration, between Sedrun and Gütsch (2287 m asl) and between Andermatt and Gütsch, respectively. Separate lapse rates were calculated for each month of the year, and for dry (< 1 mm precipitation) and wet days (≥ 1 mm), respectively. For the precipitation, we assumed a constant 3% increase per 100 m of elevation (Steiger 2010). The snowmaking module takes into account that the operators of the ski area start to produce snow on certain dates (see Table 1), provided temperatures are low enough. For comparability with other SkiSim studies (e.g., Steiger and Scott 2020), we used a − 2 °C air temperature as the threshold for snowmaking. Note that this threshold is rather conservative given the wet-bulb temperature threshold provided by the ski area operators (Table 1). For instance, a wet-bulb temperature of − 2 °C corresponds to a − 1 °C air temperature at 80% humidity, while at 100% humidity no evaporative cooling occurs.
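The station-to-band extrapolation described above reduces to two small formulas, sketched here in Python; the lapse rate in the example is an arbitrary placeholder, since the real values are fitted per month and per dry/wet day class. The production rule of the snowmaking module continues below.

```python
def extrapolate_to_band(t_station, p_station, z_station, z_band, lapse_per_100m):
    """Shift station temperature (deg C) and precipitation (mm) to an
    elevation band, using a fitted temperature lapse rate and the assumed
    constant 3% precipitation increase per 100 m (Steiger 2010)."""
    dz = (z_band - z_station) / 100.0
    t_band = t_station + lapse_per_100m * dz
    p_band = p_station * 1.03 ** dz
    return t_band, p_band

# Example: shift Andermatt (1442 m asl) conditions to the 2000 m band with a
# hypothetical lapse rate of -0.55 K per 100 m.
t, p = extrapolate_to_band(-1.0, 5.0, 1442, 2000, -0.55)
print(f"band values: {t:.1f} deg C, {p:.1f} mm")
```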
Snow is produced until the base layer is 30 cm thick (corresponding to a snow water equivalent of 120 mm at a snow density of 400 kg m⁻³). This is the so-called base-layer snowmaking, which is required for skiing. Thereafter, more snow is produced to sustain skiing until the end of the scheduled season. In the model, the snow production is calculated hourly, using temperatures interpolated between the daily minimum and maximum. Refer to Steiger (2010) for a detailed description of the snowmaking module. We ran the model for each of the three regions (Gemsstock, Nätschen/Oberalp, Sedrun) separately, each divided into elevational bands of 100 m. We computed the water consumption for a hydrological year that includes the full skiing season (year y runs from Sept 1st of year y−1 to Aug 31st of year y). Based on the daily snowpack, we determined the probability of a continuous snow cover for 100 days in a row (100-day rule) and of a continuous snowpack over Christmas and New Year (Christmas rule; defined as Dec 22nd to Jan 4th). As suggested by Abegg et al. (2021), the selection of the snow reliability indicators was done in close co-operation with the ski area operators. Snow reliability and high snow reliability are given when the 100-day rule is fulfilled in 70% and 90% of the winters, respectively, as ski areas are expected to be able to withstand single years with less favorable conditions. We defined the snow reliability of the Christmas rule in the same way. To achieve results that are representative of the entire regions, we calculated area-weighted means of the probabilities, accounting for the area equipped for snowmaking in the elevational bands and in the aspect classes, as in Steiger and Stötter 2013 (and similar to François et al. 2014, who weighted by ski lift power). The probabilities of each simulation were calculated based on the number of years in the 30-year time period when the 100-day rule or the Christmas rule was fulfilled. Unless indicated otherwise, all reported probabilities refer to the median of all simulations in an RCP scenario (RCP2.6, RCP4.5 and RCP8.5, see below) for a given time period during the twenty-first century (three time periods, see below). When results are visualized for single aspects, they always refer to the east/west aspect (north and south aspects are presented in the ESM). The information about technical issues (area equipped for snowmaking, pumping rates, allowed water extraction) and about snowmaking practices (adopted wet-bulb temperatures, start dates; see Table 1) was obtained from Andermatt-Sedrun-Disentis directly (formerly SkiArena Andermatt Sedrun). Climate change scenarios and data availability The CH2018 climate change scenarios were produced for single weather stations (see ESM 3 for the station Andermatt) as well as for a 2 × 2-km grid over the whole of Switzerland (NCCS 2018). There are three RCP scenarios with a total of 68 simulations: RCP2.6 (greenhouse gas emission stop with warming of less than 2 K compared to pre-industrial times; 12 simulations), RCP4.5 (emission stop in the second half of the twenty-first century, warming > 2 K; 25 simulations), and RCP8.5 (high-emission scenario without emission stop; 31 simulations). These scenarios include daily simulations of temperature and precipitation until the year 2099. Here, we present the results for three time periods: 2020–2049 (early century), 2045–2074 (mid-century), and 2070–2099 (end of century).
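As an illustration of how the two modules and the 100-day rule fit together, here is a highly simplified daily sketch in Python. It uses the thresholds quoted above (snow below 1 °C, rain above 3 °C, melt above 0 °C, snowmaking at or below −2 °C, and a 120 mm SWE base layer); the degree-day factor, the production rate and the linear snow/rain mixing are placeholder assumptions, and the real SkiSim 2.0 works with aspect-dependent factors, hourly production and pumping limits that are omitted here.

```python
import numpy as np

DDF = 4.0                  # placeholder degree-day factor (mm SWE per degree-day)
T_SNOW, T_RAIN = 1.0, 3.0  # calibrated daily-mean thresholds (deg C)
T_GUN = -2.0               # air-temperature threshold for snowmaking (deg C)
BASE_SWE = 120.0           # 30 cm base layer at 400 kg/m3 = 120 mm SWE
GUN_RATE = 10.0            # placeholder daily production (mm SWE)

def season_snowpack(t_mean, precip, guns_scheduled):
    """Daily snowpack (mm SWE) from daily mean temperature, precipitation
    and a boolean mask of days on which snowmaking is scheduled."""
    swe, track = 0.0, []
    for t, p, gun_on in zip(t_mean, precip, guns_scheduled):
        if t < T_SNOW:                    # all precipitation falls as snow
            swe += p
        elif t < T_RAIN:                  # assumed linear snow/rain mix
            swe += p * (T_RAIN - t) / (T_RAIN - T_SNOW)
        if t > 0.0:                       # degree-day melt
            swe = max(swe - DDF * t, 0.0)
        if gun_on and t <= T_GUN and swe < BASE_SWE:
            swe += GUN_RATE               # base-layer snowmaking
        track.append(swe)
    return np.array(track)

def fulfils_100_day_rule(swe, min_swe=BASE_SWE):
    """True if the snowpack stays at or above the skiing minimum for
    at least 100 consecutive days."""
    run = best = 0
    for ok in swe >= min_swe:
        run = run + 1 if ok else 0
        best = max(best, run)
    return best >= 100
```

Applying season_snowpack to every simulated winter of a 30-year scenario period and counting the share of winters for which fulfils_100_day_rule holds gives, in spirit, the reliability probabilities compared against the 70% and 90% thresholds described above.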
We refer to the 30-year periods of the RCP scenarios to get an estimate of the future conditions under the three scenarios. However, single extreme years may differ significantly from these estimates (for instance, the “avalanche winter” of 1999; ESM 3). For the weather station in Andermatt, the scenarios for the twenty-first century do not include any simulations of the minimum air temperature. We therefore used the gridded scenarios and extracted the input data for SkiSim 2.0 (minimum and maximum air temperature, precipitation) from the 2 × 2-km grid cell containing Andermatt. To account for the elevational difference between the grid cell and the weather station, we applied the temperature and precipitation lapse rates of the model. Water consumption We modelled the water consumption of the ski area for the reference period of the climate change scenarios (1981 to 2010). Theoretically, these numbers could then be compared to the actual water usage of that period. However, actual water usage is only available for the winters 2002–2017 (Sedrun) and 2014–2017 (Gemsstock). While these numbers refer to the snowmaking facilities of that time, the modelled water consumption is based on the full expansion of the facilities as of 2018. Thus, we could only carry out a plausibility check on the modelled water consumption figures (see results section). Results Future snow reliability In terms of the 100-day rule and of our 70% threshold, the ski area is naturally snow reliable throughout the early twenty-first century, especially above the critical access elevations (Fig. 3; see ESM 4 and 5 for north and south exposure). Under RCP8.5, the snow reliability below 1700 m asl starts to drop below 70% without any technical snow, but it can be maintained high with snowmaking (ESM 6). It continues to decrease towards the end of the twenty-first century, but mainly under RCP8.5. While snowmaking will compensate for the lack of natural snow in the regions of Gemsstock and Nätschen/Oberalp, in Sedrun this will not be feasible at elevations below 1800 m asl (Fig. 3; east/west exposure). Compared to the east/west aspect, the snow reliability is lower on southerly exposed slopes, especially below the critical access elevation, where the snow reliability is generally lower (natural snow: 13–23%; technical snow: 0–13%; below the critical access elevation under RCP8.5 by the end of the century), and it is higher on northerly exposed slopes (natural snow: 7–23%; technical snow: 0–10%; ESM 5). Fig. 3 The probability of 100 consecutive days that are operable for skiing on natural snow (dashed line) and with technical snow (solid line) for the three regions Gemsstock, Nätschen/Oberalp, Sedrun under the three RCP scenarios and for three time periods of the twenty-first century. The lines represent the median of all simulations per RCP scenario and 50% of the simulations lie in the shaded ribbon. The horizontal lines indicate the snow reliability at 70% and 90%, the vertical lines refer to the critical access elevation. All results refer to an east/west aspect; north and south exposures are in ESM 4 and 5. At a probability of 1, the lines of the three scenarios overlap Full size image The natural snow reliability over the Christmas holidays is generally lower than for an operable season of 100 days (Fig. 4; see ESM 7 and 8 for north and south exposure). Snowmaking will mostly allow skiing over the holidays. However, under RCP8.5, Christmas skiing becomes increasingly unlikely by the end of the century (Fig. 5).
The influence of southerly exposed slopes on the snow reliability is somewhat smaller over the Christmas holidays than for the 100-day rule (natural snow: 3–13%; technical snow: 3–7%; below the critical access elevation under RCP8.5, end of century; ESM 8). This smaller impact of the exposure is most likely due to lower temperatures in December and January. Fig. 4 The probability that the resort is operable during the Christmas holidays on natural snow only (dashed line) and with technical snow (solid line) for the three regions Gemsstock, Nätschen/Oberalp, Sedrun under the three RCP scenarios and for three time periods of the twenty-first century. See the legend of Fig. 3 for a detailed description. All results refer to an east/west aspect; north and south exposures are in ESM 7 and 8 Full size image Fig. 5 The probability of operable Christmas holidays with and without snowmaking at the end of the century under RCP8.5. Areas depicted in white are not serviceable for snowmaking and the red line is the critical access elevation Full size image Gemsstock The natural snow reliability of the Gemsstock region is generally high throughout the whole twenty-first century (100-day rule; Fig. 3), particularly above the critical access elevation of 2000 m asl. By the end of the century, natural snow will not suffice to sustain a continuous skiing season of 100 days. But this concerns the lower areas under the RCP8.5 scenario only and may be compensated with technical snowmaking (Table 3). Table 3 The median likelihood (area-weighted) for fulfilling the 100-day and the Christmas rule below and above the critical access elevation, and under three RCP scenarios (RCP2.6, RCP4.5, RCP8.5) for three time periods on natural snow and with snowmaking. Dark green: highly snow reliable, light green: snow reliable, orange: not snow reliable Full size table The Christmas holidays are snow reliable at the beginning of the century, but the snow reliability gradually decreases towards the end of the twenty-first century (Fig. 4). Under RCP8.5, the low-elevation areas become snow scarce by the mid-century already (with enough natural snow in only 56% of the winters). But with snowmaking, the whole Gemsstock region remains snow reliable until the end of the century (Table 3; Fig. 5 and ESM 6). Nätschen/Oberalp Both the natural and the technical snow reliability of the region Nätschen/Oberalp are about as high as in the Gemsstock region (100-day rule; Fig. 3). However, the critical access elevation of 1800 m asl, above which the slopes can be skied unrestrictedly, is 200 m lower than on the Gemsstock. Accordingly, under RCP8.5, the low-elevation areas will no longer be naturally snow reliable by the mid-century. By the end of the century, natural snow is projected to suffice for a 100-day skiing season in only one out of ten winters (Table 3). Because of the widespread snowmaking facilities, the region will generally remain snow reliable throughout the whole twenty-first century, even under RCP8.5. However, the Christmas holidays will likely become increasingly snow scarce (Table 3). Below the critical access elevation, producing a base layer of technical snow for Christmas skiing may only be possible in 68% of the winters (RCP8.5). Sedrun Even though Sedrun is the least snow reliable region (Fig. 3), the 100-day rule will still be fulfilled throughout most of the twenty-first century.
However, from the mid-century on, natural snow reliability is given in only 66% of the winters below the critical access elevation of 1900 m asl (RCP8.5). With technical snow, it can be maintained very high, fulfilling the 100-day rule in 99% of the winters. By the end of the century, the natural snow reliability at low elevations will only be given every fifth winter, and high temperatures will render sufficient snowmaking impossible (only feasible in 64% of the winters; Table 3). Above the critical access elevation of 1900 m asl, snowmaking keeps the area snow reliable even under RCP8.5 at the end of the century. The situation during the Christmas holidays is projected to be considerably worse in Sedrun than in the other two regions, Gemsstock and Nätschen/Oberalp (Fig. 4). By the mid-century, the natural snow reliability at lower elevations will decrease drastically (RCP4.5 and RCP8.5), even above the critical access elevation of 1900 m asl (RCP8.5; Table 3). The lack of natural snow can be compensated by snowmaking, but under RCP8.5, the region will no longer be snow reliable, not even with snowmaking above the critical access elevation (Fig. 5, Table 3). Water consumption Water consumption in the reference period The mean water consumption in the Gemsstock region between 2014 and 2016 was 48 × 10³ m³ season⁻¹, whereas in the snow-scarce winter of 2017 it was 150 × 10³ m³ season⁻¹; thus, it more than tripled. In the region Sedrun, the mean water consumption between 2006 and 2016 was 114 × 10³ m³ season⁻¹, and in the winter of 2017 it increased by ca. 70% to 195 × 10³ m³ season⁻¹. For the region Nätschen/Oberalp, the facilities are simply too new to obtain water consumption data for past winters (they have only been running since winter 2019). Because the model assumes a fully expanded ski area, the water consumption of the past years may mainly serve as a plausibility check for the model outcomes. The modelled baseline water consumption of the three regions of the ski area (reference period 1981–2010) is estimated at 46.8 × 10³ m³ season⁻¹ for Gemsstock (17% of the total of 301.5 × 10³ m³ season⁻¹; 3% lower than the observations for 2014–2016), 172.3 × 10³ m³ season⁻¹ for Nätschen/Oberalp (57%), and 82.4 × 10³ m³ season⁻¹ for Sedrun (26%). When we used the temperature and precipitation of the reference period in Sedrun (instead of the RCP climate change scenarios), the modelled water consumption for the reference period was 99.0 × 10³ m³ season⁻¹ (13% lower than the observations for 2006–2016). Our model did not include any “water losses” (see below) and assumed that the operators only produced as much technical snow as required to guarantee a minimum snow depth of 30 cm until the scheduled season ends. However, operators often produce more snow than assumed in our model, as the course of the season is still unknown when the snow is produced (mainly between October/November and January). Thus, our modelled water consumption is rather conservative, and it is therefore likely that our future projections are underestimates. Water consumption in the twenty-first century The total water consumption at the end of the century will increase by 4% (RCP2.6), 16% (RCP4.5), or even 79% (RCP8.5) compared to the baseline. Below the critical access elevations (1800–2000 m asl), the relative increase in the water consumption will be much higher: 15% (RCP2.6), 47% (RCP4.5), and 195% (RCP8.5; reference 82.1 × 10³ m³ season⁻¹).
Above the critical access elevation, increases in water consumption will only amount to 0% (RCP2.6), 3% (RCP4.5), and 35% (RCP8.5) by the end of the century (reference of 219.5 × 10³ m³ season⁻¹). Hypothetically, the operators could decide to fully abandon snowmaking below the critical access elevations and only operate the higher areas. This would theoretically diminish the total water consumption of the ski area compared to the reference period, even at the end of the twenty-first century (RCP2.6: − 28%, RCP4.5: − 25%, RCP8.5: − 2%). If greenhouse gas concentrations were to stay at today’s levels thanks to successfully applied abatement measures (RCP2.6), the total water consumption of the ski area would only be 4% higher than during the reference period by the end of the twenty-first century. Thus, in the following, we report the results of the RCP4.5 and the RCP8.5 scenarios for each region. Gemsstock On the Gemsstock, the modelled water consumption for the reference period was 29.5 × 10³ m³ season⁻¹ for areas above the critical access elevation of 2000 m asl, and 17.3 × 10³ m³ season⁻¹ for the lower areas. In line with the high snow reliability throughout the twenty-first century, there is practically no increase in the water consumption above 2000 m asl (+ 7% by the end of the century under RCP8.5). Below the critical access elevation, including the runs to the valley bottom, the water demand in the mid-century will rise by 22% and 51% under RCP4.5 and RCP8.5, respectively, and by 35% (RCP4.5) and 162% (RCP8.5) by the end of the century (Fig. 6). Fig. 6 The increase in water consumption for the three regions compared to the baseline (reference 1981–2010; modelled with the fully expanded skiing resort). High and low elevations are above and below the critical access elevation, respectively Full size image Nätschen/Oberalp Above the critical access elevation of 1800 m asl, the yearly water consumption during the reference period was 146.4 × 10³ m³ season⁻¹, and 25.9 × 10³ m³ season⁻¹ for the lower-lying areas. In this region, only 9% of the area equipped for snowmaking lies below 1800 m asl (Fig. 2). As on the Gemsstock, the increase in water consumption above the critical access elevation is moderate (0–5%), except under RCP8.5 at the end of the century (28%). However, below 1800 m asl, our model projects massive increases in water consumption. Under RCP4.5, these will be 13% (early century), 40% (mid-century), and 61% (end of century), while under RCP8.5, they will even be 24%, 101%, and 271% (Fig. 6). Sedrun In the region Sedrun, the modelled yearly water consumption was 43.6 × 10³ m³ season⁻¹ for elevations above 1900 m asl (the critical access elevation), and 38.9 × 10³ m³ season⁻¹ for lower elevations during the reference period. As in the other two regions, increases in water consumption above the critical access elevation will range between 2 and 9%, but under RCP8.5, we project a rise of 78% by the end of the century. Below 1900 m asl, the water use under RCP4.5 will go up by 39% by the end of the century. Under RCP8.5, the additional water required for snowmaking will increase by 53% in the mid-century, and by 142% by the end of the century (Fig. 6). Discussion Snow reliability Our in-depth analysis of Andermatt-Sedrun-Disentis shows that the ski area remains snow reliable (a consecutive 100-day season) in the twenty-first century, provided that snowmaking is intensified.
Fortunately for the operators, the resort as a whole has multiple entry points that remain accessible even when lower parts of the ski area may no longer be skiable. At high elevations, the entire ski area will be fully operational for at least 100 consecutive days. About three quarters of the total ski area lie above 2000 m asl, and this is also where the majority of the new lifts, slopes, and ski runs were constructed (2015–2018). Ultimately, only the snow reliability on the valley runs cannot be maintained. The upper parts of the ski area, including the newly built slopes, are projected to be operable until the end of the twenty-first century. Due to future climatic conditions and the high investment costs for modern snowmaking facilities (Abegg et al. 2008), there will be a diminishing number of operable ski areas worldwide (e.g., Fang et al. 2019; Spandre et al. 2019b; Steiger et al. 2019; Scott et al. 2020). As the snow-scarce winters in the late 1980s showed, high-elevation ski areas can benefit from increased visitor numbers when ski areas at lower elevations close (Koenig and Abegg 1997; Steiger et al. 2019). Accordingly, we assume that in the mid-term, Andermatt-Sedrun-Disentis may even profit from the shutting down of other ski areas. In contrast to the minimal season length of 100 consecutive days, the resort’s situation over the Christmas holidays (Christmas rule) is much less stable throughout the twenty-first century. Unreliable snow conditions for ski areas during the Christmas holidays are projected to emerge globally (Berghammer and Schmude 2014; Steiger et al. 2019; Steiger and Scott 2020), as for instance in 2017, when the onset of snow in Andermatt was on January 3rd. In Andermatt-Sedrun-Disentis, this mainly affects the region Sedrun, where snowmaking will reach its limits by the end of the century. In a comprehensive analysis of 34 ski areas in the canton of Grisons (eastern CH), Abegg et al. (2015) used the same 70% threshold for the 100-day and the Christmas rule to assess snow reliability and highlighted that only 15% of the ski areas would be naturally snow reliable by the end of the century. Snowmaking could increase the share of snow reliable ski areas to 56%, but the required snow production would rise by more than 100%. Sedrun was one of the analysed ski areas, and they projected that, with snowmaking, it would still be snow reliable by the end of the twenty-first century. Our results, based on a higher spatial resolution and the latest version of the Swiss climate change scenarios, reveal that the ski area will be only partially snow reliable: it fulfils the 100-day rule, but no longer the Christmas rule. The low snow reliability in the region Sedrun is partly due to older snowmaking facilities compared to Gemsstock and Nätschen/Oberalp. Technical snow reliability could be increased by renewing these old facilities, thereby allowing for higher water pumping rates (pumping rates at Nätschen/Oberalp are five to ten times higher than those in Sedrun). This is underpinned by the very snow-scarce winter of 2017: while the water consumption in the Gemsstock region tripled, in the region of Sedrun it increased by only roughly 70%. It is likely that the older snowmaking facilities restricted the production of technical snow. This reinforces a study across six Norwegian ski resorts, where lower snowmaking capacities were related to a higher vulnerability over the Christmas holidays (Dannevig et al.
2021). However, natural snow reliability was also lower in Sedrun; the lower overall snow reliability therefore cannot be attributed to the older snowmaking facilities alone. Generally, there has been rapid technical development of such facilities, as evidenced by the high pumping rates of the new installations in Nätschen/Oberalp (270 L s⁻¹). Such high performance, in combination with a sufficient water supply, is considered crucial for snow reliability in snow-scarce years. Even though further gains in the efficiency of snowmaking facilities are likely, the technology itself is bound by physical limitations (wet-bulb temperature). Hence, unsuitable climatic conditions, as often observed at the beginning of the season and/or at low elevations, substantially reduce the potential benefit of snowmaking facilities (Berard-Chenu et al. 2022). Water consumption Generally, the water consumption for snowmaking in the European Alps is estimated to increase by between 50 and 110% by 2050 (Steiger et al. 2019). A rising demand for technical snow causes higher water costs, but also increased operating costs and additional investments in snowmaking facilities. We project that in an average winter at the end of the century, the entire resort Andermatt-Sedrun-Disentis will require 79% more water for snowmaking (RCP8.5; roughly an increase from 300 × 10³ m³ season⁻¹ to 540 × 10³ m³ season⁻¹). Taking the 730 L day⁻¹ water consumption of a typical 4-person household as a reference (Abwasser Uri 2019), the water consumption for snowmaking would increase from the equivalent of approximately 1130 to 2020 households. Further investment in snowmaking capacity would push water consumption beyond our model results. In the French Alps, water consumption by the end of the century could even increase ninefold if the area equipped for snowmaking were expanded to 100% (Spandre et al. 2019a). The model SkiSim 2.0 does not account for water losses through wind drift, sublimation, and evaporation, which may lead to an underestimation of the water consumption during snowmaking. Grünewald and Wolfsperger (2019) highlighted that water losses ranged between 7 and 35%, depending on weather conditions. In their field tests, water losses increased by 2.8% per 1 K increase in air temperature. Future conditions for snowmaking will become increasingly unfavourable, and hence future water losses are very likely to increase. Nevertheless, our model results indicate a rather moderate increase in water consumption compared to other ski areas in Switzerland and Austria. For Scuol (eastern Alps, CH), the water consumption by the end of the century may rise by a factor of 2.4–5, and in Hochjoch (AT) by a factor of 2.2–3.7 (Abegg and Steiger 2016). For the winter season 2007 in the winter tourism region Davos (CH), Rixen et al. (2011) compared the water and energy consumption of the ski resort with the drinking water and energy consumption of the municipality. The ski resort used less than 1% of the municipality’s energy, but its water use amounted to 21.5% of the municipality’s drinking water (although drawn from different water sources). The percentage increase in water consumption for the region Nätschen/Oberalp, where most of the new ski runs, lifts, and snowmaking facilities were built, is similar to the percentage increase for the whole ski area (+19% by the mid-century and +65% by the end of the century, RCP8.5).
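As a quick check of the household comparison above, the following sketch converts the seasonal snowmaking volumes into household equivalents, assuming the 730 L day⁻¹ reference consumption applies year-round. This is illustrative arithmetic on the quoted figures, not part of the SkiSim model.

```python
# Convert seasonal snowmaking volumes into 4-person-household equivalents,
# assuming 730 L/day (Abwasser Uri 2019) consumed over a full year.

HOUSEHOLD_M3_PER_YEAR = 730 / 1000 * 365   # about 266 m^3 per household

for label, volume_m3 in [("reference period", 300_000),
                         ("end of century, RCP8.5", 540_000)]:
    households = volume_m3 / HOUSEHOLD_M3_PER_YEAR
    print(f"{label}: ~{households:.0f} households")
# -> roughly 1130 and 2030 households; the small difference from the ~2020
#    quoted above comes from rounding of the input volumes.
```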
Most of the water for snowmaking is extracted from the reservoir lake Oberalpsee (max. 200 × 10³ m³ season⁻¹ extraction, pumping rate of 270 L s⁻¹). Potential future water resources would be an additional reservoir lake (“Ober Gütsch”, 50 × 10³ m³ season⁻¹) and groundwater in Andermatt (200 × 10³ m³ season⁻¹). With unabated greenhouse gas emissions (RCP8.5 scenario), the Oberalpsee reservoir will suffice for the snowmaking activities of the region until the mid-century, but the water demand will clearly exceed the availability by the end of the century. Additional water resources in the range of 80 × 10³ m³, assuming no water losses, will be required. Notably, if greenhouse gas emissions are reduced (RCP4.5), the Oberalpsee reservoir will still meet the water demand for snowmaking even at the end of the century. As the other regions’ water sources are rivers, their water availability is much more constrained by interannual fluctuations, and there are no defined maximum extraction rates per year (residual water flows in rivers have to be guaranteed). Competition for water The Oberalpsee is also used for hydroelectric power generation. The power station Oberalp produces three quarters of its energy during snowmelt and the subsequent snow-free summer months. The withdrawal of water by the ski resort and by the electrical power company is regulated by law (Elektrizitätswerk Ursern 2020). Hence, competition and conflicts between hydropower production and snowmaking may arise if the water levels of the Oberalpsee and the rivers drop in the future. Such conflicts may emerge mainly in drier regions of Switzerland, such as the Engadine or parts of the Valais. Peaks in water demand for tourism often coincide with generally low water levels (Reynard 2020). Nevertheless, water shortages in Swiss tourism regions are usually caused by unsustainable water management strategies and poor distribution among stakeholders, not by a general lack of water (Clivaz and Reynard 2008; Schneider et al. 2016). A detailed study for the French Isère department showed that the water availability for snowmaking may even increase due to more rain and increased snowmelt, but mainly in catchment areas below 1500 m asl (Gerbaux et al. 2020). In the region of Andermatt, the newly built resort, including golf courses, swimming pools, and spas, has generated new water demands that have not yet been quantified. We therefore suggest that the increasing water consumption of the ski resort, as well as these new sources of water demand, be considered in future water management strategies. In addition to the new demands for water and the climatic changes, there are changes in land use in the Ursern valley that affect the surface runoff of the catchment. The progressive abandonment of extensive grazing on alpine grassland is leading to an expansion of shrubland, in particular of green alder. Because of the higher evapotranspiration of green alder and abandoned grassland, this land use change reduces the runoff in the Ursern catchment (Inauen et al. 2013; van den Bergh et al. 2018). However, land use changes primarily affect the runoff during the summer months. Snowmaking usually starts in mid-November or early December and commonly lasts until January. Alaoui et al. (2014) showed that the water discharge of the Ursern valley during these months is clearly dominated by precipitation.
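The reservoir budget above can be checked with a short back-of-envelope calculation. The demand figures come from the text; the simplifying assumptions (no water losses, entire regional demand drawn from the Oberalpsee) are ours, not the authors'.

```python
# Back-of-envelope check of the Oberalpsee budget (volumes in 10^3 m^3/season),
# assuming no losses and that all regional demand is drawn from the lake.

MAX_EXTRACTION = 200.0     # permitted seasonal extraction from the Oberalpsee
reference_demand = 172.3   # Naetschen/Oberalp demand, reference period

end_of_century = reference_demand * (1 + 0.65)   # +65% under RCP8.5
shortfall = end_of_century - MAX_EXTRACTION
print(f"demand ~{end_of_century:.0f}, shortfall ~{shortfall:.0f} x 10^3 m^3")
# -> demand ~284, shortfall ~84, consistent with the "additional water
#    resources in the range of 80 x 10^3 m^3" stated above.
```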
As the amount of winter precipitation is expected to change by −2 to +24% by the end of the century (NCCS 2018), future competition for water resources in winter will likely not be triggered by a decrease in supply, but rather by the increasing water demands of the ski areas. Conclusions The studied ski resort Andermatt-Sedrun-Disentis features high natural snow reliability throughout the twenty-first century, and reductions in natural snow can largely be compensated for by snowmaking. However, under a climate change scenario with unabated emissions, lower areas (below 1800–1900 m asl), as well as the region of Sedrun even above the critical access elevation, will not be snow reliable over the Christmas holidays by the end of the twenty-first century, as the climate will not allow for sufficient snow production. Under this scenario, water consumption will rise by 79% by the end of the century. The largest current water source, the reservoir lake Oberalpsee, will then no longer meet the region's water demand, and new sources such as groundwater and a new reservoir lake will have to be exploited. According to the climate change scenarios (CH2018), it is likely that the water supply during the months of highest water consumption (November until January) will not decrease, but the high consumption may lead to competition with other sectors such as hydropower or the new hotels. Although the overall demand for skiing tourism in Switzerland has been decreasing since 2008 (SBS 2021), the comparative advantage of Andermatt-Sedrun-Disentis (Steiger and Abegg 2018), in combination with the significant expansion of the resort, will likely lead to an increase in tourist numbers.
For many people, holidays in the snow are as much a part of the end of the year as Christmas trees and fireworks. As global warming progresses, however, white slopes are becoming increasingly rare. Researchers at the University of Basel have calculated how well one of Switzerland's largest ski resorts can remain snow reliable with technical snowmaking through the year 2100, and how much water this snow will consume. The future for ski sports in Switzerland looks anything but rosy—or rather white. Current climate models predict that there will be more precipitation in winter in the coming decades, but that it will fall as rain instead of snow. Despite this, one investor recently spent several million Swiss francs on expanding the Andermatt-Sedrun-Disentis ski resort. A short-sighted decision they will regret in the future? A research team led by Dr. Erika Hiltbrunner from the Department of Environmental Sciences at the University of Basel has now calculated the extent to which this ski resort can maintain its economically important Christmas holidays and a ski season of at least 100 days, with and without snowmaking. The team collected data on the aspect of the slopes and on where, when, and with how much water snow is produced at the ski resort. They then applied the latest climate change scenarios (CH2018) in combination with the SkiSim 2.0 simulation software to project snow conditions with and without technical snowmaking. The results of their investigations were recently published in the International Journal of Biometeorology. No guarantee of a white Christmas According to the results, the use of technical snow can indeed guarantee a 100-day ski season—in the higher parts of the ski resort (at 1,800 meters and above), at least. But business is likely to be tight during the Christmas holidays in the coming decades, as the weather at this time, and in the weeks before, is often not cold enough. In the scenario with unabated greenhouse gas emissions, the Sedrun region in particular will no longer be able to offer guaranteed snow over Christmas in the longer term. New snow guns may alleviate the situation to a certain extent, say the researchers, but will not resolve the issue completely. "Many people don't realize that you also need certain weather conditions for snowmaking," explains Hiltbrunner. "It must not be too warm or too humid, otherwise there will not be enough evaporative cooling for the sprayed water to freeze in the air and come down as snow." Warm air absorbs more moisture, so as winters become warmer, it also becomes increasingly difficult or impossible to produce snow technically. In other words: "Here, the laws of physics set clear limits for snowmaking." Technical snowmaking requires certain weather conditions. Credit: Erika Hiltbrunner, University of Basel 540 million liters The skiing will still go on, however, because technical snowmaking at least enables resort operators to keep the higher ski runs open for 100 consecutive days—even up until the end of the century and with climate change continuing unabated. But there is a high price to be paid for this. The researchers' calculations show that water consumption for snowmaking will increase significantly, by about 80% for the resort as a whole. In an average winter toward the end of the century, consumption would thus amount to about 540 million liters of water, compared with 300 million liters today. But this increase in water demand is still relatively moderate compared with other ski resorts, the researchers emphasize.
Earlier studies had shown that water consumption for snowmaking in the Scuol ski resort, for example, would increase by a factor of 2.4 to 5, because the area covered with snow there will have to be greatly expanded in order to guarantee snow reliability. For their analysis, the researchers considered periods of 30 years, but there are large annual fluctuations, and extreme events are not depicted in the climate scenarios. In the snow-scarce winter of 2017, water consumption for snowmaking in one of the three sub-areas of Andermatt-Sedrun-Disentis tripled. Conflicts over water use Today, some of the water used for snowmaking in the largest sub-area of Andermatt-Sedrun-Disentis comes from the Oberalpsee. A maximum of 200 million liters may be withdrawn annually for this purpose. If climate change continues unabated, this source of water will last until the middle of the century, at which point new sources will have to be exploited. "The Oberalpsee is also used to produce hydroelectric power," says Dr. Maria Vorkauf, lead author of the study, who now works at the Agroscope research station. "Here, we are likely to see a conflict between the water demands of the ski resort and those of hydropower generation." At first, this ski resort may even benefit from climate change: if lower-lying and smaller ski resorts are obliged to close, tourists will move to larger resorts at higher altitudes, one of which is Andermatt-Sedrun-Disentis. What is certain is that increased snowmaking will drive up costs and thus also the price of ski holidays. "Sooner or later, people with average incomes will simply no longer be able to afford them," says Hiltbrunner.
10.1007/s00484-022-02394-z
Medicine
Carbs, sugary foods may influence poor oral health, study finds
Amy E. Millen et al, Dietary carbohydrate intake is associated with the subgingival plaque oral microbiome abundance and diversity in a cohort of postmenopausal women, Scientific Reports (2022). DOI: 10.1038/s41598-022-06421-2 Journal information: Scientific Reports , Nature
https://dx.doi.org/10.1038/s41598-022-06421-2
https://medicalxpress.com/news/2022-04-carbs-sugary-foods-poor-oral.html
Abstract Limited research exists on carbohydrate intake and oral microbiome diversity and composition assessed with next-generation sequencing. We aimed to better understand the association between habitual carbohydrate intake and the oral microbiome, as the oral microbiome has been associated with caries, periodontal disease, and systemic diseases. We investigated whether total carbohydrates, starch, monosaccharides, disaccharides, fiber, or glycemic load (GL) were associated with the diversity and composition of oral bacteria in subgingival plaque samples of 1204 postmenopausal women. Carbohydrate intake and GL were assessed from a food frequency questionnaire and adjusted for energy intake. The V3–V4 region of the 16S rRNA gene from subgingival plaque samples was sequenced to identify the relative abundance of microbiome compositional data expressed as operational taxonomic units (OTUs). OTU abundances were centered log(2)-ratio transformed to account for the compositional data structure. Associations between carbohydrate/GL intake and microbiome alpha-diversity measures were examined using linear regression. PERMANOVA analyses were conducted to examine microbiome beta-diversity measures across quartiles of carbohydrate/GL intake. Associations between intake of carbohydrates and GL and the abundance of the 245 identified OTUs were examined using linear regression. Total carbohydrate, GL, starch, lactose, and sucrose intakes were inversely associated with alpha-diversity measures. Differences in beta-diversity across quartiles of total carbohydrates, fiber, GL, sucrose, and galactose were all statistically significant (PERMANOVA p < 0.05). Positive associations were observed between total carbohydrates, GL, and sucrose and Streptococcus mutans; between GL and both Sphingomonas HOT 006 and Scardovia wiggsiae; and between sucrose and Streptococcus lactarius. A negative association was observed between lactose and Aggregatibacter segnis, and between sucrose and both TM7_[G-1] HOT 346 and Leptotrichia HOT 223. Intakes of total carbohydrate, GL, and sucrose were inversely associated with subgingival bacterial alpha-diversity, microbial beta-diversity varied by their intake, and they were associated with the relative abundance of specific OTUs. Higher intake of sucrose, or of high-GL foods, may influence poor oral health outcomes (and perhaps systemic health outcomes) in older women via its influence on the oral microbiome. Introduction The human microbiome plays a critical role in human health and disease 1. In particular, the oral microbiome is associated not only with the health of the mouth, but also with the risk of other chronic diseases (e.g., cardiovascular disease 2, 3, hypertension 4, type 2 diabetes 5, and cancer 6, 7). Understanding the factors (e.g., dietary intake, smoking behavior, medication use, etc.) affecting the composition of the oral microbiome is critical to understanding these observed associations with disease outcomes. Over 700 different species of bacteria have been identified in the oral cavity 8, with, on average, more than 250 different species in any one individual mouth 9. The diversity of the oral microbiome in relation to oral health is complex. For example, previous data show that the alpha-diversity of the microbiome in supragingival plaque samples (where cariogenic pathogens reside) decreases with the severity of caries 10.
In contrast, the alpha-diversity in subgingival plaque samples (where periodontal pathogens reside) increases with increasing severity of periodontal disease 11, 12, and such a relationship was observed in this cohort with the microbiome of our subgingival plaque samples 13. Diet has been shown to be associated with both caries and periodontal disease 14 and is hypothesized to influence the microbial composition and diversity of the saliva and gingival crevicular fluid 15. Fermentable carbohydrates (simple sugars and starch) are significant substrates for bacterial energy metabolism and are broken down both by bacterial enzymes and by endogenous processes in the oral cavity 15. There is evidence that fermentable carbohydrates are essential to the development of dental caries 16. However, the association of carbohydrate intake with periodontal disease is less well studied 17, 18, 19, 20, 21. Few studies have examined habitual intake of dietary carbohydrates in relation to the diversity and composition of the oral microbiome 22, 23, 24. We studied the association between habitual dietary carbohydrate intake and the subgingival plaque oral microbiome in a cohort of 1204 postmenopausal women, using data from the Buffalo Osteoporosis and Periodontal Disease (OsteoPerio) Study, a cohort study ancillary to the Women’s Health Initiative (WHI) Observational Study (OS) 25. The OsteoPerio Study used 16S rRNA gene sequencing of oral plaque samples to identify and measure the relative abundance of the oral bacteria found 26. We hypothesized that the alpha-diversity (within-subject diversity [number of species]) of the oral microbiome would be associated with intake of total carbohydrates, GL, starch, disaccharides (lactose, maltose, sucrose), and monosaccharides (fructose, galactose, and glucose), and that the beta-diversity (between-group diversity) of the oral microbiome would differ across quartiles of intake of all carbohydrates and glycemic load (GL). Methods Study design The OsteoPerio Study is an ongoing prospective cohort 26 and is ancillary to the WHI, a national study focused on health outcomes of postmenopausal women 25. The OsteoPerio Study was initiated to examine the association between osteoporosis and loss of bone in the oral cavity 27. Study participants were recruited from the WHI clinical center in Buffalo, NY between 1997 and 2001; 1,342 women participated in the baseline exam (Supplemental Fig. 1) 26. Women were excluded if they had fewer than 6 teeth, bilateral hip replacement, a history of non-osteoporotic bone disease, a history of cancer within the previous 10 years, or if they were being treated for serious diseases 26. There were 1222 women with sequenced subgingival microbiome and dietary data at baseline; of these, 18 women were excluded because their self-reported energy intakes were > 5000 or < 600 kcals, leaving a sample of 1204 women. All participants provided informed consent, and the study protocol was approved by the University at Buffalo’s Health Sciences Institutional Review Board. All experiments were in agreement with relevant guidelines regarding Human Subjects Research. Assessment of dietary carbohydrate intake Dietary intake was assessed as part of the WHI OS participants’ year 3 visit, which coincided with the OsteoPerio baseline exam 26. A modified Block food frequency questionnaire (FFQ), with 122 main questions and 4 summary questions, was administered to participants, asking them to recall their usual consumption over the previous 3 months 28.
The WHI FFQ has been validated in a study conducted among 113 women in the WHI comparing the FFQ to mean intake from four 24-h recalls and one 4-day food record 28. The energy-adjusted Pearson correlation coefficients for total carbohydrates and total fiber were 0.63 and 0.65, respectively 28. Our main exposures included intake of total carbohydrates, GL, total fiber, soluble fiber, insoluble fiber, starch, disaccharides (lactose, maltose, sucrose), and monosaccharides (fructose, galactose, and glucose). GL reflects both the amount of carbohydrate in a food and its influence on blood sugar. In this study, total carbohydrate intake including fiber, rather than available carbohydrate intake, was used to estimate the GL 29. Carbohydrate intake is presented as the percent of calories from carbohydrate consumed or, in the case of fiber intake and GL, as grams per 1,000 kcal consumed. All analyses used these energy-adjusted variables. Subgingival plaque samples and sequencing A dental examiner performed an oral examination wherein subgingival plaque samples were taken with paper points from 12 index teeth (or substitutes), as described previously 30. Paper points were inserted into the subgingival pockets of each tooth’s mesiobuccal surface, with samples taken from maxillary and mandibular teeth and stored in freezers at −80 °C. The composition and diversity of the oral subgingival microbiome were assessed by 16S ribosomal DNA (rDNA) sequencing with the Illumina MiSeq platform, as previously described 26. Briefly, bacterial DNA was isolated from subgingival samples (maxillary and mandibular samples pooled) with the DSP Virus/Pathogen Mini Kit in the QIAsymphony SP automated system (Qiagen, Valencia, CA). Before DNA extraction, an enzymatic pretreatment was performed for more efficient isolation of Gram-positive bacteria. Metagenomic DNA was subsequently amplified for the 16S rRNA gene hypervariable V3–V4 region, with negative (extraction reagents and microbial-DNA-free water) and positive (subgingival plaque pools and Zymogen mock DNA standard) controls. Three-hundred-base paired-end sequencing (2 × 300) was performed using the MiSeq Reagent Kit V3 on the Illumina MiSeq. Paired-end sequences were joined using the Paired-End read merger (PEAR, version 0.9.6). The joined sequences were then filtered for quality with the Fastx-Toolkit (v0.013) to retain the Illumina paired-end reads that had at least 90% of their bases with a quality score of at least Q30 31. This score means that only 1 out of every 1000 bases may be incorrect 32. Only participant samples that had a minimum of 3,000 reads were included in our analytic sample 31. Following quality filtering, sequences were clustered at 97% identity against the Human Oral Microbiome Database (HOMD) version 14.5 33 with the Basic Local Alignment Search Tool (BLAST), aiming at species-level assignment 34. Finally, in the raw OTU tables, any OTU that had a frequency count of < 0.02% of the total reads was removed from the sample 31. Assessment of additional covariates Participant characteristics including height, weight, and blood pressure were measured in the OsteoPerio clinic by trained examiners. Either as part of the broader WHI OS 25 or the OsteoPerio Study 26, data were collected on women’s age, race/ethnicity, education, medical and oral history, lifestyle and health behaviors, dietary supplement intake, and use of medications, including antibiotics, in the last 30 days.
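To illustrate the quality threshold above: a Phred score Q corresponds to an error probability of 10^(−Q/10), so Q30 means at most one miscalled base per 1,000. The sketch below is a simplified stand-in for the Fastx-Toolkit filter, not its actual implementation; the function names are ours.

```python
# Phred quality arithmetic and a simplified version of the "at least 90% of
# bases >= Q30" read filter described above.

def phred_error_probability(q: int) -> float:
    """Probability that a base with Phred score q is miscalled."""
    return 10 ** (-q / 10)

def passes_q30_filter(qualities: list[int], min_q: int = 30, frac: float = 0.9) -> bool:
    """Keep a read only if at least `frac` of its bases reach `min_q`."""
    ok = sum(1 for q in qualities if q >= min_q)
    return ok >= frac * len(qualities)

print(phred_error_probability(30))          # 0.001, i.e. 1 error per 1000 bases
print(passes_q30_filter([35, 32, 30, 28]))  # False: only 3/4 = 75% of bases >= Q30
```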
Statistical analysis The subgingival microbiome was analyzed using Compositional Data Analysis techniques 35, 36 to avoid spurious correlations arising from the compositional structure of the data. We used the centered log₂-ratio (CLR) transformation, which represents the abundance of each taxon relative to the geometric mean of the sample and is defined by the formula CLR(x) = log₂(x / g(x)), where g(x) is the geometric mean of the vector x 37. We added 1 to all counts because of the existence of some zero values. This removes the zeros and keeps the proportions of non-zero counts close to their natural values. Since we are using a base-2 logarithm, a CLR-transformed abundance of 3 represents a species with 2³ (i.e., 8) times greater abundance than the average within the sample. Hereinafter, the CLR-transformed relative abundance of each OTU is referred to as “relative abundance”. Measures of relative abundance are considered primary endpoints for this analysis, along with measures of alpha- and beta-diversity. A correlation matrix across all carbohydrate variables was computed. Mean carbohydrate intake was described by level of participant characteristics. T-tests and ANOVAs were used to test for significant mean differences across characteristics. We examined the association between carbohydrate intake and three indices of alpha-diversity: observed OTU count, the Chao-1 Index 38, 39 (both representing species richness), and the Shannon Index (representing species evenness) 40, 41. We regressed each alpha-diversity measure on each carbohydrate variable and GL to examine intra-individual microbial diversity in relation to carbohydrate intake. We also tested differences in the beta-diversity of the microbiome by carbohydrate intake by examining measures of Euclidean distance within and between quartile groups of each carbohydrate intake and GL variable using a PERMANOVA test. We visualized the associations by graphing the samples according to the top two principal components explaining variance in our 245 OTUs, color-coding the points by quartile, and drawing 95% content ellipses. We also regressed each OTU’s relative abundance measure on continuous measures of total carbohydrate intake, GL, and each carbohydrate subtype. We present crude models and models adjusted for age, race and ethnicity, frequency of flossing, frequency of brushing, frequency of dental visits, smoking status, pack-years of smoking, and antibiotic use. We also considered models further adjusted for body mass index (BMI) and diabetes status, which may be in the causal pathway between carbohydrate intake and the composition of the subgingival microbiome. Data were missing for smoking status (n = 1), frequency of flossing (n = 5), and pack-years of smoking (n = 27); adjusted models therefore have 1,172 rather than 1,204 participants. Crude and adjusted beta-coefficients, associated standard errors, and p-values for each carbohydrate variable and OTU association are presented. The beta-coefficients represent the difference in the relative abundance of a specific OTU for each one-unit increase in carbohydrate/GL intake. A Bonferroni correction for the p-values was used to account for multiple comparisons (0.05 divided by 245). In exploratory analyses, we also repeated our analyses for total carbohydrate intake further adjusted for sucrose. In this way, we explored to what extent the associations with total carbohydrate intake were explained by simple rather than complex carbohydrate intake.
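The CLR step described above is straightforward to reproduce. A minimal sketch follows, assuming a samples × OTUs count matrix and the pseudocount of 1 described in the text; the array values are invented for illustration.

```python
import numpy as np

# Minimal CLR sketch: add a pseudocount of 1 to remove zeros, then take log2
# of each count relative to the sample's geometric mean, CLR(x) = log2(x/g(x)).

def clr_transform(counts: np.ndarray) -> np.ndarray:
    """counts: samples x OTUs matrix of non-negative read counts."""
    x = counts + 1.0                                 # pseudocount keeps zeros finite
    log2x = np.log2(x)
    geometric_mean_log = log2x.mean(axis=1, keepdims=True)
    return log2x - geometric_mean_log                # equals log2(x / g(x)) per sample

otu_counts = np.array([[120, 0, 40, 8]])             # one sample, four OTUs
print(clr_transform(otu_counts))
# A CLR value of 3 means the OTU is 2^3 = 8 times the sample's geometric mean.
```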
In exploratory analyses, we examined which food groups explained the greatest between-person variation in carbohydrate or GL intake. Only carbohydrate variables found to be significantly associated with microbiome relative abundance were examined. We used forward stepwise regression, with an inclusion threshold of p = 0.10 and an exclusion threshold of p = 0.05, to identify significantly contributing food groups. Results We examined a correlation matrix of all carbohydrate variables and GL. The strongest correlations (≥ 0.70) were seen between total carbohydrates and GL, total fiber, and soluble fiber; between all fiber types (total, soluble, and insoluble); and between fructose and glucose (Supplemental Table 1). With the exception of antibiotic use, all participant characteristics were associated with at least some of the carbohydrate components (Tables 1 and 2). Mean soluble fiber, fructose, and glucose intakes were greater in older compared to younger women. Sucrose intake was highest in Non-Hispanic Black/African Americans and lowest in Hispanic/Latinas. Fructose and glucose intakes were highest in Non-Hispanic Black/African Americans and lowest in Non-Hispanic Whites. The mean intake of total carbohydrate, total fiber, soluble fiber, insoluble fiber, and galactose was greater in women with a post-college education compared to those with less education. Intakes of total carbohydrates, GL, total fiber, soluble fiber, insoluble fiber, fructose, galactose, and glucose were higher in those with a low compared to a high BMI. Never-smokers had the highest intakes of total carbohydrate, GL, total fiber, soluble fiber, insoluble fiber, fructose, and glucose, followed by former smokers and then current smokers. Dietary sucrose and glucose intakes were lower in women reporting diabetes compared to those with no history of diabetes. Table 1 Mean energy-adjusted total carbohydrate intake, glycemic load, fiber intake, and starch intake by category of participant characteristics (n = 1,204). Table 2 Mean energy-adjusted disaccharide and monosaccharide intake by category of participant characteristics (n = 1,204). Mean total carbohydrate, total fiber, soluble fiber, and insoluble fiber intakes were higher in those who brushed more compared to less frequently. Mean total carbohydrate, GL, total fiber, insoluble fiber, lactose, and galactose intakes were higher in participants who flossed more compared to less frequently. Dental visits were associated with total and insoluble fiber intake, with higher fiber intake in those who had visited the dentist more as compared to less frequently. There were 122,631 read pairs generated per sample, 120,032 per sample after merging paired-end sequences, 91,165 reads per sample used for OTU-calling, and 86,972 reads per sample remaining in the OTU table. We identified 245 OTUs in the subgingival plaque samples. Firmicutes was the most abundant phylum identified, accounting for more than 45% of reads within the dataset, followed by Bacteroidetes (17.2%) and Fusobacteria (13.5%). The most abundant species identified were Veillonella dispar and Veillonella parvula, two species from the phylum Firmicutes, neither of which ferments carbohydrates. As intake of total carbohydrate, GL, lactose, and sucrose increased, all three alpha-diversity measures decreased (Table 3), and as starch intake increased, OTU count decreased.
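A minimal sketch of the forward stepwise procedure described at the start of this passage is given below, assuming statsmodels and a pandas data frame of food-group intakes. The entry/retention thresholds mirror the stated 0.10/0.05; the variable names and selection loop are our illustration, not the study's analytic code.

```python
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(y: pd.Series, X: pd.DataFrame,
                     p_enter: float = 0.10, p_stay: float = 0.05) -> list:
    """Greedy forward selection with a backward check: a food group enters
    while its p-value is below p_enter; previously entered groups are
    dropped (and not reconsidered) once their p-value exceeds p_stay."""
    selected, dropped = [], set()
    while True:
        remaining = [c for c in X.columns if c not in selected and c not in dropped]
        if not remaining:
            break
        # p-value of each candidate when added to the current model
        pvals = {c: sm.OLS(y, sm.add_constant(X[selected + [c]])).fit().pvalues[c]
                 for c in remaining}
        best = min(pvals, key=pvals.get)
        if pvals[best] >= p_enter:
            break
        selected.append(best)
        # drop previously selected terms that no longer meet the stay criterion
        fit = sm.OLS(y, sm.add_constant(X[selected])).fit()
        for c in [c for c in selected if c != best and fit.pvalues[c] >= p_stay]:
            selected.remove(c)
            dropped.add(c)
    return selected
```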
Adjustment of the total carbohydrate model for sucrose intake attenuated the associations with alpha-diversity measures (data not shown); however, the associations remained statistically significant. Microbial beta-diversity was found to be statistically significantly different by quartile of total carbohydrates, fiber (total, soluble, and insoluble), GL, sucrose, and galactose intake (PERMANOVA p < 0.05). Supplemental Fig. 2 plots study participants along the top two OTU principal components by quartile of GL, the variable with the smallest PERMANOVA p-value. Table 3 Beta-coefficients, standard errors (SE), and associated p-values for regression of alpha-diversity measures on carbohydrate intake (n = 1204). We examined continuous intake of total carbohydrates, GL, and carbohydrate subtypes in relation to the relative abundance of all 245 OTUs (Table 4). The beta-coefficients, standard errors, and p-values for each association examined are shown with no adjustment (crude); with adjustment for age, race and ethnicity, frequency of flossing, frequency of brushing, frequency of dental visits, smoking status, pack-years of smoking, and antibiotic use (Model 1); and with further adjustment for BMI and diabetes status (Model 2). In adjusted models, after correction for multiple comparisons, there were significant associations between intake of total carbohydrates, GL, lactose, and sucrose and the relative abundance of at least one OTU. The relative abundance of Streptococcus mutans was positively associated with total carbohydrate, GL, and sucrose intake in all models. The association between Streptococcus mutans and total carbohydrate intake in Model 2 was not statistically significant after further adjustment for sucrose intake (data not shown). We also observed a positive association between GL and Sphingomonas HOT 006 in Model 1, and in Model 2 we observed a positive association between GL and both Sphingomonas HOT 006 and Scardovia wiggsiae. In Models 1 and 2, we observed a negative association between lactose and Aggregatibacter segnis, and a negative association between sucrose and both TM7_[G-1] HOT 346 and Leptotrichia HOT 223. A positive association between sucrose and Streptococcus lactarius was observed only in Model 2. Results for all dietary carbohydrate variables and OTUs associated at a p-value of < 0.05 are presented in Supplemental Table 2. Table 4 Linear regression of the relative abundance of oral OTUs on carbohydrate intake and glycemic load, with beta-coefficients (ß), standard errors (SE), and associated p-values for each dietary variable (n = 1,204). Exploratory analyses (Supplemental Table 3) identified which food groups, from a list of 122 food groups on the FFQ, explained at least 80% of the variation in total carbohydrate, GL, lactose, and sucrose intake. Twenty-four of the 122 food groups were identified. We summarized these foods into the following descriptive groups: (1) grains and baked goods; (2) starchy vegetables, fruit, and cooked tomatoes; (3) sugary drinks; (4) added sugar, candy, frozen desserts, and pudding-type desserts; and (5) dairy products. Discussion In this analysis of postmenopausal women, we observed that intakes of total carbohydrates, GL, starch, lactose, and sucrose were negatively associated with the alpha-diversity of our microbiome measures; increased intake was associated with lower intra-individual diversity.
We also observed differences in the diversity of the oral microbiome across levels of intake of total carbohydrates, fiber, GL, sucrose, and galactose (beta-diversity). Intakes of total carbohydrates, GL, and sucrose were positively associated with the relative abundance of Streptococcus mutans, a bacterium with an expanded repertoire of carbohydrate-metabolizing genes 42, 43. We also observed a positive association between GL and the relative abundance of both Sphingomonas HOT 006 and Scardovia wiggsiae, and between sucrose and Streptococcus lactarius. We observed a negative association between lactose and Aggregatibacter segnis, and between sucrose and both TM7_[G-1] HOT 346 and Leptotrichia HOT 223. To the best of our knowledge, this is one of the first epidemiologic studies to examine associations between habitual carbohydrate intake and subgingival, rather than salivary, microbiome samples; we found that carbohydrate intake is associated with the subgingival microbiome. Minimal research on habitual carbohydrate intake and the oral microbiome has been conducted in humans. In a study in children, there were significant differences in the relative abundance of 18 species from the biofilm of occlusal surfaces by fermentable carbohydrate consumption assessed using an FFQ 23. They did not identify Streptococcus mutans as one of the 18 species. They did identify Aggregatibacter segnis, which we observed to be associated with lactose intake, and Rothia mucilaginosa, which we identified as related to total carbohydrate intake in crude analyses. In a study of Danish adults (aged 20 to 81 years), there were no significant differences in salivary bacterial species by intake of energy-adjusted carbohydrates or by the proportion of carbohydrates from sugar 22. In a three-week carbohydrate intervention study of 21 athletes, no significant differences in salivary microbial composition were observed 24. Our analysis does not fully capture associations between carbohydrate intake and all niches of the oral microbiome, because we exclusively examined the microbiome in subgingival plaque samples, unlike the previous studies of habitual carbohydrate intake that examined the microbiome of the saliva or the biofilm of occlusal surfaces 22, 23, 24. We found no evidence of Lactobacillus, known to be associated with caries risk 44, in our subgingival plaque samples after we filtered out low-abundance OTUs. Lactobacillus, highly abundant in saliva, is not found as frequently in the subgingival microbiome 45. Several studies have identified Streptococcus mutans in subgingival microbiome samples, similar to our study 46, 47. Carbohydrates are likely accessible to a different composition of bacteria in subgingival, anaerobic conditions compared to the salivary environment 48. Despite likely differences in the microbiome of the saliva and subgingival plaque, studies have detected periodontal pathogens in both media, concluding that there is some overlap between these two microbiomes 49, 50. The previous studies’ use of salivary or occlusal surface samples, and differences in participants’ ages, likely explain the differences between our results and previous findings. As expected, we found a number of associations between sucrose intake and the subgingival microbiome. Sucrose can be broken down into glucose and fructose and taken up by the bacteria, or it can be cleaved inside the bacterial cell by bacterial enzymes 15. Starches can be broken down by human salivary amylase or by bacterial amylases.
Certain streptococci, such as Streptococcus gordonii and Streptococcus mitis, can bind amylase to metabolize starch, while other bacteria, such as Streptococcus mutans, have enzymes of their own capable of metabolizing starch 15. Once broken down into simple sugars, sucrose can be transported into the bacterial cell for energy production 15. Experimental studies show that increasing sugar and fermentable carbohydrate intake increases the prevalence of caries 51 and that frequent sucrose consumption is associated with decreased species diversity and increased relative abundance of certain Streptococcus spp. in the oral biofilm 52. Our results support the existing evidence that certain fermentable carbohydrates (e.g., sucrose) promote the growth of cariogenic oral bacteria, such as Streptococcus mutans 16, 53. We also observed that increased carbohydrate intake was associated with decreased alpha-diversity, similar to other studies 23, 52, 54. The association of carbohydrate intake with periodontal disease, rather than caries, is less well studied 17, 18, 19, 20, 21. There is evidence of associations between increased carbohydrate intake and increased gingival bleeding 17, 55, and of positive associations between diets high in percent of calories from carbohydrates and rates of periodontal disease 21. Leptotrichia spp., which we observed to be positively associated with sucrose intake, has been shown to be associated with gingivitis in some studies 12, 56. The other bacteria we identified as associated with carbohydrate intake or GL have not been previously appreciated as contributing to periodontal disease in the literature 12 or in this cohort 13. There is evidence that fiber intake is associated with decreased risk of periodontal disease progression markers 18, 20, 57. In a previous study, the oral microbiome (from extracted mouse jaws) of mice fed sugar and fiber pellets, compared to mice fed sugar pellets alone, was lower in Streptococcus, Staphylococcus, Lactobacillus, and Enterococcus, and greater in alpha-diversity 58. This suggests that the effect of fiber consumption may result from mechanical disruption of the oral microbiome by fiber. We did not find any significant differences in alpha-diversity or in the relative abundance of any of the measured bacterial species by differing fiber intake. It may be that any effect of fiber on the oral microbiome is less important in a cohort of women who frequently brush their teeth. The relationship between carbohydrate intake and the relative abundance of bacteria is not defined solely by whether a certain bacterium has the metabolic capability to utilize a carbohydrate. If two types of sugar are available, some bacteria may preferentially utilize one sugar over the other, as they possess regulatory mechanisms for carbohydrate metabolism 59. Therefore, we may not see strong relationships with certain types of sugar if both are present and bacteria prefer one over the other. Additionally, bacteria can take up sugars that have been cleaved by other bacteria or by salivary amylase 60. Therefore, even if bacteria do not possess the metabolic capability to cleave a certain sugar, they may still be able to utilize its components, which is why we may see a relationship with a certain type of carbohydrate even if the bacteria cannot metabolize it. We also identified the top contributing food sources of total carbohydrate, GL, sucrose, and lactose in a cohort of postmenopausal women.
Our findings suggest that attention to dental hygiene should occur after consumption of these foods (e.g., baked goods, added sugar, candy, milk, etc.). This is in alignment with the American Dental Association’s guidelines on diet and nutrition, which state “that oral health depends on proper nutrition and healthy eating habits, and necessarily includes avoiding a steady diet of foods containing natural and added sugars, processed starches and low pH-level acids…” 61. A recent dietary intervention (n = 11 adults, average age 32 years) showed that milk and yogurt consumption, as compared to sucrose intake, resulted in less growth of cariogenic bacteria 62. Continued research needs to be conducted to better understand the influence of carbohydrate-containing foods, which also contain other nutrients, on the oral microbiome. Our study has several limitations. Because it was cross-sectional, we cannot make any assumptions about temporality or causality. FFQs, although useful in that they assess habitual dietary intake, are prone to social desirability bias and often underestimate energy intake 63. We adjusted for energy intake in an attempt to minimize measurement error and the underestimation of energy intake 64. The measure of relative abundance is also not without its shortcomings 65. Because relative abundance relies on the proportion of the bacteria rather than their absolute number, the measure may induce spurious correlations 65. However, this limitation is minimized here by adopting Compositional Data Analysis techniques, such as the use of the CLR transformation. Another limitation is that we were unable to examine our oral microbial compositions by anterior versus posterior teeth or by teeth in the upper (maxillary) versus lower (mandibular) jaw arches. This is because we stored plaque samples from all maxillary teeth together and from all mandibular teeth together and then combined these plaque samples prior to sequencing them for bacterial DNA. We did not do an internal assessment of the reliability of our results. We also corrected for multiple testing for 245 OTUs, but did not further correct for multiple testing across our 11 carbohydrate variables and GL. The age distribution of our participants could be considered a limitation. However, the postmenopausal age range gave us an opportunity to examine these effects in a subpopulation where the association between carbohydrates and the oral microbiome has not been previously studied. Findings may be different in samples with different ages; a broader age group might have allowed for examination of how more varied intake of carbohydrates might affect the oral microbiome over the lifespan. Despite its limitations, this study has important strengths. It is the first study to examine carbohydrate intake and the subgingival microbiome in a sample consisting exclusively of postmenopausal women. We examined many subtypes of carbohydrate, and GL, in order to better understand which carbohydrate components have the strongest associations with the subgingival microbiome. We were able to control our analyses for potential confounding factors including oral hygiene, smoking, and antibiotic use. The selection of our participants into the OsteoPerio study is another strength: they were not selected based on disease status or dietary intake, which would have made our results less generalizable.
In conclusion, our findings suggest that intakes of total carbohydrate and GL, as well as of the disaccharides sucrose and lactose, are inversely associated with bacterial alpha-diversity in the subgingival microbiome. Furthermore, the beta-diversity of the microbiome varied by total carbohydrates and GL, but also by certain carbohydrate subtypes (sucrose, galactose, and fiber); and we observed intakes of total carbohydrates, GL, sucrose, and lactose to be significantly associated with the relative abundance of specific OTUs approximating bacterial species. Further study of food group intake and dietary patterns will contribute to our understanding of the extent to which the oral microbiome varies in association with carbohydrate consumption, and the extent to which these differences are associated with periodontal disease, oral health, and the influence of oral health on systemic health. Data availability Data, codebook, and analytic code used in this report may be accessed in a collaborative mode as described on the Women’s Health Initiative website ( ). Sequence data are also uploaded to the NCBI Sequence Read Archive (SRA) database under BioProject ID PRJNA796273.
The foods we eat on a regular basis influence the makeup of the bacteria—both good and bad—in our mouths. And researchers are finding that this collective of bacteria known as the oral microbiome likely plays a large role in our overall health, in addition to its previously known associations with tooth decay and periodontal disease. Scientists from the University at Buffalo have shown how eating certain types of foods impacts the oral microbiome of postmenopausal women. They found that higher intake of sugary and high-glycemic-load foods—like doughnuts and other baked goods, regular soft drinks, breads and non-fat yogurts—may influence poor oral health and, perhaps, systemic health outcomes in older women due to the influence these foods have on the oral microbiome. In a study in Scientific Reports, an open access journal from the publishers of Nature, the UB-led team investigated whether carbohydrates and sucrose, or table sugar, were associated with the diversity and composition of oral bacteria in a sample of 1,204 postmenopausal women, using data from the Women's Health Initiative. It is the first study to examine carbohydrate intake and the subgingival microbiome in a sample consisting exclusively of postmenopausal women. The study was also unique in that the samples were taken from subgingival plaque, which occurs under the gums, rather than from saliva. "This is important because the oral bacteria involved in periodontal disease are primarily residing in the subgingival plaque," said study first author Amy Millen, Ph.D., associate professor of epidemiology and environmental health in UB's School of Public Health and Health Professions. "Looking at measures of salivary bacteria might not tell us how oral bacteria relate to periodontal disease because we are not looking in the right environment within the mouth," she added. The research team reported positive associations between total carbohydrates, glycemic load and sucrose and Streptococcus mutans, a contributor to tooth decay and some types of cardiovascular disease, a finding that confirms previous observations. But they also observed associations between carbohydrates and the oral microbiome that are not as well established. The researchers observed that Leptotrichia spp., which some studies have associated with gingivitis, a common gum disease, was positively associated with sugar intake. The other bacteria they identified as associated with carbohydrate intake or glycemic load have not been previously appreciated as contributing to periodontal disease in the literature or in this cohort of women, according to Millen. "We examined these bacteria in relation to usual carbohydrate consumption in postmenopausal women across a wide variety of carbohydrate types: total carbohydrate intake, fiber intake, disaccharide intake, and simple sugar intake," Millen said. "No other study had examined the oral bacteria in relation to such a broad array of carbohydrate types in one cohort. We also looked at associations with glycemic load, which is not well studied in relation to the oral microbiome." The key question now is what this all means for overall health, and that's not as easily understood just yet. "As more studies are conducted looking at the oral microbiome using similar sequencing techniques and progression or development of periodontal disease over time, we might begin to make better inferences about how diet relates to the oral microbiome and periodontal disease," Millen said.
10.1038/s41598-022-06421-2
Biology
Study develops new way of identifying cancer cells
Mi K. Trinh et al, Precise identification of cancer cells from allelic imbalances in single cell transcriptomes, Communications Biology (2022). DOI: 10.1038/s42003-022-03808-9. www.nature.com/articles/s42003-022-03808-9 Journal information: Communications Biology
https://dx.doi.org/10.1038/s42003-022-03808-9
https://phys.org/news/2022-09-cancer-cells.html
Abstract A fundamental step of tumour single cell mRNA analysis is separating cancer and non-cancer cells. We show that the common approach to separation, using shifts in average expression, can lead to erroneous biological conclusions. By contrast, allelic imbalances representing copy number changes directly detect the cancer genotype and accurately separate cancer from non-cancer cells. Our findings provide a definitive approach to identifying cancer cells from single cell mRNA sequencing data. Introduction Single cell mRNA sequencing has enabled transcriptomic profiling of tumours and their environment, with data being generated across the entire spectrum of human cancer. Studying cancer transcriptomes depends on accurate identification of cancer cells. Therefore, the foundational step of tumour single cell analyses is separating cancer from non-cancer cells. The simplest approach to identifying cancer cells is to use the expression of cancer-specific marker genes. However, such genes do not always exist and are generally insufficiently precise, especially without corroborating readouts such as cellular morphology. Another approach is to infer the presence of tumour-defining somatic copy-number changes from shifts in average expression 1, 2. The idea here is that gains or losses of genomic regions will generally increase or decrease, respectively, the expression level of genes in these regions. Challenges with this approach include smoothing and denoising expression changes, establishing a baseline against which to measure shifts in expression, segmenting the genome, and identifying changes in expression not due to copy-number changes. Despite these challenges, both marker genes and shifts in average expression, which we collectively refer to as “expression-based annotation”, may accurately identify cancer cells in certain circumstances. However, if there is any novelty or ambiguity in the identity of cancer cells, then these two approaches are inherently fallible, as they are both based on expression and not on direct evidence that a cell is cancerous, i.e. that it carries the somatic cancer genome. For example, there has been historical controversy about which cell types are malignant in neuroblastoma, a childhood cancer that arises from peripheral sympathetic nervous lineages. In addition to unambiguous cancer cells, neuroblastomas often harbour stromal cells, composed of Schwannian stroma or mesenchymal cells. It has been suggested that these stromal cell types represent cancer lineages, although a rich body of evidence, including cytogenetic investigations, has not supported this proposition 3. Recent single cell mRNA studies of neuroblastoma have rekindled the debate on the basis of expression-based cancer cell identification 4. Although neuroblastoma is an exemplar of the difficulties in annotating single cell tumour transcriptomes, the same problems are common to tumours with complex histology or unresolved origins. Even among tumours with well-defined origins, the variability inherent to all cancer can make annotation challenging. The alternative to expression-based annotation is direct detection of either cancer-defining (i.e. somatic) point mutations or copy-number aberrations from the nucleic acid sequences of each transcriptome, which we pursued here. Such approaches utilise additional information from whole genome/exome sequencing of tumour DNA to detect the altered genotype or the allelic imbalance it creates.
More specifically, sequencing of tumour DNA is used to identify regions of copy-number change shared by all cancer cells. Within these regions the B-allele frequency or BAF, defined as the fraction of reads from the non-reference allele, will differ from the value of 0.5 that characterises normal cells (Fig. 1a). The altered BAF is then used to phase together heterozygous bases across the altered region, and the nucleotide sequences underlying single transcriptomes can be interrogated for these cancer-defining shifts. The principle of using shifts in BAF has previously been used to detect de novo copy-number changes in single cell data 5 , 6 . Here we leverage the extra information provided by tumour DNA sequencing to use shifts in BAF to precisely identify single cancer cell transcriptomes. Fig. 1: Overview of different approaches to identifying cancer-derived cells. a Genomic changes present in cancer genomes. b Number of cells (y-axis) with N reads covering point mutations (x-axis), separated by low (NB, neuroblastoma) and high (RCC, renal cell carcinoma) mutation burden. c Number of cells (y-axis) with N reads covering heterozygous single nucleotide polymorphisms (SNPs) (x-axis). d Overview of using allelic shifts representing copy-number changes to detect cancer cells. Results Briefly, our method, which we call alleleIntegrator, works as follows. Firstly, whole genome or exome sequencing is performed on tumour DNA. From this, regions of copy-number change are identified, using established methods such as ASCAT 7 , along with germline heterozygous single nucleotide polymorphisms (SNPs) within altered regions. The alleles with frequency significantly greater than 0.5 (binomial test) are phased together and collectively designated the “major allele”. The allele frequency of all phased heterozygous SNPs within copy-number altered regions is then measured in each single cell transcriptome. Finally, the posterior probabilities of the normal genotype (where all alleles have a BAF of 0.5) and the cancer genotype (where the BAF of each allele matches that implied by the copy-number status of the cancer) are calculated. It is possible that allelic shifts may result from allele-specific expression rather than copy-number change. To control for this, we exclude genes known to be imprinted or to have allele-biased expression (e.g. HLA genes), model any residual allele-specific expression using the data, and only consider large regions spanning multiple genes. Cells with a posterior probability for either genotype exceeding some threshold (set to 99% throughout this paper) are designated as cancer or normal cells accordingly, with all other cells designated as unassigned. To test approaches used to identify cancer cells, we generated or downloaded droplet-based 3′ single cell transcriptomes from 13 individuals and 5 tumour types: renal cell carcinoma (RCC), neuroblastoma, Wilms tumour, Ewing’s sarcoma, and atypical teratoid rhabdoid tumour (AT/RT) 8 , 9 , 10 (Supplementary Table 1 ). We first tested whether detection of cancer-specific point mutations would identify cancer transcriptomes. Across all samples, the majority of cells had no reads covering a point mutation (Fig. 1b), with on average 9.7 reads per ten thousand point mutations per cell (range 0 to 556). This implies that identifying cancer cells from point mutations is possible, but depends on the mutation burden being high and the cost of false negatives being low.
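The expected size of these allelic shifts follows directly from the copy-number state. As a minimal illustration in R (idealised values, ignoring sequencing error and allele-specific expression):

```r
# Expected minor-allele fraction at phased heterozygous SNPs for a few
# copy-number states, as sketched in Fig. 1a. Idealised: no sequencing
# error, no allele-specific expression.
baf <- function(minor_copies, total_copies) minor_copies / total_copies

baf(1, 2)  # copy-number neutral diploid: 0.5
baf(1, 3)  # gain of one copy of the major allele: ~0.33
baf(0, 1)  # loss of the minor allele (LoH): 0
baf(0, 2)  # copy-neutral LoH: 0
```

Any consistent departure of the aggregate allele fraction from 0.5 across a phased region is therefore direct evidence of the cancer genotype; the practical question is how many informative reads per transcriptome cover such sites.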
In contrast to point mutations, an average of 1522 reads per cell covered heterozygous SNPs, implying 0.5 informative reads per megabase per transcriptome (Fig. 1c). As copy-number changes may alter the allelic ratio, these data can be used to detect the cancer genotype (Fig. 1d). This implies that a loss of heterozygosity (LoH) of 19.7 megabases or more should be detectable in single transcriptomes (assuming a binomial distribution and 99% accuracy). Next, we compared the performance of cancer transcriptome identification using expression- and nucleotide-based copy-number detection. For each patient we ran three copy-number detection methods: CopyKAT 2 , inferCNV 1 , and a statistical model based on allelic ratios 8 , which we named alleleIntegrator. We evaluated how well each method recovered the true copy-number profile and cancer cell transcriptomes. As inferCNV does not call cancer cell transcriptomes, we evaluated this method on its copy-number profile only. We first considered RCC, an adult kidney cancer in which the cancer cell transcriptome can be definitively identified from the tumour marker CA9 , whose expression is driven by the near-universal disruption of the VHL gene that underpins RCC 11 (Fig. 2a). For each individual, we used single cell transcriptomes from both tumour biopsies expressing CA9 and macroscopically and histologically normal tissue biopsies from uninvolved regions of the kidney that did not express CA9 . This guaranteed that the assumption of inferCNV and CopyKAT, namely that a mixture of cancer and normal transcriptomes is present, was satisfied. Despite this, CopyKAT’s expression-based identification classed 98% (2953 cells) of proximal tubular cells derived from normal tissue as cancerous, compared to 0.2% (4 cells) identified as cancer-derived by allelic ratio (Fig. 2b, Supplementary Fig. 1). As proximal tubular cells are the probable cell of origin for RCC, it is likely that expression-based copy-number inference incorrectly identified proximal tubular cells as cancer-derived due to their transcriptional similarity to RCC cells. Amongst the 1718 verified cancer cells, expression-based identification called 1096 as tumour and 35 as normal, while alleleIntegrator identified 712 as tumour and 41 as normal (with the remaining cells unassigned). Fig. 2: Comparison of cancer cell annotation and copy-number profile using allelic-ratio and expression-based approaches. a UMAP of RCC single cell transcriptomes showing patient (shading), cell type (contours and labels), and patient composition (barplots). Inset shows expression of the RCC marker CA9 . PTC, proximal tubular cells derived from normal biopsies. b Cancerous (red) and non-cancerous (grey) cell fraction, excluding ambiguous cells, by cell type (x-axis) and sample/region (y-axis), called by CopyKAT (left) or alleleIntegrator (right). c Copy-number profile for PD37228 tumour (left) and proximal tubular (right) clusters from normalised averaged expression (top panels, solid black line) and allelic ratio (bottom panel, one dot per bin with ~500 reads), with true copy-number changes from DNA (red shading). d–f As per (a–c) but for neuroblastoma. We next assessed how well the inferred copy-number profiles matched the ground truth (somatic copy-number profiles obtained from whole genome sequences) at the chromosome level. There is good visual agreement between the ground truth profile and allelic ratios, while both CopyKAT and inferCNV exhibit deviations from the expected values (Fig. 2c, Supplementary Fig. 2).
To quantify this comparison, we designated regions as changed or neutral based on an expression cut-off, which we compared to the ground truth. Varying this cut-off produced a receiver operating characteristic (ROC) curve for each method, with average area 0.97 for alleleIntegrator, 0.87 for CopyKAT, and 0.74 for inferCNV (Supplementary Fig. 3). In aggregate, these analyses demonstrate the potential for expression-based methods to misidentify normal cells as cancerous, illustrating their shortcomings in identifying novel cancer cell types. As a contrast to RCC, we tested cancer transcriptome identification on single cell transcriptomes from neuroblastomas, which have no definitive single marker equivalent to CA9 in RCC (Fig. 2d). As before, both expression- and allelic ratio-based approaches identified tumour cells accurately (Fig. 2e, Supplementary Fig. 1). As neuroblastoma lacks a definitive marker gene, it cannot be known whether cancer cell transcriptomes have been captured before expression-based copy-number inference is run. To consider what would happen if an experiment did not capture cancer cells, we ran all methods on sample PD42184, which is derived from a normal adrenal gland and therefore contains no tumour cells. Expression-based copy-number inference predicted 1926 cancer-derived cells, including mesenchymal cells (Fig. 2e). By contrast, these cells are identified as normal based on their allelic ratio (Fig. 2e). The expression-based copy-number profiles are also consistent with the mesenchymal cells being cancer-derived, with shifts in average expression on chromosomes 1, 2, 3 and 12 (Fig. 2f, Supplementary Fig. 4). As with RCC, this was part of a larger pattern in which expression-based profiles only weakly matched the ground truth, while allelic ratios captured the truth with high accuracy despite the complex copy-number profiles, yielding average ROC areas of 0.89 for alleleIntegrator, 0.28 for CopyKAT, and 0.28 for inferCNV (Supplementary Fig. 3). Overall, this demonstrates the risk of drawing erroneous biological conclusions, in this case that mesenchymal cells are cancer-derived, when relying on expression-based copy-number inference of cancer transcriptomes. As an extended test, we next considered three additional tumour types: Wilms tumour, AT/RT, and Ewing’s sarcoma. In contrast to RCC and neuroblastoma, both CopyKAT and alleleIntegrator correctly identified leucocytes and endothelial cells as not cancer-derived (Fig. 3a, Supplementary Fig. 5). However, CopyKAT incorrectly identified the majority of Wilms tumour cells as normal (Fig. 3a). This is likely driven by the heterogeneous nature of Wilms tumour, which produces multiple populations of transcriptionally and histologically distinct cancer cells. Next, we compared each method’s copy-number profile to the ground truth, calculating the sensitivity and specificity with which each method identified neutral/altered genomic regions (Fig. 3b, Supplementary Fig. 3). We found similar levels of performance for both expression-based methods, both of which performed poorly compared to allelic ratios (Fig. 3b). We next asked how clearly regions of gain and loss could be separated from one another by each of the three methods. Looking across all samples, we found that the distributions of expression values for regions with no change, copy-number gain, and copy-number loss strongly overlapped (Fig. 3c, Supplementary Fig. 3).
By contrast, each of these three types of region produced clearly separated peaks in the distribution of allelic ratios (Fig. 3c). Across our tests, we found both expression-based copy-number callers to perform similarly and to have highly correlated outputs (Supplementary Fig. 6). Therefore, the properties of expression-based copy-number callers are likely general, not specific to inferCNV and CopyKAT. Fig. 3: alleleIntegrator accurately recovers copy-number profile and clonal structure for a wide range of tumour types. a Fraction of cells called cancerous (red) and non-cancerous (grey), excluding ambiguous cells, by cell type (x-axis) and tumour type (y-axis), called by alleleIntegrator (left) or CopyKAT (right). b Receiver operating characteristic (ROC) curve for all individuals measuring the sensitivity and specificity with which different methods (line type) recover the true copy-number profile. The table on the right shows the total area under the curve for each method. c Distribution across all individuals and regions of allelic ratios (left) or averaged expression values (middle and right) in 5 megabase regions that contain copy-number gains (dark shading), losses (intermediate shading), or are copy-number neutral (white). d Allelic ratios (y-axis) across the genome (x-axis) from bulk tumour DNA (top panel), cells assigned to the major clone (middle panel), and cells assigned to the minor clone (bottom panel). Beyond distinguishing cancer and normal cells, the high precision of copy-number genotyping by allelic ratios may lend itself to the identification of minor cancer cell populations (subclones) defined by copy-number aberrations. We investigated cancer subclone identification in a neuroblastoma (PD46693) that harboured a minor clone, comprising ~30% of cells, defined by copy-number neutral loss of heterozygosity of chromosome 4. AlleleIntegrator identified 389/1282 sub-clonal cells with a posterior probability of more than 99% (Fig. 3d). These cells are transcriptionally extremely similar, with only 95 genes and 7 transcription factors significantly differentially expressed between the major and minor clones (Supplementary Fig. 7, Supplementary Tables 2, 3). Amongst these genes were the neuroblastoma-associated genes NTRK1, BCL11A , TH and CHGB , as well as HMX1 , a transcription factor on chromosome 4 that is a master regulator of neural crest development. Although we would not claim that these genes collectively or individually are the definitive target of the sub-clonal copy-number change, this analysis illustrates the power of our approach in deriving functional hypotheses about copy-number changes. This is particularly pertinent in neuroblastoma, where clinical risk is defined by segmental copy-number changes that remain functionally cryptic 12 . Discussion We have shown that allelic imbalances representing cancer-defining somatic copy-number changes can precisely identify single cancer cell transcriptomes. A prerequisite of this approach, which limits its application, is the presence (and knowledge) of somatic copy-number changes. We consider the main utility of our approach to lie in corroborating or refuting claims of novel cancer cell types and in investigating the functional consequences of sub-clonal copy-number changes. We found expression-based copy-number detection tools to produce highly correlated results, suggesting that the limitations are general to the approach, not specific to the implementation.
Where direct nucleotide interrogation is not feasible, the expression of marker genes and the detection of average shifts in expression with tools such as CopyKAT may still provide a reasonable basis for indirectly inferring which single cell transcriptomes possess the somatic cancer genotype. However, our observations caution against identifying novel cancer cell types through such approaches alone, without direct interrogation of the underlying nucleotide sequence. Accordingly, our findings suggest that it may be warranted to reappraise recent claims of novel cell types in a variety of cancers, such as neuroblastoma, that were based solely on expression-based cancer cell identification. Methods Identifying cancer cells using allelic ratio To identify cancer cell transcriptomes, we used a Bayesian statistical framework 8 , 9 , implemented in an R package, alleleIntegrator. The calling of cancer cell transcriptomes has four steps (Fig. 1a): 1. Call copy-number changes and heterozygous SNPs. 2. Phase heterozygous SNPs within regions of altered copy-number using tumour DNA. 3. Count reads supporting the major/minor allele in each copy-number segment/transcriptome. 4. Calculate the posterior probability of the cancer and normal genotypes. The precise step-by-step implementation is contained in the provided code, and a detailed description of each step is given below. Calling heterozygous SNPs and copy-number changes We identified copy-number (CN) states using Battenberg 13 applied to whole genome sequencing of tumour DNA. SNPs were called using bcftools mpileup/call to find sites with reads supporting two alleles and a BAF between 0.2 and 0.8. Sites inconsistent with heterozygosity were excluded using a binomial test with 5% FDR 14 . Alternatively, CN states and heterozygous SNP locations can be provided by other methods. Phasing heterozygous SNPs in copy-number region(s) Using alleleCount ( ), we counted reads supporting each allele in tumour DNA in regions of uneven CN (i.e. where the numbers of maternal and paternal copies differ). The reference (or alternate) allele was assigned to the minor allele when the BAF was greater than (or less than) 0.5. Sites not significantly different from 0.5 (binomial test, 5% FDR 14 ) were excluded. Counting reads by allele in each transcriptome At each phased SNP, we calculated the counts supporting the major and minor allele for each transcriptome using alleleCount in 10X mode (−x flag). These were summed by segment/transcriptome, producing a table of major and minor allele counts for each transcriptome and copy-number segment. Calculating posterior probability of cancer genotype We aimed to compare two possibilities: that the cell contains the cancer genotype or the normal genotype. To this end, we constructed a model that accounts for the major known error processes and properties of transcription: errors can alter the observed allele, transcription occurs in bursts, and transcription can exhibit allelic bias. We used a beta-binomial likelihood, where the overdispersion captures extra variability due to transcriptional bursts. We first filter out SNPs that are imprinted (i.e. only ever express one allele), are not intronic or exonic, or have zero coverage. This filtering is most accurate when cells with the normal genotype can be specified (e.g. leucocytes in a solid tissue tumour). We also exclude genes known to display complex allele-specific expression (ASE), specifically HLA and HB genes.
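As a minimal sketch of the phasing step described above (illustrative code, not the alleleIntegrator API; input names are assumptions):

```r
# Phase heterozygous SNPs within a copy-number-altered region using tumour
# DNA allele counts: keep SNPs whose BAF departs significantly from 0.5
# (binomial test, 5% FDR) and assign the major allele by the direction of
# the shift.
phase_snps <- function(ref_counts, alt_counts, fdr = 0.05) {
  total <- ref_counts + alt_counts
  baf   <- alt_counts / total
  pvals <- mapply(function(x, n) binom.test(x, n, p = 0.5)$p.value,
                  alt_counts, total)
  keep  <- p.adjust(pvals, method = "BH") < fdr
  data.frame(major_allele = ifelse(baf > 0.5, "alt", "ref"),
             baf = baf)[keep, , drop = FALSE]
}

# Example: three strongly shifted SNPs and one balanced SNP (excluded)
phase_snps(ref_counts = c(2, 40, 1, 25), alt_counts = c(38, 3, 45, 24))
```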
We specify a site-specific error rate of 0.01 for exonic reads and 0.05 for intronic reads, calibrated by counting non-reference reads at sites homozygous for the reference. After filtering, we calculate the posterior probability of allele-specific expression in normal cells for each gene, using a beta distribution prior with mean 0.5 and spread set manually or to the best-fit value from highly expressed genes (by default, genes with >400 counts). Where normal cells are not given, both alleles are considered equally likely. Next, we calculate the maximum likelihood value of the beta-binomial overdispersion from normal cells using the error rate and ASE values derived above. We optionally marginalise this estimate over the ASE posterior distribution, although we find this step makes no difference to the final estimate. Where normal calls are not given, the overdispersion is set manually or the best fit is calculated across all cells. Including non-normal cells increases the overdispersion, making downstream calls of which cells are cancer-derived more conservative. The expected allelic ratio at a SNP $s$ is then given by

$$r_s(f) = \left(\frac{f\rho_s}{f\rho_s + (1-f)(1-\rho_s)}\right)(1 - 2\epsilon_s) + \epsilon_s \quad (1)$$

where $f$ is the number of major copies of the segment as a fraction of the total (i.e. 0.5 for diploid, 2/3 for a gain of one copy, 1 for the loss of one copy), $\rho_s$ is the ASE ratio (i.e. 0.5 for no ASE) and $\epsilon_s$ is the site-specific error rate. Using this ratio, the likelihood of a region $R$ having a major allele fraction $f$ is given by

$$P_R(\mathrm{data} \mid f) = \prod_{s \in R} \binom{m_s + n_s}{m_s} \frac{B\!\left(m_s + r_s(f)\frac{1-\phi}{\phi},\; n_s + \big(1 - r_s(f)\big)\frac{1-\phi}{\phi}\right)}{B\!\left(r_s(f)\frac{1-\phi}{\phi},\; \big(1 - r_s(f)\big)\frac{1-\phi}{\phi}\right)} \quad (2)$$

where $m_s$ and $n_s$ are the numbers of counts at SNP $s$ from the major and minor allele respectively, $\phi$ is the previously estimated overdispersion, and $B$ is the standard beta function. Note that the above is simply a beta-binomial likelihood, re-parameterised in terms of the mean of the beta distribution, $r_s(f)$, and its overdispersion, $\phi$. The product is taken across all SNPs that lie within the region $R$. To obtain the total likelihood that each cell is cancer-derived, we then take the product across all regions with copy-number change, setting $f$ equal to the implied copy-number fraction in each region, $a_R$. That is,

$$P(\mathrm{data} \mid \mathrm{cancer}) = \prod_R P_R(\mathrm{data} \mid f = a_R) \quad (3)$$

where $a_R = 1$ in regions of loss of heterozygosity, $a_R = 2/3$ in regions where one copy is gained, and so on. By contrast, the likelihood of the cell being normal is given by setting $f = 0.5$ for all regions, that is

$$P(\mathrm{data} \mid \mathrm{normal}) = \prod_R P_R(\mathrm{data} \mid f = 0.5) \quad (4)$$

Finally, the posterior probability of a cell being cancer-derived is calculated assuming a flat prior as

$$P(\mathrm{cancer} \mid \mathrm{data}) = \frac{P(\mathrm{data} \mid \mathrm{cancer})}{P(\mathrm{data} \mid \mathrm{cancer}) + P(\mathrm{data} \mid \mathrm{normal})} \quad (5)$$

Each cell is then assigned as cancer where $P(\mathrm{cancer} \mid \mathrm{data})$ exceeds 0.99, normal where it is below 0.01, and unassigned otherwise.
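To make equations (1)–(5) concrete, the following is a minimal R sketch of the per-cell posterior for a single copy-number segment, assuming aggregated major/minor counts; it is illustrative code following the formulas above, not the alleleIntegrator implementation:

```r
# Expected allelic ratio, equation (1)
r_expected <- function(f, rho = 0.5, eps = 0.01) {
  (f * rho / (f * rho + (1 - f) * (1 - rho))) * (1 - 2 * eps) + eps
}

# Beta-binomial log-likelihood of a segment, equation (2); m and n are
# vectors of major/minor counts at each SNP, phi is the overdispersion
loglik_segment <- function(m, n, f, phi = 0.05, rho = 0.5, eps = 0.01) {
  r <- r_expected(f, rho, eps)
  a <- r * (1 - phi) / phi
  b <- (1 - r) * (1 - phi) / phi
  sum(lchoose(m + n, m) + lbeta(m + a, n + b) - lbeta(a, b))
}

# Posterior probability of the cancer genotype, equations (3)-(5), with a
# flat prior; f_cancer is the major-allele fraction implied by the CN state
posterior_cancer <- function(m, n, f_cancer) {
  ll_cancer <- loglik_segment(m, n, f_cancer)
  ll_normal <- loglik_segment(m, n, 0.5)
  1 / (1 + exp(ll_normal - ll_cancer))
}

# Example: 12 major and 1 minor reads in a region of LoH (f = 1)
posterior_cancer(m = 12, n = 1, f_cancer = 1)  # close to 1, i.e. cancer
```

For multiple altered regions, the log-likelihoods would simply be summed across segments before computing the posterior, exactly as in equations (3) and (4).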
Statistics and reproducibility Statistical analysis was performed as described elsewhere in the methods. Samples for benchmarking were chosen to cover a broad range of cancer types, and biological replicates (different cancers of the same type) and technical replicates (multiple single cell transcriptomics reactions from the same individual) were generated wherever possible. Ethics approval Human tumour tissues were collected through studies approved by UK NHS research ethics committees. Patients or guardians provided informed written consent for participation in this study as stipulated by the study protocols. This study has NHS National Research Ethics Service reference 16/EE/0394 (paediatric tissues). 10X single cell sequencing of fresh tissue and bulk sequencing of DNA Fresh tissues were processed to generate single-cell suspensions for processing on the Chromium 10X controller (V2/3 3′ chemistry), as previously described 8 . Libraries were produced according to the manufacturer’s instructions and sequenced on an Illumina HiSeq4000 device. Sequencing of bulk DNA was performed as previously described 8 . Data QC, clustering, and visualisation We used R (v4.0.4) and Seurat (v4.0.3) for these analyses. Cells with <200 genes, <600 UMIs, a mitochondrial fraction exceeding 20% (30% for renal cell carcinoma (RCC) normal tissue), or a Scrublet 15 doublet score >0.5 were excluded. High resolution clusters (resolution = 10) with >50% of cells failing QC were also excluded. Data were log normalised and scaled, and principal components were calculated from highly variable genes following the standard Seurat workflow. Louvain clustering was performed with resolution 1 and a uniform manifold approximation and projection (UMAP) calculated, with the number of principal components used for each dataset as follows: 25 for RCC, 30 for Ewing’s sarcoma, 40 for Wilms tumour, 50 for atypical teratoid rhabdoid tumour (AT/RT), and 55 for neuroblastoma (NB). Finally, cells from the RCC and NB datasets were labelled using the published annotations; leucocytes, endothelium, mesenchyme (in NB only), proximal tubular cells (in RCC only), and tumour cells were retained. Annotation of the Wilms tumour, Ewing’s sarcoma and AT/RT datasets was based on the expression of known marker genes for different cell types, including tumour populations, curated from the literature. Coverage of point mutations and heterozygous SNPs For all samples, heterozygous SNPs were identified (as described above) and point mutations were called against the GRCh37d5 reference as previously described 8 , 9 . Coordinates were lifted over to GRCh38 and counts covering point mutations and SNPs were calculated for each transcriptome using alleleCount. Calling copy-number aberrations Clonal and sub-clonal copy-number profiles were determined using Battenberg 13 (v2.2.5). Segments shorter than 1 Mb or 10% of the chromosome were removed as likely artefacts. Chromosomes were set to the same state where ≥90% had a particular change, and gaps were filled where consecutive segments had the same copy-number state and were <1 Mb apart. Sub-clonal copy-number segments were defined as those with a second copy-number state detected in a smaller fraction of tumour cells (≥10% but <50%) and that were longer than 20 Mb.
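As a minimal sketch of the QC and clustering workflow described in this section (Seurat v4 conventions; thresholds taken from the text, object and input names illustrative; the Scrublet doublet filtering, done separately in the text, is omitted):

```r
# Single cell QC, normalisation, clustering, and UMAP with Seurat.
library(Seurat)
srat <- CreateSeuratObject(counts = counts_matrix)
srat[["percent.mt"]] <- PercentageFeatureSet(srat, pattern = "^MT-")
# Exclude cells with <200 genes, <600 UMIs, or >20% mitochondrial reads
srat <- subset(srat, nFeature_RNA >= 200 & nCount_RNA >= 600 & percent.mt <= 20)
srat <- NormalizeData(srat)                  # log normalisation
srat <- FindVariableFeatures(srat)
srat <- ScaleData(srat)
srat <- RunPCA(srat)
srat <- FindNeighbors(srat, dims = 1:55)     # e.g. 55 PCs for neuroblastoma
srat <- FindClusters(srat, resolution = 1)   # Louvain clustering
srat <- RunUMAP(srat, dims = 1:55)
```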
Evaluating accuracy of transcriptome classification inferCNV 1 (v1.6.0) and CopyKAT 2 (v1.0.4) were run with default parameters per sample, using cellranger filtered counts and with 100% of leucocytes specified as normal. Both methods generate expression profiles on a log scale for informative cells within each sample. In addition, CopyKAT classified these cells as diploid, aneuploid, or uncalled. For each sample, the expression ratio per 5 Mb window was averaged by cell type. A range of thresholds was used to quantify copy-number call accuracy and construct a receiver operating characteristic (ROC) curve. To assess the correlation between average expression ratios calculated by CopyKAT and inferCNV, a Pearson correlation coefficient was calculated using R. To visualise the allelic ratio in each sample, allele-specific counts were aggregated by cell type into bins chosen such that each bin contained at least 500 counts. Analysis of PD46693 subclones Cells with posterior probability >0.99 of loss of heterozygosity of chromosome 4 in PD46693 were assigned to the subclone, those with posterior probability <0.01 were assigned to the major clone, and all others were called ambiguous. Differential gene expression was performed using negative binomial regression in DESeq2 16 , treating cells in the major/minor clone as replicates and removing genes with ≤10 reads. We separately tested all genes and just transcription factors for significance, using a multiple-hypothesis-corrected P value cut-off of 0.01. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability WGS-derived clonal and sub-clonal copy-number profiles for individual samples, identified by Battenberg 13 , can be found in Supplementary Data 1 . Previously published data were obtained for renal cell carcinoma 8 , neuroblastoma 9 , and Wilms tumour 10 . Newly generated data for atypical teratoid rhabdoid tumour (AT/RT) and Ewing’s sarcoma have been deposited in the EGA under accession code EGAD00001009005. Numerical source data underlying figures can be found in Supplementary Data 2 . Code availability The R package, alleleIntegrator, is available at . All code used to reproduce the analysis and figures described in this paper is available at . The exact versions of both the R package and the analysis code used for this paper are also available from Zenodo 17 , 18 .
A new method of separating cancer cells from non-cancer cells has been developed by researchers at the Wellcome Sanger Institute, in a boost for those working to better understand cancer biology using single-cell mRNA sequencing. The study, published today in Communications Biology, improves on existing methods to identify which cells in a sample are cancerous and provides crucial data on the microenvironment of tumors. The software is openly available for researchers around the world to apply to their own data, advancing the effectiveness of single-cell sequencing to understand cancer. Single-cell mRNA analysis of cancer cells is one of the leading-edge techniques being used to better understand cancer biology. The data generated can be used to try to disrupt cancers with drugs or work out how cancers arise in the first place. A fundamental step in this process is separating cancer and non-cancer cells, but this isn't always an easy task. As well as differences between the many types of cancer, there will also be molecular differences between cancer cells of the same type within a single tumor. Currently, the best method of doing this is to measure the average gene expression of cells in the sample, with higher or lower expression used to distinguish cancer cells from healthy cells. But this method can lead to false conclusions. In this new study, researchers at the Wellcome Sanger Institute performed whole genome sequencing and single-cell mRNA sequencing on samples collected by Great Ormond Street Hospital (GOSH). By locating imbalances of alleles in these data, which indicate copy number changes in the genome, the team was able to identify cancer cells more reliably than with previous methods. This approach will primarily be useful for validating new cancer cell types and better understanding the microenvironment of tumor tissue. "Being able to know how the transcriptome is different in cells with aberrant genomes, such as those found in cancers, is valuable knowledge and will expand the questions that we can answer using single-cell sequencing," says Dr. Matt Young. The method, named alleleIntegrator, is available as a software package for researchers across the world to use.
10.1038/s42003-022-03808-9
Medicine
Study finds genetic markers may predict severity of COVID-19 infection
Iain R. Konigsberg et al, Host methylation predicts SARS-CoV-2 infection and clinical outcome, Communications Medicine (2021). DOI: 10.1038/s43856-021-00042-y
http://dx.doi.org/10.1038/s43856-021-00042-y
https://medicalxpress.com/news/2021-10-genetic-markers-severity-covid-infection.html
Abstract Background Since the onset of the SARS-CoV-2 pandemic, most clinical testing has focused on RT-PCR 1 . Host epigenome manipulation following coronavirus infection 2 , 3 , 4 suggests that DNA methylation signatures may differentiate patients with SARS-CoV-2 infection from uninfected individuals, and help predict COVID-19 disease severity, even at initial presentation. Methods We customized Illumina’s Infinium MethylationEPIC array to enhance immune response detection and profiled peripheral blood samples from 164 COVID-19 patients with longitudinal measurements of disease severity and 296 patient controls. Results Epigenome-wide association analysis revealed 13,033 genome-wide significant methylation sites for case-vs-control status. Genes and pathways involved in interferon signaling and viral response were significantly enriched among differentially methylated sites. We observe highly significant associations at genes previously reported in genetic association studies (e.g. IRF7 , OAS1 ). Using machine learning techniques, models built using sparse regression yielded highly predictive findings: the cross-validated best-fit AUC was 93.6% for case-vs-control status, and 79.1%, 80.8%, and 84.4% for hospitalization, ICU admission, and progression to death, respectively. Conclusions In summary, the strong COVID-19-specific epigenetic signature in peripheral blood, driven by key immune-related pathways related to infection status, disease severity, and clinical deterioration, provides insights useful for the diagnosis and prognosis of patients with viral infections. Plain language summary Viral infections affect the body in many ways, including via changes to the epigenome, the sum of chemical modifications to an individual’s collection of genes that affect gene activity. Here, we analyzed the epigenome in blood samples from people with and without COVID-19 to determine whether we could find changes consistent with SARS-CoV-2 infection. Using a combination of statistical and machine learning techniques, we identify markers of SARS-CoV-2 infection as well as of severity and progression of COVID-19 disease. These signals of disease progression were present from the initial blood draw, taken when patients first arrived at the hospital. Together, these approaches demonstrate the potential of measuring the epigenome for monitoring SARS-CoV-2 status and severity. Introduction Coronaviruses (CoV) comprise a large group of human and animal pathogens, including the novel enveloped RNA betacoronavirus referred to as severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) 5 . This pathogen is associated with coronavirus disease 2019 (COVID-19), first identified in Wuhan, China in 2019 6 and declared a pandemic on March 11, 2020 7 . Since the onset of the pandemic, multiple tests for diagnosing COVID-19 have been launched, including real-time reverse transcriptase–polymerase chain reaction (RT-PCR), specific antibody detection, and next-generation sequencing assays that query for current or past infections 1 . With the exception of next-generation sequencing, which can discern viral subtypes, most diagnostic tests are viral strain dependent, can carry a high false-negative rate, do not discern whether the virus is viable and replicating, and do not predict clinical outcomes of infection 1 , 8 , 9 . For example, pre-symptomatic patients may test negative 10 , 11 , while patients who have recovered may continue to test positive though they are no longer infectious 12 .
Accurate diagnostics are urgently required to control continued communal spread, to better understand the host response, and to support the development of vaccines and antivirals 13 . Individuals infected with SARS-CoV-2 have a variable course of infection, ranging from asymptomatic to death. Although the fatality rate varies tremendously according to demographic characteristics and co-morbidities 14 , the U.S. ranks as one of the countries with the highest COVID-19 mortality rates 15 . Identification of which SARS-CoV-2-infected patients are most likely to develop severe disease would enable clinicians to triage patients via augmented clinical decision support. Having more information on disease severity has recently become critical due to a widespread lack of hospital and intensive care unit (ICU) capacity, necessitating difficult decisions about resource triage. To our knowledge, no test can predict COVID-19 clinical course or severity, although cytokine abundance ratios after hospitalization have been proposed as a prognostic indicator of severe outcomes 16 . There is considerable evidence that enveloped RNA viruses such as CoV can manipulate the host’s epigenome via evolved functions that antagonize and regulate the host innate immune antiviral defense processes 2 , 3 , specifically via DNA methylation. Viral-mediated antagonism of antigen-presentation gene expression in the case of Middle East respiratory syndrome coronavirus (MERS-CoV) was shown to occur via DNA methylation 4 . DNA methylation changes at cytosine-phosphate-guanine (CpG) sites have been increasingly leveraged in the emerging field of clinical epigenetics to characterize unique epigenetic signatures that diagnose disease. To date, considerable success has been demonstrated in developing highly accurate and robust machine learning (ML)-based disease classifiers that use DNA methylation patterns to differentiate Mendelian disorders 17 , behavior disorders 18 , coronary artery disease 19 , and some cancers 20 , 21 , 22 . Consequently, integration of a methylation-based disease classification can result in meaningful improvement in clinical practice 23 , 24 . With the goal of leveraging Illumina’s Infinium MethylationEPIC Array to classify differential methylation signatures of SARS-CoV-2-positive (hereafter referred to as SARS-CoV-2+, regardless of additional symptoms) and control peripheral blood DNA samples (either confirmed SARS-CoV-2 negative or collected prior to the SARS-CoV-2 pandemic), we conducted this study to determine whether DNA methylation patterns in whole blood could differentiate SARS-CoV-2-infected patients from non-infected patients. Our secondary objective was to determine whether DNA methylation patterns could identify patients with SARS-CoV-2 infection who go on to develop severe disease. In this study, we identified a strong COVID-19-specific epigenetic signature in peripheral blood driven by key immune-related pathways related to SARS-CoV-2 infection status, disease severity, and clinical deterioration. Methods Source of data This protocol was reviewed and approved by the Colorado Multiple Institutional Review Board (COMIRB) and the research adheres to the ethical principles of research outlined in the U.S. Federal Policy for the Protection of Human Subjects.
SARS-CoV-2+ patients were defined as those who tested positive for SARS-CoV-2 infection via a routine diagnostic RT-PCR assay, performed on a nasopharyngeal swab collected in viral transport media, in the Biobank at the Colorado Center for Personalized Medicine (Thermo Fisher Scientific, Waltham, MA) or in the UCHealth University of Colorado Hospital Clinical Laboratory (Roche Diagnostics, Indianapolis, IN); controls were defined as those who tested negative. Peripheral blood DNA samples were collected in EDTA tubes from patients seen at the UCHealth University of Colorado Hospital and tested for SARS-CoV-2 epigenetic signatures starting on March 1, 2020. Blood specimens were collected from patients consented to the University of Colorado COVID-19 Biorepository ( ) or the University of Colorado Emergency Medicine Specimen Bank (EMSB) 25 . Control subjects included patients from each study who tested negative for SARS-CoV-2 infection during the index visit. Through the University of Colorado COVID-19 Biorepository and the EMSB, tested patients were consented for blood collection and data abstraction from their electronic health record (EHR). Data obtained from EHR abstraction included demographics, past medical history, laboratory testing (including SARS-CoV-2), treatments, vital signs, hospital disposition, and clinical outcomes. In addition, previously collected samples from patients with acute upper respiratory viral infections (SARS-CoV-2 negative/pan-negative for upper respiratory viral infections/positive for non-SARS-CoV-2 upper respiratory viral infections) between February 5, 2018 and January 1, 2020 were obtained through the EMSB as SARS-CoV-2-negative controls. Additional biospecimens included discarded clinical samples from patients not approached for biorepository enrollment through the UCHealth University of Colorado Hospital Clinical Laboratory. Discarded samples were linked to a limited EHR dataset through the Colorado Center for Personalized Medicine’s health data warehouse, Health Data Compass, and then deidentified. The limited dataset included age, gender, race, ethnicity, viral test status (SARS-CoV-2 and other upper respiratory viruses), and clinical outcomes. The use of discarded samples and accompanying limited datasets was determined by COMIRB to be exempt from Institutional Review Board approval and the need for informed consent. All samples were frozen at −20 °C after collection and prior to processing for methylation analyses. Customization of the Infinium MethylationEPIC Array Following a literature review of known epigenetic associations with respiratory viral infections from recent CoV outbreaks, we selected additional content to enrich Illumina’s Infinium MethylationEPIC Array 26 . We specifically enriched for known HLA alleles, accounting for known genomic variation 27 , as well as multiple alternative haplotypes and unpublished reference sequences spanning the major histocompatibility complex genomic region, the natural killer cell immunoreceptor, and other immunogenetic loci (e.g., cytokines, interferon response genes), to enhance the sensitivity of immune response detection. The custom panel targeted 262 genes with 7831 additional probes. While the majority of the additional probes targeted unique sequences within the genome, a number of probes were intentionally designed to target genomic sequences with a limited degree of repetitiveness. The list of genes and the Illumina IDs for the probes that target these genes are given in Supplementary Data 1 .
Methylation array and quality assessment DNA extraction Biospecimens were accessioned and tracked via the Colorado Anschutz Research Genetics Organization (CARGO) laboratory information management system (LIMS). Genomic DNA was extracted from SARS-CoV-2+ peripheral blood on the bead-based, automated Maxwell(R) RSC System (Promega), in a biological safety cabinet in compliance with CDC safety guidelines and procedures for handling SARS-CoV-2 biospecimens, and from controls on the Autogen FlexSTAR+ using Autogen’s FlexiGene Blood Extraction Kit (Holliston, MA). All DNA samples were quantified using both absorbance (NanoDrop 2000; Thermo Fisher Scientific, Waltham, MA) and fluorescence-based methods (Qubit; Thermo Fisher Scientific, Waltham, MA) with standard dyes selective for double-stranded DNA, minimizing the effects of contaminants on quantitation. DNA quality was assessed using an Agilent TapeStation (Agilent, Santa Clara, CA). Samples were then uploaded to CARGO’s LIMS, barcoded, and labeled. Bisulfite conversion and amplification Purified DNA samples were processed using the Zymo EZ-96 DNA Methylation bisulfite conversion kit (Zymo, Irvine, CA) as described previously 28 . The product of this process contains uracil at positions where cytosine was previously unmethylated. The bisulfite-treated DNA was subjected to whole-genome amplification via random hexamer priming and Phi29 DNA polymerase, and the amplification products were then enzymatically fragmented, purified from dNTPs, primers, and enzymes, and applied to the Illumina chip as described elsewhere 29 . Hybridization and single-base extension The bisulfite-converted, amplified DNA products were denatured into single strands and hybridized to the customized Infinium 850K BeadChip (EPIC+; Illumina Inc., San Diego, CA) via allele-specific annealing to either the methylation-specific or the non-methylation probe. Hybridization to the chip was followed by single-base extension with labeled di-deoxynucleotides according to Illumina’s Infinium protocol at the CARGO laboratory 28 . Fluorescence staining and scanning of chip The hybridized BeadChips were stained, washed, and scanned to measure the intensities of the unmethylated and methylated bead types using Illumina’s iScan System. Data processing and quality control (QC) IDAT files were processed, filtered, and normalized using the SeSAMe R package 30 . The Type I probe channel was empirically determined from signal intensities. Probe detection P values (representing the ability to differentiate true signal from background fluorescence) were calculated for each color channel using pOOBAH, which leverages the fluorescence of out-of-band (OOB) probes. Normalization was performed using noob, which uses OOB probes to perform a normal-exponential deconvolution of fluorescent intensities 31 . Finally, a common dye bias that results in greater intensities in the red color channel was corrected to ensure that the distributions of intensities in the two color channels were equal. Probes with detection P values >0.05 were removed, as well as probes overlapping single-nucleotide polymorphisms with a global minor allele frequency >1% in dbSNP, probes with poor mapping, and probes containing non-unique sequence according to Zhou et al. 32 . Beta values were logit-transformed into M values for modeling. Probes with >25% missingness were removed. Remaining missing values were then imputed with the mean probe M value.
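A minimal sketch of this preprocessing, assuming SeSAMe's default pipeline covers the pOOBAH masking, noob normalisation, and dye-bias correction described above (paths illustrative; exact defaults may vary by package version):

```r
# IDAT processing with SeSAMe; openSesame() applies the package's default
# preparation and returns a probes x samples matrix of beta values.
library(sesame)
betas <- openSesame("path/to/idat_dir")

# Remove probes with >25% missingness, then logit-transform to M values
betas <- betas[rowMeans(is.na(betas)) <= 0.25, , drop = FALSE]
M <- log2(betas / (1 - betas))

# Impute remaining missing values with the mean M value of each probe
na_idx <- which(is.na(M), arr.ind = TRUE)
M[na_idx] <- rowMeans(M, na.rm = TRUE)[na_idx[, "row"]]
```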
Selection of discovery/training and testing cohorts and controls Case–control analyses were performed using the entire genotyped dataset passing epigenetics QC, with SARS-CoV-2 infection status determined as described above (see Fig. 1 for a summary of the workflow). Analyses were repeated including and excluding controls with other upper respiratory infections validated by clinical respiratory panels. Measurements of disease severity and progression (e.g., hospitalization, ICU admittance, ventilator use) were extracted from chart review within the UCHealth EHR. Fig. 1: Flowchart of the study sample collection. Six hundred and forty-eight samples were collected for analysis, of which 644 were processed on MethylationEPIC arrays. Five hundred and twenty-five arrays passed quality control and were included in the final analysis. Control for batch effect and robustness of the identified epigenetic signatures To minimize possible batch effects and other sources of variability, samples were split into SARS-CoV-2+ and SARS-CoV-2-negative control sets, randomized within sets to account for unavailable phenotypes, and then distributed across chips. To reduce batch and plating effects, a minimum of two SARS-CoV-2+ and two SARS-CoV-2-negative control samples were run on each chip (12 chips per plate, 8 samples each), and positive/negative status was randomized across the chip. Epigenome-wide association study (EWAS) with COVID-19 disease status Preprocessing for association testing was performed using the GLINT 33 package, estimating components to adjust for population structure (EPISTRUCTURE 34 ); we used ReFACTor 35 to account for cell-type proportions in a data-driven fashion. The linear mixed-effects model in GLINT was fit to each probe, testing for differences based on COVID-19 disease status while correcting for age, sex, chip position, 6 ReFACTor components, 1 EPISTRUCTURE component, and a variance component representing individual covariance 36 . Enrichment of top hits in common databases was performed using enrichR 37 . Probes were sorted by adjusted P value, and the top 800 genes to which differentially methylated probes map were used as input for overrepresentation enrichment analysis within Gene Ontology (GO) categories, Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways, BioPlanet, and WikiPathways 38 , 39 , 40 , 41 . Probes were annotated to CpG island and genic regions using annotatr 42 . Clinical outcome stratification Clinical data were abstracted via detailed chart review for all EMSB patients. COVID-19 disease severity was determined by an ordered severity score of (1) discharged from the emergency department; (2) admitted to inpatient care; (3) progressed to ICU; and (4) death. We also determined a hospital duration variable, where individuals without a measured hospital stay (i.e., discharged from the emergency department) were assigned 0, and individuals who died were removed from the cohort for the length-of-stay analysis to minimize bias associated with the timing of decisions to withdraw care. Construction and validation of a prediction model Predictive modeling was performed using the Lasso 43 and Elastic Net 44 algorithms for sparse penalized regression modeling available in the glmnet software package 45 . For each prediction model, only autosomal methylation probes passing QC were included, to remove potential confounding from sex-linked chromosomes.
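As a minimal sketch of the cross-validated elastic net fit described in this section, where X is a samples × probes matrix of autosomal M values, y the case–control label, and X_holdout the held-out set (all names and the α grid are illustrative, not the exact pipeline):

```r
# Sweep the elastic net mixing weight alpha, pick lambda by tenfold
# cross-validation, and maximise the cross-validated AUC.
library(glmnet)
set.seed(1)
alphas  <- seq(0.01, 1, length.out = 10)
cv_fits <- lapply(alphas, function(a) {
  cv.glmnet(X, y, family = "binomial", alpha = a,
            type.measure = "auc", nfolds = 10)
})
best <- cv_fits[[which.max(vapply(cv_fits, function(f) max(f$cvm), numeric(1)))]]

# Out-of-sample "methylation score" from the best-fitting sparse model
score <- predict(best, newx = X_holdout, s = "lambda.min", type = "response")
```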
No demographic, clinical, or cell count variables were included in the predictive models, requiring the algorithm to select CpG sites with associations strong enough to surpass the level of penalization of the hyperparameters across the entire least angle regression path. For each trait of interest, a separate model was created, and best-fitting parameters were chosen after tenfold cross-validation, either by maximizing the area under the receiver-operator characteristic curve (AUC, for dichotomous traits) or by minimizing the mean-squared error (MSE, for quantitative traits). Each model was fit across a grid of parameters representing various strengths of penalization and combinations of L1 and L2 penalties under the weighted elastic net model. Both days of hospitalization and case severity were modeled as continuous outcomes. To assess performance for quantitative traits in a manner comparable to dichotomous traits, we swept across potential cutpoints to estimate AUCs for each newly derived dichotomous variable. While case–control status was the primary phenotype of interest, measures of severity were assessed in SARS-CoV-2+ cases only. To estimate the stability of parameter estimation, we performed 100 iterations of model training and testing. Within each iteration for case–control and severity outcomes, we employed tenfold cross-validation to derive the model, with a held-out set of 30% removed from training/testing to gauge out-of-sample performance of the best-fitting model. Our train/test and validation splits were created within each stratum to preserve representation across all outcomes and reflect the distribution across the total dataset. For hospitalization duration, the train/test/validation models were unstable in convergence, so we reverted to a train/test model using tenfold cross-validation within the default cv.glmnet() function. We assessed overall performance for the dichotomous COVID+/COVID− case–control status using the out-of-sample AUC, the F1 score (a measure of the relationship between precision and recall), the distribution of the best-fit λ penalty via cross-validation, and the number of probes chosen in the final model. For the quantitative outcomes, we assessed overall performance using the out-of-sample R 2 , the slope of the model, and the λ and number of probes. Finally, these were each stratified across the elastic net weights ( α ) from 0.01 to 1, representing the proportion of ridge (L2) vs Lasso (L1) penalty, to choose a final model. All models included a nonzero λ to encourage sparsity (an L2-only model would include predictions from the entire array). Final models described in the results were chosen based on best-performing (maximum R 2 or AUC) vs median values for each chosen set of hyperparameters. The final, out-of-sample best-fit prediction for each outcome was considered the “methylation score” used in downstream modeling, characterization of association, and determination of potential confounding with demographic and blood cell proportion characteristics. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Results Study cohort We identified 675 patients tested for either SARS-CoV-2 or other acute upper respiratory infections.
Of these, 164 were SARS-CoV-2+ by RT-PCR, 58 historical EMSB patients had positive (non-SARS-CoV-2) acute upper respiratory viral RT-PCR tests, 7 had positive (non-SARS-CoV-2) acute upper respiratory viral RT-PCR tests during the pandemic, and 296 were negative for all viral infections and thus served as controls. We excluded 32 samples derived from a run with failed hybridization and removed 8 duplicates, resulting in a final cohort of 525 (Fig. 1 ). Supplementary Table 1 summarizes the demographics and clinical outcomes of patients tested, including the proportion of patients with other acute upper respiratory infections. Incidences of non-SARS-CoV-2 respiratory infections are displayed in Supplementary Table 2 . The median time from sample collection to hospital admission was 0 days (interquartile range (IQR): 0, 1). In all, 83.4% of samples were collected on the day of admission, and only 8.7% were collected >5 days after hospital admission. Samples from SARS-CoV-2-positive patients were drawn with the first blood sample in the emergency department 83% of the time (median blood draw: 0 days, IQR: 0, 1 days); other samples drawn later in the hospital admission in this group were from patients who developed COVID-19 while admitted to the hospital. Samples from two SARS-CoV-2-positive patients were obtained 6 and 9 days prior to hospital admission. Samples from SARS-CoV-2-negative patients were drawn with the first blood sample in the emergency department 80% of the time (median blood draw: 0 days, IQR: 0, 2 days), and 95% were drawn within 7 days of hospital admission. No samples were obtained before hospital admission in the SARS-CoV-2-negative patients. Disease-specific DNA methylation signature and differentially methylated probes We first performed an EWAS to identify biological signals associated with COVID-19 disease status. After adjustment for age, sex, array position (batch effect), cell proportions via ReFACTor, and ancestry via EPISTRUCTURE components, EWAS of COVID-19 disease status in 164 SARS-CoV-2+ samples compared to 296 controls yielded 13,033 significant CpGs mapping to 6117 unique genes at a false discovery rate (FDR)-adjusted P value < 0.05 (Fig. 2 and Supplementary Data 2 ), with moderate inflation that is typical of EWAS 46 (Supplementary Fig. 1 ). In total, we observed 35 probes with an unadjusted P value < 10 −20 , and 183 with an unadjusted P value < 10 −10 . Significant probes overlap 1625 CpG islands and 1001 FANTOM5 47 enhancers (Supplementary Fig. 2 ). We observed that 52.1% of all significant probes are hypermethylated; however, 78% of the top 100 probes sorted by adjusted P value are hypomethylated (Fisher’s Exact Test P value = 9.46 × 10 −8 ). Custom probes on the EPIC+ chip are enriched among significant EWAS results ( P value = 9.94 × 10 −7 , Fisher’s Exact Test): specifically, 1.72% of EPIC probes are significant, as opposed to 2.51% of custom probes. Principal component analysis of top associations reveals clustering by COVID-19 disease status (Supplementary Fig. 3 ). Because of concerns that population admixture may confound results, the COVID-19 disease status EWAS was repeated with EHR-defined race and ethnicity as additional covariates beyond those modeled via EPISTRUCTURE and mixed-effects modeling. This had a minimal effect on results. Fig. 2: Differentially methylated CpGs associated with SARS-CoV-2 infection.
a Miami plot (top panel) of hypermethylated (top) and hypomethylated (bottom) probes in SARS-CoV-2+ compared to control samples. Significance lines represent the FDR-adjusted P value <0.05 threshold. b Volcano plot of significant (red; FDR-adjusted P value <0.05) CpG sites (blue CpG sites have FDR-adjusted P value >0.05). Change in percentage methylation on the x axis represents the difference in average beta value at a site between cases and controls. Probes for intergenic CpG sites do not have gene annotations. Data used to plot this figure are available as Supplementary Data 5. Top hypomethylated CpG sites show strong enrichment for interferon- and viral response-related pathways, including Type I Interferon Signaling Pathway (KEGG, adjusted P value = 7.40 × 10 −10 ) and Negative Regulation of Viral Genome Replication (GO:BP, adjusted P value = 1.93 × 10 −6 ; Supplementary Fig. 4a ). Hypermethylated CpG sites also show enrichment for relevant biological processes such as Focal Adhesion (GO:CC, adjusted P value = 0.0187; Supplementary Fig. 4b ). cg17114584, the third most significant probe, with an adjusted P value of 1.78 × 10 −43 , shows 16.9% hypomethylation in cases. This CpG is located in exon 6 of the interferon regulatory factor 7 gene ( IRF7 ). IRF7 encodes a transcription factor that regulates the expression of interferon-α and interferon-β, as well as interferon-stimulated genes. Other top CpGs are in genes relevant to viral response: OAS1 (2’-5’-oligoadenylate synthetase 1) is interferon-induced and activates RNase L, which degrades viral (and cellular) RNA (adjusted P value 1.05 × 10 −21 , 3.8% methylation change). MX1 encodes an interferon-induced GTPase that inhibits viral replication. DTX3L and PARP9 form a complex that is involved in interferon-mediated antiviral defenses. This complex has also been shown to promote M1 polarization in macrophages by preventing STAT1 phosphorylation 48 . IFIT3 encodes another interferon-induced antiviral protein. Overall, we observe strong hypomethylation of interferon- and viral response-related pathways, which is expected as these pathways are transcriptionally activated in SARS-CoV-2+ individuals 49 . Specificity of the COVID-19 disease signature from other respiratory infections We next compared 164 SARS-CoV-2+ samples to 65 samples with other upper respiratory infections to determine the specificity of the methylation signature to SARS-CoV-2. This analysis yielded 1501 significant CpGs (adjusted P value < 0.05) (Supplementary Data 3 ), of which 780 (52%) were present in the SARS-CoV-2+ vs control analysis (Fig. 3 ). Comparison of the 65 other (non-SARS-CoV-2) upper respiratory infection samples to controls yielded 516 significant CpGs (Supplementary Data 4 ), of which 116 (22%) were present in the SARS-CoV-2+ vs control analysis. Furthermore, examination of the strength of the signal demonstrates that the probes shared between the SARS-CoV-2+ vs control and SARS-CoV-2+ vs other upper respiratory infections analyses have low P values and high effect sizes, whereas this is not the case for probes shared by the SARS-CoV-2+ vs control and other upper respiratory infections vs control analyses (Supplementary Fig. 5a ). These comparisons suggest high specificity of the COVID-19 disease epigenetic signature. To further investigate this, we examined the significant CpGs from our COVID-19 disease vs control EWAS.
To further investigate this, we examined the significant CpGs from our SARS-CoV-2+ vs control EWAS. We observe the same trend: high correlation of effect sizes (methylation change) between the SARS-CoV-2+ vs control and SARS-CoV-2+ vs other respiratory infections analyses (Pearson R = 0.87; P < 2.2 × 10−16) and very low correlation of effect sizes between the SARS-CoV-2+ vs control and other upper respiratory infections vs control analyses (Pearson R = −0.027; P = 0.0022) (Supplementary Fig. 5b). While we do not have sufficient power to examine the specific viruses (other CoV, influenza, etc.), these results strongly point to the specificity of our COVID-19 disease epigenetic signature for detecting SARS-CoV-2 infection. Fig. 3: Overlap of differentially methylated CpGs between disease groups. Venn diagram of overlaps between the SARS-CoV-2+–Control EWAS (13,033 significant probes), the SARS-CoV-2+–other respiratory infection EWAS (1501 significant probes), and the other respiratory infection–Control EWAS (516 significant probes). Full size image Development and validation of a classification model for prediction of disease classes and disease severity To combine methylation data across the genome into a single predictor, we employed ML models of sparse regression trained via cross-validated glmnet 45, as described in "Methods." To assess model performance, the 460 subjects (SARS-CoV-2+ vs controls) in the testing cohort were supplied to the classification model, with prediction optimized following the approach defined in "Methods." Only methylation probes were used in feature selection. All models showed relative stability across iterations (Supplementary Fig. 6) and yielded sparse results. Details of each top model are available in Supplementary Table 3. The best-fitting model achieved a cross-validated AUC of 93.6% for detecting SARS-CoV-2 infection (Fig. 4a, b). Model performance was similar in females and males (93.7 and 93.5%, respectively). In addition, model performance was comparable in older and younger individuals (split at the median age of 56 years): 94.4 and 92.8%, respectively. Similarly, race/ethnicity information was not significantly correlated with the case–control score (all groups P > 0.05). When age and race/ethnicity categories were included in a multivariable model along with our prediction score, no additional covariates significantly predicted COVID-19 disease status (all other P > 0.4). Similarly, BMI was not associated (P ~ 0.4). Fig. 4: Performance of SARS-CoV-2 infection status and severity predictive models. a Out-of-sample case–control methylation score for all 460 individuals (164 SARS-CoV-2+, 296 SARS-CoV-2−) compared to case–control status, plotted by biological age. b Receiver-operating characteristic (ROC) curve for data in a. c ROC curve of cross-validated prediction of long hospital duration. d Violin and jittered scatter plots of severity methylation scores for each outcome in cases. Data used to plot this figure are available as Supplementary Data 6. Full size image
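The modeling step named above — sparse regression with cross-validated glmnet — can be sketched in a few lines of R. This is a minimal illustration on simulated data, not the study's tuned pipeline (object names hypothetical):

library(glmnet)

set.seed(3)
n <- 200; p <- 2000
X <- matrix(rnorm(n * p), n, p,
            dimnames = list(NULL, sprintf("cg%08d", 1:p)))
y <- factor(rbinom(n, 1, plogis(X[, 1] - X[, 2])),
            labels = c("control", "case"))

# 10-fold cross-validated lasso (alpha = 1) tuned on AUC
cvfit <- cv.glmnet(X, y, family = "binomial", alpha = 1,
                   type.measure = "auc", nfolds = 10)
max(cvfit$cvm)                               # cross-validated AUC

# Out-of-sample-style scoring and sparsity of the selected model
score <- predict(cvfit, X, s = "lambda.min", type = "response")
sum(coef(cvfit, s = "lambda.min") != 0) - 1  # CpGs retained (minus intercept)

The sparsity is the point: only a small panel of CpGs survives the penalty, which is what makes a targeted clinical array feasible.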
To determine the direct association of methylation with clinical outcomes, an additional logistic regression was performed for the subset of individuals with complete blood cell count (CBC) data (341 individuals total). The inclusion of additional blood cell count data did not impact the association between the methylation score and outcome (P value < 2 × 10−16 with or without adjustment), and in the larger CBC model (including total hematocrit, white blood cell count, platelets, neutrophils, lymphocytes, monocytes, eosinophils, and basophils), only hematocrit (P ~ 0.05) approached nominal significance. The inclusion of hematocrit moderately improved the Akaike information criterion in logistic regression, but with a limited increase in multivariable modeling AUC (93.6 vs 94.1%). Severity analysis focused on hospital length of stay (median duration: 6 days, IQR 3–11, maximum 53 days), as well as on the spectrum of severity (34 discharged from the emergency room, 84 hospitalized, 35 admitted to the ICU, and 11 deaths). The best-fitting model for hospital duration had a cutpoint at 20 days, yielding an AUC of 79.6% (14 individuals with longer stays vs 135 with shorter stays, including those who spent 0 days in hospital) (Fig. 4c). Dichotomizing the best-fit severity measurements yields AUCs of 79.1%, 80.8%, and 84.4% for hospital admission vs discharge, floor hospital admission vs ICU, and survival vs death, respectively (Fig. 4d).
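Each severity AUC above is an ROC computed from the severity methylation score against a dichotomized outcome. A brief sketch with the pROC package (simulated data; pROC is our illustrative choice here, not necessarily what the authors used):

library(pROC)

set.seed(4)
n <- 149
long_stay  <- rbinom(n, 1, 0.1)                  # 1 = stay > 20 days
meth_score <- rnorm(n, mean = long_stay * 1.2)   # severity methylation score

roc_obj <- roc(response = long_stay, predictor = meth_score)
auc(roc_obj)               # area under the ROC curve, as in Fig. 4c
plot(roc_obj)              # ROC curve
coords(roc_obj, "best")    # threshold maximizing Youden's J

The same computation, repeated for admission vs discharge, floor vs ICU, and survival vs death, yields the three AUCs reported above.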
Discussion Here we report DNA methylation profiling, in conjunction with analysis using ML techniques, to identify a SARS-CoV-2-specific epigenetic signature in peripheral blood from a large cohort of individuals tested using conventional RT-PCR technology. We also describe the development of a classification algorithm that has high sensitivity and specificity in predicting infection and in-hospital clinical deterioration, and that confidently rules out SARS-CoV-2 infection in healthy individuals. While any predictive signal invites concern of potential confounding, the methylation signature we observe (derived solely from CpGs, without any clinical or demographic information) is not driven by confounding from demographics or typical laboratory measurements (e.g., blood cell counts, BMI). Our findings suggest that measurement of methylation signals that arise during and after SARS-CoV-2 infection may provide clinicians the ability to detect viral infection as well as predict patient clinical course after viral challenge. Unlike sequencing, RT-PCR, and antibody tests, the methylation array is able to predict the severity of SARS-CoV-2 infection and ultimately could provide clinicians with information on how to manage patients infected with SARS-CoV-2. Our results support the hypothesis that the host epigenome, as measured in peripheral blood, is modified by SARS-CoV-2 infection and can be used both to identify novel biology and to support clinical diagnosis, prognosis, and triage. Despite its being a heterogeneous tissue, we relied on peripheral blood as the target tissue because it has proven to be a reliable source for generating epigenetic signatures and disease classifiers in other settings 50, 51, 52, 53, 54, 55, 56. In this study, we observed many methylation changes that are, on average, >10% differentially methylated in the SARS-CoV-2+ group, including in the interferon-related genes IRF7 and MX1. These effect sizes are much larger than those typically observed in peripheral blood EWAS 57 and are comparable to the effect sizes that underpin the clinical utility of epigenetics in cancer 22. We did not observe confounding by cell proportions, measured by CBC from the EHR, providing strong support for the epigenetic signature of SARS-CoV-2. Although cell-type heterogeneity can be a strong confounder in epigenetic studies 58, 59, 60, we did not pursue adjustment for cell proportions beyond the ReFACTor adjustment 35, because our primary objective was to develop a COVID-19 disease-specific diagnostic methylation platform rather than to interrogate the underlying pathology. To validate the customized EPIC methylation platform as a reliable tool for the clinical diagnosis of COVID-19 disease, we performed an EWAS with SARS-CoV-2 infection status. We observed that the epigenetic signature of SARS-CoV-2 infection is enriched for pathways related to host viral response, and specifically for Type I Interferon signaling, which is a hallmark of the host response to this virus 61. Our findings of altered DNA methylation in interferon response genes are in concordance with published results showing that SARS-CoV and MERS-CoV alter the expression of interferon response genes through changes in histone modifications 2, 3. One of the most significant probes (adjusted P = 1.78 × 10−43, 16.9% hypomethylation) is located in the gene encoding IRF7; loss-of-function variants in 13 genes including IRF7 were recently found to be associated with life-threatening COVID-19-associated pneumonia 62. Another interferon-induced gene, OAS1, was similarly significant (adjusted P value = 1.05 × 10−21, 3.8% methylation change). In a recent GWAS on critical illness due to SARS-CoV-2, significant and replicated associations were observed for variants in the OAS gene cluster, which includes OAS1 63 and whose variants had previously been associated with SARS-CoV infection in candidate gene studies 61, 64. Also, a Mendelian randomization study recently showed that increased circulating OAS1 protein levels were associated with reduced SARS-CoV-2 susceptibility and disease severity 65. Collectively, published genomics studies support several of the strongest associations observed in our study. Previous work also demonstrated that viruses that cause severe disease (e.g., MERS-CoV, H5N1) alter the host response by changing the methylation landscape of antigen-presenting genes in the HLA region 4. While we did not observe genome-wide significant signals at classical HLA alleles, we observed six FDR q < 0.05 probes in the region, in HLA-V, HLA-DOA, HLA-DQA1, HLA-DQA2, and HLA-DRA, albeit with attenuated significance compared to the top CpGs (minimum q ~ 0.0109), suggesting that the mechanism of host manipulation by SARS-CoV-2 may be different. However, these results should be interpreted with caution, as interrogation of the HLA region is complex; HLA-V, for example, is a pseudogene 66. As the signatures identified in this study appear to be reactive to the disease, aspects of the disease process are expected to impact these results. Namely, we anticipate these changes to be time-sensitive, as the infection will need to have spread enough to induce methylation changes. Similarly, our case–control variables were defined by RT-PCR, which can carry a high false negative rate depending on the stage of infection and timing of sample collection 9, and this may have reduced the classification accuracy. However, we have follow-up EHR information for the patients in this cohort, which minimizes the risk of misclassification bias.
We do not expect this potential confounder to affect the measures of severity used in this study, as these were determined directly from chart review, but we acknowledge that, for the initial analysis, the number of cases may have limited the statistical power and prognostic ability of ML. With additional cases that account for inherent genetic variability within the population, methylation patterns will become more refined, and the AUC of these ML models for predicting disease severity is likely to increase. While "duration of hospital stay" may not be as immediately actionable as predicting ICU admittance or ventilator use, and it is confounded by pre-existing frailty, social support (or lack thereof), socio-economic status, and the need for ongoing care once the acute illness has receded, the increased variability of the continuous outcome provides improved signal, as observed both in our EWAS and in our ML modeling. For this analysis, the 11 individuals who died were removed from the duration analyses, as their length of stay would be difficult to compare to that of those who survived. Although the emerging field of epigenetics has demonstrated actionable classification with much smaller sample sizes than traditional GWAS requires in other common disease domains 67, we recognize that additional cases, and in particular a better understanding of the less-severe end of the spectrum (which is likely to be under-reported in data from health systems), will improve our understanding of outcomes across the spectrum of disease severity. We note that, even with our limited sample sizes, the AUCs for ICU admittance still indicate there is signal that can be resolved through future collections. Another limitation of our work is the specificity of the epigenetic signature to SARS-CoV-2 over other respiratory infections. Initial targeted epigenetic analyses demonstrate a trend toward differential methylation, though these findings are limited by low numbers. Currently, we are targeting the collection of biospecimens from patients with respiratory infections other than SARS-CoV-2 for these follow-up studies. Researchers have previously compared the robustness of DNA methylation profiling vs RNA transcriptome profiling in developing classifiers for different disease states 24, 68, 69, 70. One of the advantages of DNA methylation analysis compared to RNA analysis arises from the relative stability of deoxyribonucleic acid over ribonucleic acid 9, 71. The inherent instability of RNA, due to its 2'-OH group and the ubiquitous presence of ribonucleases, requires the use of plasticware, buffers, and processing reagents that are devoid of chemical and enzymatic species that stimulate RNA hydrolysis. Contamination with even a small amount of ribonuclease can degrade RNA samples to the degree that they cannot be analyzed. The strong signature of viral-driven epigenetic changes may make it possible to detect SARS-CoV-2 infection in patients who never develop symptoms (asymptomatic) and in patients who are not yet symptomatic (pre-symptomatic) 72. While asymptomatic testing following exposure has increased in recent months, the current testing strategy in the U.S. still predominantly targets symptomatic patients, despite estimates that asymptomatic patients represent 40–45% of infected individuals 10, 72. Transmission during the incubation period has been reported, and the viral load of symptomatic and asymptomatic patients is similar 73, 74, 75, 76.
The relationship between SARS-CoV-2 viral shedding and risk of transmission is unclear, and the percentage of transmission attributable to asymptomatic or pre-symptomatic SARS-CoV-2 infection is unknown 77. We believe that the epigenetic platform may efficiently identify asymptomatic and pre-symptomatic infections, which, if applied broadly, may aid in limiting the spread of SARS-CoV-2. Due to the widespread occurrence of SARS-CoV-2 and its progression to COVID-19 disease, there is a need for scalable testing technologies that can be deployed at the national level for surveillance, screening, and prognosis for those infected. The purpose of this study was to identify high-confidence host methylation biomarkers that are able to indicate SARS-CoV-2 infection and predict the clinical course of the viral disease in a given patient. This study is a first step toward selecting biomarkers for inclusion on a high-throughput methylation beadchip array designed specifically for the clinical diagnosis of COVID-19 disease, one that is also cost-effective given the added value of predicting subsequent clinical outcomes. To that end, we focused on sparse predictive models. Notably, these models are not significantly confounded by demographics or blood cell count information, denoting their specificity to the patient's current infection and reducing concern of overfitting to one patient sub-population. These biomarkers can also be used for risk stratification of SARS-CoV-2-infected patients, an unmet need given that none of the existing testing modalities (nucleic acid amplification tests, antigen tests, serology/antibody tests) can achieve this level of specificity. By identifying DNA methylation patterns associated with critical illness, we contend that a methylation test can provide patient-specific treatment targets before critical illness ensues. Pre-emptive dexamethasone 11, 78, anticoagulation 12, or new pharmacologic targets may prevent mortality, guided by these epigenetic patterns. Although our findings must be complemented with further clinical assessment, our model demonstrates the capacity of methylation quantification to generate epigenetic signatures of the host response to SARS-CoV-2. The approach is scalable, may be able to confirm positive tests in asymptomatic patients and entire communities, and may ultimately help to differentially diagnose other viruses causing similar symptoms, all in a comprehensive, high-throughput manner. Data availability The datasets generated during the current study are available in the Gene Expression Omnibus repository (accession GSE167202) and include the original .idat array files and the final processed data matrix for DNA methylation analyses. Source data used to generate Figs. 2 and 4 are available as Supplementary Data 5 and 6. Code availability Raw array data were processed using SeSAMe 1.7.6 in R 4.0.1. The EWAS was carried out using GLINT 1.0.4 on the command line. Machine learning analyses were done using glmnet v2.0-18 and data.table v1.11.4 in R 3.5.1. Plotting and consolidation were done in R 4.1.0 using ggplot2 v3.3.3 and data.table v1.14.0. All packages are available through CRAN and Bioconductor.
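As an orientation to the processing chain named above, a minimal SeSAMe sketch in R for going from raw IDAT files to beta values and M-values (the directory path is hypothetical, and defaults may differ between current SeSAMe releases and the version 1.7.6 used in the study):

library(sesame)
# sesameData::sesameDataCache()  # one-time annotation download, if required

# openSesame() reads the IDAT pairs in a directory and returns a matrix
# of beta values with SeSAMe's default preprocessing applied
betas <- openSesame("idat/")

# M-values are a common transformation for downstream linear modeling
mvals <- log2(betas / (1 - betas))

From here, the beta/M-value matrix feeds the EWAS and glmnet steps sketched earlier.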
Scientists at the University of Colorado School of Medicine, along with colleagues at UCHealth University of Colorado Hospital, have discovered specific genetic biomarkers that not only show who is infected with COVID-19, but offer insights into how severe the disease might be, filling a major diagnostic gap. "I think this study is a tremendous proof-of-concept in the realm of COVID-19 testing, one that can be applied to other diseases," said the study's lead author, Kathleen Barnes, Ph.D., professor at the CU School of Medicine. "It's a major move forward in the world of precision medicine." The study, published Tuesday in the journal Communications Medicine, suggests that specific signals from a process called DNA methylation vary between those infected and those not infected with SARS-CoV-2. And they can indicate the severity of the disease even in the early stages. DNA methylation, critical in how cells function, is an epigenetic signaling tool that cells use to turn genes off. Any mistakes in the process can trigger a variety of diseases. Barnes believes that paying attention to these signals could help fill a needed gap in the current world of COVID testing. Most COVID-19 antigen or rapid tests are dependent on viral strains and can carry high false negative rates. They don't predict if the virus is viable and replicating, nor do they predict clinical outcomes, the study said. A pre-symptomatic patient may test negative for the SARS-CoV-2 virus while patients who have recovered may still test positive despite no longer being infectious. "Accurate diagnostics are urgently required to control continued communal spread, to better understand host response, and for the development of vaccines and antivirals," the study said. "Identification of which SARS-CoV-2 infected patients are most likely to develop severe disease would enable clinicians to triage patients via augmented clinical decision support." But the authors said they didn't know of any test that can predict the clinical course of COVID-19. With that in mind, they analyzed the epigenome in blood samples from people with and without COVID-19. They customized a tool from Illumina called the Infinium Methylation EPIC array to enhance immune response detection. Researchers then profiled peripheral blood samples from 164 COVID-19 patients and 296 control patients. The peripheral blood DNA samples were collected from patients seen at UCHealth and tested for SARS-CoV-2 epigenetic signatures starting March 1, 2020. Most blood specimens were collected in the University of Colorado Emergency Medicine Specimen Bank under the direction of study co-author Andrew Monte, MD, Ph.D., and passed on to the Colorado Anschutz Research Genetics Organization (CARGO). Additional specimens were obtained from patients who consented to the University of Colorado COVID-19 Biorepository. The researchers discovered specific genetic markers of SARS-CoV-2 infection along with indications of how severe the disease might be. "These signals of disease progression were present from the initial blood draw when first walking into the hospital," the study said. "Together, these approaches demonstrate the potential of measuring the epigenome for monitoring SARS-CoV-2 status and severity." According to Barnes, the findings could ultimately lead to a new and more accurate way to test for COVID-19. "We are exploring how this platform could add value to the COVID diagnostic world," she said. "We think it adds value to knowing which patients develop more serious disease.
This could tell you if you could ride out the infection or if it is likely to get worse."
10.1038/s43856-021-00042-y